Hi!
I see this... and it looks like there are 16 workqueues before nbd is
even used. Surely there are better ways to do that?
Best regards,
Pavel
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  257 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd0-recv
  260 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd1-recv
  263 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd2-recv
  266 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd3-recv
  269 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd4-recv
  272 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd5-recv
  275 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd6-recv
  278 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd7-recv
  281 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd8-recv
  284 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd9-recv
  287 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd10-recv
  290 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd11-recv
  293 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd12-recv
  296 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd13-recv
  299 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd14-recv
  302 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 nbd15-recv
--
People of Russia, stop Putin before his war on Ukraine escalates.
On Tue, Nov 22, 2022 at 12:56:41PM +0100, Pavel Machek wrote:
> Hi!
>
> I see this... and it looks like there are 16 workqueues before nbd is
> even used. Surely there are better ways to do that?
Yes, it would be nice to create a pool of workers that only spawns up
threads when actual parallel requests are made. Are you willing to
help write the patch?
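Roughly, I was thinking of something like the sketch below (untested, and
the names are only illustrative): a single shared receive queue for the
whole driver instead of one per device, letting the workqueue core spawn
workers only when work is actually queued. With WQ_MEM_RECLAIM you would
still pay for one rescuer thread, but one per driver rather than one per
device.

static struct workqueue_struct *nbd_recv_wq;    /* shared by all devices */

static int __init nbd_init(void)
{
        /*
         * Regular workers are created on demand by the workqueue core,
         * so an idle module costs one queue (and one rescuer, because
         * of WQ_MEM_RECLAIM) instead of one per device.
         */
        nbd_recv_wq = alloc_workqueue("nbd-recv",
                                      WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
        if (!nbd_recv_wq)
                return -ENOMEM;

        /* ... rest of the existing module init ... */
        return 0;
}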
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
Hi!
> > I see this... and it looks like there are 16 workqueues before nbd is
> > even used. Surely there are better ways to do that?
>
> Yes, it would be nice to create a pool of workers that only spawns up
> threads when actual parallel requests are made. Are you willing to
> help write the patch?
I was thinking more "only spawn a workqueue when nbd is opened" or so.
I have 16 of them, and I'm not using nbd. A workqueue per open device is
okay; the current situation... not so much.
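Something in this direction, maybe (completely untested sketch; the helper
name is made up, locking against concurrent opens and the teardown side
are left out, and the flags just mirror what I think the driver uses today):

/* Called from nbd_open(): create the per-device receive workqueue on
 * first open instead of at device allocation time. */
static int nbd_alloc_recv_workq(struct nbd_device *nbd)
{
        if (nbd->recv_workq)
                return 0;

        nbd->recv_workq = alloc_workqueue("nbd%d-recv",
                                          WQ_MEM_RECLAIM | WQ_HIGHPRI |
                                          WQ_UNBOUND, 0, nbd->index);
        return nbd->recv_workq ? 0 : -ENOMEM;
}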
Pavel
--
People of Russia, stop Putin before his war on Ukraine escalates.
Hi,
On 2022/11/24 6:01, Pavel Machek wrote:
> Hi!
>
>>> I see this... and it looks like there are 16 workqueues before nbd is
>>> even used. Surely there are better ways to do that?
>>
>> Yes, it would be nice to create a pool of workers that only spawns up
>> threads when actual parallel requests are made. Are you willing to
>> help write the patch?
>
> I was thinking more "only spawn a workqueue when nbd is opened" or so.
>
> I have 16 of them, and I'm not using nbd. A workqueue per open device is
> okay; the current situation... not so much.
You can take a look at this commit:
e2daec488c57 ("nbd: Fix hungtask when nbd_config_put")
The allocation of recv_workq was moved from start device to alloc device
to fix a hung task. You might need to be careful if you want to move this.
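For reference, the allocation now sits in nbd_dev_add() and looks roughly
like this (quoting from memory, so please check the tree):

        /* one queue per device, created up front at allocation time */
        nbd->recv_workq = alloc_workqueue("nbd%d-recv",
                                          WQ_MEM_RECLAIM | WQ_HIGHPRI |
                                          WQ_UNBOUND, 0, nbd->index);

WQ_MEM_RECLAIM is what keeps a dedicated "nbd%d-recv" rescuer thread
around per queue, which is why every allocated device shows up in top
even when it is idle.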
Thanks,
Kuai
>
> Pavel
>
On Thu 2022-11-24 09:17:51, Yu Kuai wrote:
> Hi,
>
> On 2022/11/24 6:01, Pavel Machek wrote:
> > Hi!
> >
> > > > I see this... and it looks like there are 16 workqueues before nbd is
> > > > even used. Surely there are better ways to do that?
> > >
> > > Yes, it would be nice to create a pool of workers that only spawns up
> > > threads when actual parallel requests are made. Are you willing to
> > > help write the patch?
> >
> > I was thinking more "only spawn a workqueue when nbd is opened" or so.
> >
> > I have 16 of them, and I'm not using nbd. A workqueue per open device is
> > okay; the current situation... not so much.
>
> You can take a look at this commit:
>
> e2daec488c57 ("nbd: Fix hungtask when nbd_config_put")
>
> The allocation of recv_workq was moved from start device to alloc device
> to fix a hung task. You might need to be careful if you want to move this.
Can we get that reverted?
That is a rather obscure bug (how many GFP_KERNEL failures do you
normally see?) and it costs, dunno, 100KB? of unswappable memory.
Best regards,
Pavel
--
People of Russia, stop Putin before his war on Ukraine escalates.
Hi,
On 2022/11/24 18:06, Pavel Machek wrote:
>
> Can we get that reverted?
>
> That is a rather obscure bug (how many GFP_KERNEL failures do you
> normally see?) and it costs, dunno, 100KB? of unswappable memory.
>
No, I don't think that can be reverted. Introducing a BUG just to save
some memory sounds insane.
If you really want to do this, I think the right thing to do is to move
the allocation of recv_workq back to start device, and also fix the
problem properly.
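Something in this direction, just as a sketch (the helper names are made
up; the teardown ordering that e2daec488c57 fixed is the part that still
has to be solved properly):

/* Allocate the queue when the device is started, free it when it is
 * stopped, so idle devices carry no workqueue at all. */
static int nbd_recv_workq_start(struct nbd_device *nbd)
{
        nbd->recv_workq = alloc_workqueue("nbd%d-recv",
                                          WQ_MEM_RECLAIM | WQ_HIGHPRI |
                                          WQ_UNBOUND, 0, nbd->index);
        return nbd->recv_workq ? 0 : -ENOMEM;
}

static void nbd_recv_workq_stop(struct nbd_device *nbd)
{
        if (!nbd->recv_workq)
                return;

        /* destroy_workqueue() drains remaining work; the caller has to
         * make sure this cannot deadlock with the recv work itself. */
        destroy_workqueue(nbd->recv_workq);
        nbd->recv_workq = NULL;
}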
Thanks,
Kuai
> Best regards,
> Pavel
>