2019-05-29 08:12:00

by Xiubo Li

Subject: [RFC PATCH] nbd: set the default nbds_max to 0

From: Xiubo Li <[email protected]>

There is a problem when checking the nbd devices with NBD_CMD_STATUS
while the nbd.ko module is being inserted: some of the 16
/dev/nbd{0~15} devices can randomly report as connected even though
they are not. This happens because the udev service in user space
opens the /dev/nbd{0~15} devices for a sanity check when they are
added in "__init nbd_init()" and then closes them asynchronously.

Signed-off-by: Xiubo Li <[email protected]>
---

Not sure whether this patch makes sense here, since this issue can also be
avoided by setting "nbds_max=0" when inserting the nbd.ko module.
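As a sketch of the workaround described above (the export name and server
hostname below are placeholders for illustration, and a netlink-aware
nbd-client is assumed):

```shell
# Load nbd.ko with no pre-allocated devices, so udev has no
# /dev/nbd{0~15} nodes to open and close at module init time.
modprobe nbd nbds_max=0

# Confirm the module parameter took effect.
cat /sys/module/nbd/parameters/nbds_max

# Devices are then allocated on demand through the netlink
# interface, e.g. by a netlink-aware client:
nbd-client -N myexport nbd-server.example.com
```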



drivers/block/nbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 4c1de1c..98be6ca 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -137,7 +137,7 @@ struct nbd_cmd {

#define NBD_DEF_BLKSIZE 1024

-static unsigned int nbds_max = 16;
+static unsigned int nbds_max;
static int max_part = 16;
static struct workqueue_struct *recv_workqueue;
static int part_shift;
@@ -2310,6 +2310,6 @@ static void __exit nbd_cleanup(void)
MODULE_LICENSE("GPL");

module_param(nbds_max, int, 0444);
-MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 16)");
+MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 0)");
module_param(max_part, int, 0444);
MODULE_PARM_DESC(max_part, "number of partitions per device (default: 16)");
--
1.8.3.1


2019-05-29 13:50:11

by Josef Bacik

Subject: Re: [RFC PATCH] nbd: set the default nbds_max to 0

On Wed, May 29, 2019 at 04:08:36PM +0800, [email protected] wrote:
> From: Xiubo Li <[email protected]>
>
> There is a problem when checking the nbd devices with NBD_CMD_STATUS
> while the nbd.ko module is being inserted: some of the 16
> /dev/nbd{0~15} devices can randomly report as connected even though
> they are not. This happens because the udev service in user space
> opens the /dev/nbd{0~15} devices for a sanity check when they are
> added in "__init nbd_init()" and then closes them asynchronously.
>
> Signed-off-by: Xiubo Li <[email protected]>
> ---
>
> Not sure whether this patch makes sense here, since this issue can also be
> avoided by setting "nbds_max=0" when inserting the nbd.ko module.
>

Yeah, I'd rather not make this the default; as of right now most people
probably still use the old method of configuration, and it may surprise them
to suddenly have to pass nbds_max=16 to make their stuff work. Thanks,

Josef

2019-05-30 01:24:06

by Xiubo Li

Subject: Re: [RFC PATCH] nbd: set the default nbds_max to 0

On 2019/5/29 21:48, Josef Bacik wrote:
> On Wed, May 29, 2019 at 04:08:36PM +0800, [email protected] wrote:
>> From: Xiubo Li <[email protected]>
>>
>> There is a problem when checking the nbd devices with NBD_CMD_STATUS
>> while the nbd.ko module is being inserted: some of the 16
>> /dev/nbd{0~15} devices can randomly report as connected even though
>> they are not. This happens because the udev service in user space
>> opens the /dev/nbd{0~15} devices for a sanity check when they are
>> added in "__init nbd_init()" and then closes them asynchronously.
>>
>> Signed-off-by: Xiubo Li <[email protected]>
>> ---
>>
>> Not sure whether this patch makes sense here, since this issue can also be
>> avoided by setting "nbds_max=0" when inserting the nbd.ko module.
>>
> Yeah, I'd rather not make this the default; as of right now most people
> probably still use the old method of configuration, and it may surprise them
> to suddenly have to pass nbds_max=16 to make their stuff work. Thanks,

Sure, makes sense to me :-)

So this patch can stay here in the mailing list as a note and a reminder
for others who may hit the same issue in the future.

Thanks.
BRs
Xiubo


> Josef
>