2018-08-14 07:35:17

by Hannes Reinecke

Subject: [PATCHv2 0/5] genhd: register default groups with device_add_disk()

device_add_disk() doesn't allow registering default sysfs groups, which
introduces a race with udev: the groups can only be created after the
uevent has been sent, so udev may process the uevent before the
attributes exist.
This patchset adds a 'groups' argument to device_add_disk() to avoid
this race and updates the obvious drivers to use it.
Some more drivers might benefit from this (e.g. loop or md), but their
interfaces are not straightforward, so I haven't attempted those.
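The ordering issue can be made concrete with a small userspace model (plain C with stub types standing in for the kernel structures; the names mirror the kernel API, but none of this is kernel code): if the groups pointer is installed on the device before device_add() runs, the attributes already exist by the time the KOBJ_ADD uevent fires, so udev cannot observe a disk without them.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures involved. */
struct attribute_group { const char *name; };

struct device {
	const struct attribute_group **groups; /* created during device_add() */
	int uevent_sent;
	int groups_visible_at_uevent;
};

/* Model of device_add(): sysfs groups are created first, then the
 * KOBJ_ADD uevent fires, so udev always sees the attributes. */
static void device_add(struct device *dev)
{
	dev->groups_visible_at_uevent = (dev->groups != NULL);
	dev->uevent_sent = 1;
}

/* Model of the patched device_add_disk(): the groups are handed over
 * before device_add() runs, which is what closes the race. */
static void device_add_disk(struct device *parent, struct device *disk,
			    const struct attribute_group **groups)
{
	(void)parent;
	if (groups)
		disk->groups = groups;
	device_add(disk);
}
```

In the old scheme the equivalent of sysfs_create_group() ran only after device_add() returned, i.e. after the uevent, which is the window this model eliminates.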

Patches are relative to Christoph's nvme-4.19 branch.

As usual, comments and reviews are welcome.

Changes to v1:
- Drop first patch
- Convert lightnvm sysfs attributes as suggested by Bart

Hannes Reinecke (5):
block: genhd: add 'groups' argument to device_add_disk
nvme: register ns_id attributes as default sysfs groups
aoe: register default sysfs groups with device_add_disk()
zram: register default groups with device_add_disk()
virtio-blk: modernize sysfs attribute creation

arch/um/drivers/ubd_kern.c | 2 +-
block/genhd.c | 19 ++++++++---
drivers/block/aoe/aoe.h | 1 -
drivers/block/aoe/aoeblk.c | 21 ++++--------
drivers/block/aoe/aoedev.c | 1 -
drivers/block/floppy.c | 2 +-
drivers/block/mtip32xx/mtip32xx.c | 2 +-
drivers/block/ps3disk.c | 2 +-
drivers/block/ps3vram.c | 2 +-
drivers/block/rsxx/dev.c | 2 +-
drivers/block/skd_main.c | 2 +-
drivers/block/sunvdc.c | 2 +-
drivers/block/virtio_blk.c | 68 +++++++++++++++++++++----------------
drivers/block/xen-blkfront.c | 2 +-
drivers/block/zram/zram_drv.c | 28 ++++-----------
drivers/ide/ide-cd.c | 2 +-
drivers/ide/ide-gd.c | 2 +-
drivers/memstick/core/ms_block.c | 2 +-
drivers/memstick/core/mspro_block.c | 2 +-
drivers/mmc/core/block.c | 2 +-
drivers/mtd/mtd_blkdevs.c | 2 +-
drivers/nvdimm/blk.c | 2 +-
drivers/nvdimm/btt.c | 2 +-
drivers/nvdimm/pmem.c | 2 +-
drivers/nvme/host/core.c | 21 +++++-------
drivers/nvme/host/lightnvm.c | 27 +++------------
drivers/nvme/host/multipath.c | 15 +++-----
drivers/nvme/host/nvme.h | 11 ++----
drivers/s390/block/dasd_genhd.c | 2 +-
drivers/s390/block/dcssblk.c | 2 +-
drivers/s390/block/scm_blk.c | 2 +-
drivers/scsi/sd.c | 2 +-
drivers/scsi/sr.c | 2 +-
include/linux/genhd.h | 5 +--
34 files changed, 113 insertions(+), 150 deletions(-)

--
2.12.3



2018-08-14 07:34:33

by Hannes Reinecke

Subject: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

We should register the ns_id attributes as default sysfs attribute
groups; otherwise there is a race condition between the uevent and the
attributes appearing in sysfs.
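The series handles the optional lightnvm group with a NULL-terminated group list that reserves a spare slot, filled in at runtime before the disk is registered. A minimal sketch of that pattern (again a simplified userspace model with stub types, not the actual kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct attribute_group. */
struct attribute_group { const char *name; };

static const struct attribute_group base_group = { "nvme-ns-id" };
static const struct attribute_group lnvm_group = { "lightnvm" };

/* NULL-terminated list with one spare slot that an optional component
 * (lightnvm in the patch) may fill in before device_add_disk() runs. */
static const struct attribute_group *ns_groups[] = {
	&base_group,
	NULL,	/* optionally filled in at runtime */
	NULL,	/* terminator */
};

/* Walk the list the way the driver core does: stop at the first NULL. */
static size_t count_groups(const struct attribute_group **g)
{
	size_t n = 0;

	while (g[n])
		n++;
	return n;
}
```

Filling the spare slot corresponds to what nvme_nvm_register_sysfs() does in the patch below; if the slot stays NULL, the core simply sees a shorter list.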

Signed-off-by: Hannes Reinecke <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
drivers/nvme/host/core.c | 21 +++++++++------------
drivers/nvme/host/lightnvm.c | 27 ++++-----------------------
drivers/nvme/host/multipath.c | 15 ++++-----------
drivers/nvme/host/nvme.h | 11 +++--------
4 files changed, 20 insertions(+), 54 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0e824e8c8fd7..8e26d98e9a8f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
.is_visible = nvme_ns_id_attrs_are_visible,
};

+const struct attribute_group *nvme_ns_id_attr_groups[] = {
+ &nvme_ns_id_attr_group,
+ NULL, /* Will be filled in by lightnvm if present */
+ NULL,
+};
+
#define nvme_show_str_function(field) \
static ssize_t field##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
@@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)

nvme_get_ctrl(ctrl);

- device_add_disk(ctrl->device, ns->disk, NULL);
- if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
- &nvme_ns_id_attr_group))
- pr_warn("%s: failed to create sysfs group for identification\n",
- ns->disk->disk_name);
- if (ns->ndev && nvme_nvm_register_sysfs(ns))
- pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
- ns->disk->disk_name);
+ if (ns->ndev)
+ nvme_nvm_register_sysfs(ns);
+ device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);

nvme_mpath_add_disk(ns, id);
nvme_fault_inject_init(ns);
@@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)

nvme_fault_inject_fini(ns);
if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
- sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
- &nvme_ns_id_attr_group);
- if (ns->ndev)
- nvme_nvm_unregister_sysfs(ns);
del_gendisk(ns->disk);
blk_cleanup_queue(ns->queue);
if (blk_get_integrity(ns->disk))
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 6fe5923c95d4..7bf2f9da6293 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
.attrs = nvm_dev_attrs_20,
};

-int nvme_nvm_register_sysfs(struct nvme_ns *ns)
+void nvme_nvm_register_sysfs(struct nvme_ns *ns)
{
struct nvm_dev *ndev = ns->ndev;
struct nvm_geo *geo = &ndev->geo;

if (!ndev)
- return -EINVAL;
-
- switch (geo->major_ver_id) {
- case 1:
- return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
- &nvm_dev_attr_group_12);
- case 2:
- return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
- &nvm_dev_attr_group_20);
- }
-
- return -EINVAL;
-}
-
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
-{
- struct nvm_dev *ndev = ns->ndev;
- struct nvm_geo *geo = &ndev->geo;
+ return;

switch (geo->major_ver_id) {
case 1:
- sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
- &nvm_dev_attr_group_12);
+ nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
break;
case 2:
- sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
- &nvm_dev_attr_group_20);
+ nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
break;
}
}
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 477af51d01e8..8e846095c42d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
if (!head->disk)
return;

- if (!(head->disk->flags & GENHD_FL_UP)) {
- device_add_disk(&head->subsys->dev, head->disk, NULL);
- if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
- &nvme_ns_id_attr_group))
- dev_warn(&head->subsys->dev,
- "failed to create id group.\n");
- }
+ if (!(head->disk->flags & GENHD_FL_UP))
+ device_add_disk(&head->subsys->dev, head->disk,
+ nvme_ns_id_attr_groups);

kblockd_schedule_work(&ns->head->requeue_work);
}
@@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
{
if (!head->disk)
return;
- if (head->disk->flags & GENHD_FL_UP) {
- sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
- &nvme_ns_id_attr_group);
+ if (head->disk->flags & GENHD_FL_UP)
del_gendisk(head->disk);
- }
blk_set_queue_dying(head->disk->queue);
/* make sure all pending bios are cleaned up */
kblockd_schedule_work(&head->requeue_work);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bb4a2003c097..9ba6d67d8e0a 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
void *log, size_t size, u64 offset);

-extern const struct attribute_group nvme_ns_id_attr_group;
+extern const struct attribute_group *nvme_ns_id_attr_groups[];
extern const struct block_device_operations nvme_ns_head_ops;

#ifdef CONFIG_NVME_MULTIPATH
@@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
void nvme_nvm_unregister(struct nvme_ns *ns);
-int nvme_nvm_register_sysfs(struct nvme_ns *ns);
-void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
+void nvme_nvm_register_sysfs(struct nvme_ns *ns);
int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
#else
static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
@@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
}

static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
-static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
-{
- return 0;
-}
-static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
+static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
unsigned long arg)
{
--
2.12.3


2018-08-14 07:35:29

by Hannes Reinecke

Subject: [PATCH 5/5] virtio-blk: modernize sysfs attribute creation

Use the new-style DEVICE_ATTR_RO/DEVICE_ATTR_RW macros to create the
sysfs attributes and register the disk with default sysfs attribute
groups.
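For reference, the naming convention DEVICE_ATTR_RO() relies on, which is why the handlers are renamed from virtblk_serial_show() to serial_show() in this patch, can be sketched with a simplified stand-in for the kernel macro (stub types; the real macro lives in include/linux/device.h):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the kernel's device attribute machinery. */
struct device;
struct device_attribute {
	const char *name;
	unsigned int mode;
	long (*show)(struct device *dev, struct device_attribute *attr,
		     char *buf);
};

/* Models DEVICE_ATTR_RO(name): it requires a function named name##_show
 * and defines a variable named dev_attr_##name, so the show handler
 * must be called serial_show() for DEVICE_ATTR_RO(serial) to work. */
#define DEVICE_ATTR_RO(_name) \
	struct device_attribute dev_attr_##_name = \
		{ #_name, 0444, _name##_show }

static long serial_show(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	(void)dev; (void)attr; (void)buf;
	return 0;
}

static DEVICE_ATTR_RO(serial);
```

The generated dev_attr_serial is then what gets listed in the attribute array, matching the virtblk_attrs[] entries in the diff below.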

Signed-off-by: Hannes Reinecke <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Michael S. Tsirkin <[email protected]>
---
drivers/block/virtio_blk.c | 68 ++++++++++++++++++++++++++--------------------
1 file changed, 39 insertions(+), 29 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index fe80560000a1..086c6bb12baa 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -351,8 +351,8 @@ static int minor_to_index(int minor)
return minor >> PART_BITS;
}

-static ssize_t virtblk_serial_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t serial_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
struct gendisk *disk = dev_to_disk(dev);
int err;
@@ -371,7 +371,7 @@ static ssize_t virtblk_serial_show(struct device *dev,
return err;
}

-static DEVICE_ATTR(serial, 0444, virtblk_serial_show, NULL);
+static DEVICE_ATTR_RO(serial);

/* The queue's logical block size must be set before calling this */
static void virtblk_update_capacity(struct virtio_blk *vblk, bool resize)
@@ -545,8 +545,8 @@ static const char *const virtblk_cache_types[] = {
};

static ssize_t
-virtblk_cache_type_store(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+cache_type_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct gendisk *disk = dev_to_disk(dev);
struct virtio_blk *vblk = disk->private_data;
@@ -564,8 +564,7 @@ virtblk_cache_type_store(struct device *dev, struct device_attribute *attr,
}

static ssize_t
-virtblk_cache_type_show(struct device *dev, struct device_attribute *attr,
- char *buf)
+cache_type_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct gendisk *disk = dev_to_disk(dev);
struct virtio_blk *vblk = disk->private_data;
@@ -575,12 +574,38 @@ virtblk_cache_type_show(struct device *dev, struct device_attribute *attr,
return snprintf(buf, 40, "%s\n", virtblk_cache_types[writeback]);
}

-static const struct device_attribute dev_attr_cache_type_ro =
- __ATTR(cache_type, 0444,
- virtblk_cache_type_show, NULL);
-static const struct device_attribute dev_attr_cache_type_rw =
- __ATTR(cache_type, 0644,
- virtblk_cache_type_show, virtblk_cache_type_store);
+static DEVICE_ATTR_RW(cache_type);
+
+static struct attribute *virtblk_attrs[] = {
+ &dev_attr_serial.attr,
+ &dev_attr_cache_type.attr,
+ NULL,
+};
+
+static umode_t virtblk_attrs_are_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct gendisk *disk = dev_to_disk(dev);
+ struct virtio_blk *vblk = disk->private_data;
+ struct virtio_device *vdev = vblk->vdev;
+
+ if (a == &dev_attr_cache_type.attr &&
+ !virtio_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE))
+ return S_IRUGO;
+
+ return a->mode;
+}
+
+static const struct attribute_group virtblk_attr_group = {
+ .attrs = virtblk_attrs,
+ .is_visible = virtblk_attrs_are_visible,
+};
+
+static const struct attribute_group *virtblk_attr_groups[] = {
+ &virtblk_attr_group,
+ NULL,
+};

static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq,
unsigned int hctx_idx, unsigned int numa_node)
@@ -780,24 +805,9 @@ static int virtblk_probe(struct virtio_device *vdev)
virtblk_update_capacity(vblk, false);
virtio_device_ready(vdev);

- device_add_disk(&vdev->dev, vblk->disk, NULL);
- err = device_create_file(disk_to_dev(vblk->disk), &dev_attr_serial);
- if (err)
- goto out_del_disk;
-
- if (virtio_has_feature(vdev, VIRTIO_BLK_F_CONFIG_WCE))
- err = device_create_file(disk_to_dev(vblk->disk),
- &dev_attr_cache_type_rw);
- else
- err = device_create_file(disk_to_dev(vblk->disk),
- &dev_attr_cache_type_ro);
- if (err)
- goto out_del_disk;
+ device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups);
return 0;

-out_del_disk:
- del_gendisk(vblk->disk);
- blk_cleanup_queue(vblk->disk->queue);
out_free_tags:
blk_mq_free_tag_set(&vblk->tag_set);
out_put_disk:
--
2.12.3


2018-08-14 07:35:35

by Hannes Reinecke

Subject: [PATCH 4/5] zram: register default groups with device_add_disk()

Register default sysfs groups during device_add_disk() to avoid a
race condition with udev during startup.

Signed-off-by: Hannes Reinecke <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Nitin Gupta <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
drivers/block/zram/zram_drv.c | 28 +++++++---------------------
1 file changed, 7 insertions(+), 21 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 2907a8156aaf..fc3626cb7fe5 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1618,6 +1618,11 @@ static const struct attribute_group zram_disk_attr_group = {
.attrs = zram_disk_attrs,
};

+static const struct attribute_group *zram_disk_attr_groups[] = {
+ &zram_disk_attr_group,
+ NULL,
+};
+
/*
* Allocate and initialize new zram device. the function returns
* '>= 0' device_id upon success, and negative value otherwise.
@@ -1698,24 +1703,14 @@ static int zram_add(void)

zram->disk->queue->backing_dev_info->capabilities |=
(BDI_CAP_STABLE_WRITES | BDI_CAP_SYNCHRONOUS_IO);
- add_disk(zram->disk);
-
- ret = sysfs_create_group(&disk_to_dev(zram->disk)->kobj,
- &zram_disk_attr_group);
- if (ret < 0) {
- pr_err("Error creating sysfs group for device %d\n",
- device_id);
- goto out_free_disk;
- }
+ device_add_disk(NULL, zram->disk, zram_disk_attr_groups);
+
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));

zram_debugfs_register(zram);
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;

-out_free_disk:
- del_gendisk(zram->disk);
- put_disk(zram->disk);
out_free_queue:
blk_cleanup_queue(queue);
out_free_idr:
@@ -1744,15 +1739,6 @@ static int zram_remove(struct zram *zram)
mutex_unlock(&bdev->bd_mutex);

zram_debugfs_unregister(zram);
- /*
- * Remove sysfs first, so no one will perform a disksize
- * store while we destroy the devices. This also helps during
- * hot_remove -- zram_reset_device() is the last holder of
- * ->init_lock, no later/concurrent disksize_store() or any
- * other sysfs handlers are possible.
- */
- sysfs_remove_group(&disk_to_dev(zram->disk)->kobj,
- &zram_disk_attr_group);

/* Make sure all the pending I/O are finished */
fsync_bdev(bdev);
--
2.12.3


2018-08-14 07:36:06

by Hannes Reinecke

Subject: [PATCH 3/5] aoe: register default sysfs groups with device_add_disk()

Use device_add_disk() with a 'groups' argument to register the default
sysfs groups and avoid a race condition with udev during startup.

Signed-off-by: Hannes Reinecke <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Cc: Ed L. Cashin <[email protected]>
---
drivers/block/aoe/aoe.h | 1 -
drivers/block/aoe/aoeblk.c | 21 +++++++--------------
drivers/block/aoe/aoedev.c | 1 -
3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h
index c0ebda1283cc..015c68017a1c 100644
--- a/drivers/block/aoe/aoe.h
+++ b/drivers/block/aoe/aoe.h
@@ -201,7 +201,6 @@ int aoeblk_init(void);
void aoeblk_exit(void);
void aoeblk_gdalloc(void *);
void aoedisk_rm_debugfs(struct aoedev *d);
-void aoedisk_rm_sysfs(struct aoedev *d);

int aoechr_init(void);
void aoechr_exit(void);
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index 429ebb84b592..ff770e7d9e52 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -177,10 +177,15 @@ static struct attribute *aoe_attrs[] = {
NULL,
};

-static const struct attribute_group attr_group = {
+static const struct attribute_group aoe_attr_group = {
.attrs = aoe_attrs,
};

+static const struct attribute_group *aoe_attr_groups[] = {
+ &aoe_attr_group,
+ NULL,
+};
+
static const struct file_operations aoe_debugfs_fops = {
.open = aoe_debugfs_open,
.read = seq_read,
@@ -220,17 +225,6 @@ aoedisk_rm_debugfs(struct aoedev *d)
}

static int
-aoedisk_add_sysfs(struct aoedev *d)
-{
- return sysfs_create_group(&disk_to_dev(d->gd)->kobj, &attr_group);
-}
-void
-aoedisk_rm_sysfs(struct aoedev *d)
-{
- sysfs_remove_group(&disk_to_dev(d->gd)->kobj, &attr_group);
-}
-
-static int
aoeblk_open(struct block_device *bdev, fmode_t mode)
{
struct aoedev *d = bdev->bd_disk->private_data;
@@ -417,8 +411,7 @@ aoeblk_gdalloc(void *vp)

spin_unlock_irqrestore(&d->lock, flags);

- add_disk(gd);
- aoedisk_add_sysfs(d);
+ device_add_disk(NULL, gd, aoe_attr_groups);
aoedisk_add_debugfs(d);

spin_lock_irqsave(&d->lock, flags);
diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c
index 697f735b07a4..d92fa1fe3580 100644
--- a/drivers/block/aoe/aoedev.c
+++ b/drivers/block/aoe/aoedev.c
@@ -275,7 +275,6 @@ freedev(struct aoedev *d)
del_timer_sync(&d->timer);
if (d->gd) {
aoedisk_rm_debugfs(d);
- aoedisk_rm_sysfs(d);
del_gendisk(d->gd);
put_disk(d->gd);
blk_cleanup_queue(d->blkq);
--
2.12.3


2018-08-14 07:41:39

by Hannes Reinecke

Subject: [PATCH 1/5] block: genhd: add 'groups' argument to device_add_disk

Update device_add_disk() to take a 'groups' argument so that
individual drivers can register a device with additional sysfs
attributes.
This avoids the race condition the drivers would otherwise have if
these groups were created with sysfs_create_group() after the uevent
has been sent.

Signed-off-by: Martin Wilck <[email protected]>
Signed-off-by: Hannes Reinecke <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
arch/um/drivers/ubd_kern.c | 2 +-
block/genhd.c | 19 ++++++++++++++-----
drivers/block/floppy.c | 2 +-
drivers/block/mtip32xx/mtip32xx.c | 2 +-
drivers/block/ps3disk.c | 2 +-
drivers/block/ps3vram.c | 2 +-
drivers/block/rsxx/dev.c | 2 +-
drivers/block/skd_main.c | 2 +-
drivers/block/sunvdc.c | 2 +-
drivers/block/virtio_blk.c | 2 +-
drivers/block/xen-blkfront.c | 2 +-
drivers/ide/ide-cd.c | 2 +-
drivers/ide/ide-gd.c | 2 +-
drivers/memstick/core/ms_block.c | 2 +-
drivers/memstick/core/mspro_block.c | 2 +-
drivers/mmc/core/block.c | 2 +-
drivers/mtd/mtd_blkdevs.c | 2 +-
drivers/nvdimm/blk.c | 2 +-
drivers/nvdimm/btt.c | 2 +-
drivers/nvdimm/pmem.c | 2 +-
drivers/nvme/host/core.c | 2 +-
drivers/nvme/host/multipath.c | 2 +-
drivers/s390/block/dasd_genhd.c | 2 +-
drivers/s390/block/dcssblk.c | 2 +-
drivers/s390/block/scm_blk.c | 2 +-
drivers/scsi/sd.c | 2 +-
drivers/scsi/sr.c | 2 +-
include/linux/genhd.h | 5 +++--
28 files changed, 43 insertions(+), 33 deletions(-)

diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 83c470364dfb..6ee4c56032f7 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -891,7 +891,7 @@ static int ubd_disk_register(int major, u64 size, int unit,

disk->private_data = &ubd_devs[unit];
disk->queue = ubd_devs[unit].queue;
- device_add_disk(parent, disk);
+ device_add_disk(parent, disk, NULL);

*disk_out = disk;
return 0;
diff --git a/block/genhd.c b/block/genhd.c
index 8cc719a37b32..ef0936184d69 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -567,7 +567,8 @@ static int exact_lock(dev_t devt, void *data)
return 0;
}

-static void register_disk(struct device *parent, struct gendisk *disk)
+static void register_disk(struct device *parent, struct gendisk *disk,
+ const struct attribute_group **groups)
{
struct device *ddev = disk_to_dev(disk);
struct block_device *bdev;
@@ -582,6 +583,10 @@ static void register_disk(struct device *parent, struct gendisk *disk)
/* delay uevents, until we scanned partition table */
dev_set_uevent_suppress(ddev, 1);

+ if (groups) {
+ WARN_ON(ddev->groups);
+ ddev->groups = groups;
+ }
if (device_add(ddev))
return;
if (!sysfs_deprecated) {
@@ -647,6 +652,7 @@ static void register_disk(struct device *parent, struct gendisk *disk)
* __device_add_disk - add disk information to kernel list
* @parent: parent device for the disk
* @disk: per-device partitioning information
+ * @groups: Additional per-device sysfs groups
* @register_queue: register the queue if set to true
*
* This function registers the partitioning information in @disk
@@ -655,6 +661,7 @@ static void register_disk(struct device *parent, struct gendisk *disk)
* FIXME: error handling
*/
static void __device_add_disk(struct device *parent, struct gendisk *disk,
+ const struct attribute_group **groups,
bool register_queue)
{
dev_t devt;
@@ -698,7 +705,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
blk_register_region(disk_devt(disk), disk->minors, NULL,
exact_match, exact_lock, disk);
}
- register_disk(parent, disk);
+ register_disk(parent, disk, groups);
if (register_queue)
blk_register_queue(disk);

@@ -712,15 +719,17 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
blk_integrity_add(disk);
}

-void device_add_disk(struct device *parent, struct gendisk *disk)
+void device_add_disk(struct device *parent, struct gendisk *disk,
+ const struct attribute_group **groups)
+
{
- __device_add_disk(parent, disk, true);
+ __device_add_disk(parent, disk, groups, true);
}
EXPORT_SYMBOL(device_add_disk);

void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk)
{
- __device_add_disk(parent, disk, false);
+ __device_add_disk(parent, disk, NULL, false);
}
EXPORT_SYMBOL(device_add_disk_no_queue_reg);

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 48f622728ce6..1bc99e9dfaee 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4676,7 +4676,7 @@ static int __init do_floppy_init(void)
/* to be cleaned up... */
disks[drive]->private_data = (void *)(long)drive;
disks[drive]->flags |= GENHD_FL_REMOVABLE;
- device_add_disk(&floppy_device[drive].dev, disks[drive]);
+ device_add_disk(&floppy_device[drive].dev, disks[drive], NULL);
}

return 0;
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index c73626decb46..ac3b19ad8544 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3873,7 +3873,7 @@ static int mtip_block_initialize(struct driver_data *dd)
set_capacity(dd->disk, capacity);

/* Enable the block device and add it to /dev */
- device_add_disk(&dd->pdev->dev, dd->disk);
+ device_add_disk(&dd->pdev->dev, dd->disk, NULL);

dd->bdev = bdget_disk(dd->disk, 0);
/*
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index afe1508d82c6..29a4419e8ba3 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -500,7 +500,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
gendisk->disk_name, priv->model, priv->raw_capacity >> 11,
get_capacity(gendisk) >> 11);

- device_add_disk(&dev->sbd.core, gendisk);
+ device_add_disk(&dev->sbd.core, gendisk, NULL);
return 0;

fail_cleanup_queue:
diff --git a/drivers/block/ps3vram.c b/drivers/block/ps3vram.c
index 1e3d5de9d838..c0c50816a10b 100644
--- a/drivers/block/ps3vram.c
+++ b/drivers/block/ps3vram.c
@@ -769,7 +769,7 @@ static int ps3vram_probe(struct ps3_system_bus_device *dev)
dev_info(&dev->core, "%s: Using %lu MiB of GPU memory\n",
gendisk->disk_name, get_capacity(gendisk) >> 11);

- device_add_disk(&dev->core, gendisk);
+ device_add_disk(&dev->core, gendisk, NULL);
return 0;

fail_cleanup_queue:
diff --git a/drivers/block/rsxx/dev.c b/drivers/block/rsxx/dev.c
index 1a92f9e65937..3894aa0f350b 100644
--- a/drivers/block/rsxx/dev.c
+++ b/drivers/block/rsxx/dev.c
@@ -226,7 +226,7 @@ int rsxx_attach_dev(struct rsxx_cardinfo *card)
set_capacity(card->gendisk, card->size8 >> 9);
else
set_capacity(card->gendisk, 0);
- device_add_disk(CARD_TO_DEV(card), card->gendisk);
+ device_add_disk(CARD_TO_DEV(card), card->gendisk, NULL);
card->bdev_attached = 1;
}

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 87b9e7fbf062..a85c9a622c41 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -3104,7 +3104,7 @@ static int skd_bdev_getgeo(struct block_device *bdev, struct hd_geometry *geo)
static int skd_bdev_attach(struct device *parent, struct skd_device *skdev)
{
dev_dbg(&skdev->pdev->dev, "add_disk\n");
- device_add_disk(parent, skdev->disk);
+ device_add_disk(parent, skdev->disk, NULL);
return 0;
}

diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 5ca56bfae63c..09409edce384 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -850,7 +850,7 @@ static int probe_disk(struct vdc_port *port)
port->vdisk_size, (port->vdisk_size >> (20 - 9)),
port->vio.ver.major, port->vio.ver.minor);

- device_add_disk(&port->vio.vdev->dev, g);
+ device_add_disk(&port->vio.vdev->dev, g, NULL);

return 0;
}
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 23752dc99b00..fe80560000a1 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -780,7 +780,7 @@ static int virtblk_probe(struct virtio_device *vdev)
virtblk_update_capacity(vblk, false);
virtio_device_ready(vdev);

- device_add_disk(&vdev->dev, vblk->disk);
+ device_add_disk(&vdev->dev, vblk->disk, NULL);
err = device_create_file(disk_to_dev(vblk->disk), &dev_attr_serial);
if (err)
goto out_del_disk;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 94300dbe358b..2d47c7a5e0fa 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2397,7 +2397,7 @@ static void blkfront_connect(struct blkfront_info *info)
for (i = 0; i < info->nr_rings; i++)
kick_pending_request_queues(&info->rinfo[i]);

- device_add_disk(&info->xbdev->dev, info->gd);
+ device_add_disk(&info->xbdev->dev, info->gd, NULL);

info->is_ready = 1;
return;
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index 44a7a255ef74..f9b59d41813f 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -1784,7 +1784,7 @@ static int ide_cd_probe(ide_drive_t *drive)
ide_cd_read_toc(drive);
g->fops = &idecd_ops;
g->flags |= GENHD_FL_REMOVABLE | GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE;
- device_add_disk(&drive->gendev, g);
+ device_add_disk(&drive->gendev, g, NULL);
return 0;

out_free_disk:
diff --git a/drivers/ide/ide-gd.c b/drivers/ide/ide-gd.c
index e823394ed543..04e008e8f6f9 100644
--- a/drivers/ide/ide-gd.c
+++ b/drivers/ide/ide-gd.c
@@ -416,7 +416,7 @@ static int ide_gd_probe(ide_drive_t *drive)
if (drive->dev_flags & IDE_DFLAG_REMOVABLE)
g->flags = GENHD_FL_REMOVABLE;
g->fops = &ide_gd_ops;
- device_add_disk(&drive->gendev, g);
+ device_add_disk(&drive->gendev, g, NULL);
return 0;

out_free_disk:
diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index 716fc8ed31d3..8a02f11076f9 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -2146,7 +2146,7 @@ static int msb_init_disk(struct memstick_dev *card)
set_disk_ro(msb->disk, 1);

msb_start(card);
- device_add_disk(&card->dev, msb->disk);
+ device_add_disk(&card->dev, msb->disk, NULL);
dbg("Disk added");
return 0;

diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index 5ee932631fae..0cd30dcb6801 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -1236,7 +1236,7 @@ static int mspro_block_init_disk(struct memstick_dev *card)
set_capacity(msb->disk, capacity);
dev_dbg(&card->dev, "capacity set %ld\n", capacity);

- device_add_disk(&card->dev, msb->disk);
+ device_add_disk(&card->dev, msb->disk, NULL);
msb->active = 1;
return 0;

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index a0b9102c4c6e..de8e1a8be690 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2698,7 +2698,7 @@ static int mmc_add_disk(struct mmc_blk_data *md)
int ret;
struct mmc_card *card = md->queue.card;

- device_add_disk(md->parent, md->disk);
+ device_add_disk(md->parent, md->disk, NULL);
md->force_ro.show = force_ro_show;
md->force_ro.store = force_ro_store;
sysfs_attr_init(&md->force_ro.attr);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 29c0bfd74e8a..6a41dfa3c36b 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -447,7 +447,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
if (new->readonly)
set_disk_ro(gd, 1);

- device_add_disk(&new->mtd->dev, gd);
+ device_add_disk(&new->mtd->dev, gd, NULL);

if (new->disk_attributes) {
ret = sysfs_create_group(&disk_to_dev(gd)->kobj,
diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
index 62e9cb167aad..db45c6bbb7bb 100644
--- a/drivers/nvdimm/blk.c
+++ b/drivers/nvdimm/blk.c
@@ -290,7 +290,7 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
}

set_capacity(disk, available_disk_size >> SECTOR_SHIFT);
- device_add_disk(dev, disk);
+ device_add_disk(dev, disk, NULL);
revalidate_disk(disk);
return 0;
}
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 0360c015f658..b123b0dcf274 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1556,7 +1556,7 @@ static int btt_blk_init(struct btt *btt)
}
}
set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
- device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
+ device_add_disk(&btt->nd_btt->dev, btt->btt_disk, NULL);
btt->nd_btt->size = btt->nlba * (u64)btt->sector_size;
revalidate_disk(btt->btt_disk);

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index dd17acd8fe68..dad84a1d23db 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -445,7 +445,7 @@ static int pmem_attach_disk(struct device *dev,
gendev = disk_to_dev(disk);
gendev->groups = pmem_attribute_groups;

- device_add_disk(dev, disk);
+ device_add_disk(dev, disk, NULL);
if (devm_add_action_or_reset(dev, pmem_release_disk, pmem))
return -ENOMEM;

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dd8ec1dd9219..0e824e8c8fd7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3099,7 +3099,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)

nvme_get_ctrl(ctrl);

- device_add_disk(ctrl->device, ns->disk);
+ device_add_disk(ctrl->device, ns->disk, NULL);
if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
&nvme_ns_id_attr_group))
pr_warn("%s: failed to create sysfs group for identification\n",
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 5a9562881d4e..477af51d01e8 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -283,7 +283,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
return;

if (!(head->disk->flags & GENHD_FL_UP)) {
- device_add_disk(&head->subsys->dev, head->disk);
+ device_add_disk(&head->subsys->dev, head->disk, NULL);
if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
&nvme_ns_id_attr_group))
dev_warn(&head->subsys->dev,
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 7036a6c6f86f..5542d9eadfe0 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -76,7 +76,7 @@ int dasd_gendisk_alloc(struct dasd_block *block)
gdp->queue = block->request_queue;
block->gdp = gdp;
set_capacity(block->gdp, 0);
- device_add_disk(&base->cdev->dev, block->gdp);
+ device_add_disk(&base->cdev->dev, block->gdp, NULL);
return 0;
}

diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index ed607288e696..a0738c6c4eab 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -685,7 +685,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
}

get_device(&dev_info->dev);
- device_add_disk(&dev_info->dev, dev_info->gd);
+ device_add_disk(&dev_info->dev, dev_info->gd, NULL);

switch (dev_info->segment_type) {
case SEG_TYPE_SR:
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index b1fcb76dd272..7559ac40f01b 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -499,7 +499,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)

/* 512 byte sectors */
set_capacity(bdev->gendisk, scmdev->size >> 9);
- device_add_disk(&scmdev->dev, bdev->gendisk);
+ device_add_disk(&scmdev->dev, bdev->gendisk, NULL);
return 0;

out_queue:
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index bbebdc3769b0..dbc7b7cb04c3 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3273,7 +3273,7 @@ static void sd_probe_async(void *data, async_cookie_t cookie)
}

blk_pm_runtime_init(sdp->request_queue, dev);
- device_add_disk(dev, gd);
+ device_add_disk(dev, gd, NULL);
if (sdkp->capacity)
sd_dif_config_host(sdkp);

diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 3f3cb72e0c0c..f628b04e0920 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -745,7 +745,7 @@ static int sr_probe(struct device *dev)

dev_set_drvdata(dev, cd);
disk->flags |= GENHD_FL_REMOVABLE;
- device_add_disk(&sdev->sdev_gendev, disk);
+ device_add_disk(&sdev->sdev_gendev, disk, NULL);

sdev_printk(KERN_DEBUG, sdev,
"Attached scsi CD-ROM %s\n", cd->cdi.name);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 57864422a2c8..0b820ff05839 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -399,10 +399,11 @@ static inline void free_part_info(struct hd_struct *part)
extern void part_round_stats(struct request_queue *q, int cpu, struct hd_struct *part);

/* block/genhd.c */
-extern void device_add_disk(struct device *parent, struct gendisk *disk);
+extern void device_add_disk(struct device *parent, struct gendisk *disk,
+ const struct attribute_group **groups);
static inline void add_disk(struct gendisk *disk)
{
- device_add_disk(NULL, disk);
+ device_add_disk(NULL, disk, NULL);
}
extern void device_add_disk_no_queue_reg(struct device *parent, struct gendisk *disk);
static inline void add_disk_no_queue_reg(struct gendisk *disk)
--
2.12.3


2018-08-14 09:04:49

by Javier González

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

>
> On 14 Aug 2018, at 09.33, Hannes Reinecke <[email protected]> wrote:
>
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> ---
> drivers/nvme/host/core.c | 21 +++++++++------------
> drivers/nvme/host/lightnvm.c | 27 ++++-----------------------
> drivers/nvme/host/multipath.c | 15 ++++-----------
> drivers/nvme/host/nvme.h | 11 +++--------
> 4 files changed, 20 insertions(+), 54 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
> .is_visible = nvme_ns_id_attrs_are_visible,
> };
>
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> + &nvme_ns_id_attr_group,
> + NULL, /* Will be filled in by lightnvm if present */
> + NULL,
> +};
> +
> #define nvme_show_str_function(field) \
> static ssize_t field##_show(struct device *dev, \
> struct device_attribute *attr, char *buf) \
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>
> nvme_get_ctrl(ctrl);
>
> - device_add_disk(ctrl->device, ns->disk, NULL);
> - if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvme_ns_id_attr_group))
> - pr_warn("%s: failed to create sysfs group for identification\n",
> - ns->disk->disk_name);
> - if (ns->ndev && nvme_nvm_register_sysfs(ns))
> - pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> - ns->disk->disk_name);
> + if (ns->ndev)
> + nvme_nvm_register_sysfs(ns);
> + device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
>
> nvme_mpath_add_disk(ns, id);
> nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>
> nvme_fault_inject_fini(ns);
> if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvme_ns_id_attr_group);
> - if (ns->ndev)
> - nvme_nvm_unregister_sysfs(ns);
> del_gendisk(ns->disk);
> blk_cleanup_queue(ns->queue);
> if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
> .attrs = nvm_dev_attrs_20,
> };
>
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
> {
> struct nvm_dev *ndev = ns->ndev;
> struct nvm_geo *geo = &ndev->geo;
>
> if (!ndev)
> - return -EINVAL;
> -
> - switch (geo->major_ver_id) {
> - case 1:
> - return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_12);
> - case 2:
> - return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_20);
> - }
> -
> - return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> - struct nvm_dev *ndev = ns->ndev;
> - struct nvm_geo *geo = &ndev->geo;
> + return;
>
> switch (geo->major_ver_id) {
> case 1:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_12);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
> break;
> case 2:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_20);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> break;
> }
> }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
> if (!head->disk)
> return;
>
> - if (!(head->disk->flags & GENHD_FL_UP)) {
> - device_add_disk(&head->subsys->dev, head->disk, NULL);
> - if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> - &nvme_ns_id_attr_group))
> - dev_warn(&head->subsys->dev,
> - "failed to create id group.\n");
> - }
> + if (!(head->disk->flags & GENHD_FL_UP))
> + device_add_disk(&head->subsys->dev, head->disk,
> + nvme_ns_id_attr_groups);
>
> kblockd_schedule_work(&ns->head->requeue_work);
> }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> {
> if (!head->disk)
> return;
> - if (head->disk->flags & GENHD_FL_UP) {
> - sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> - &nvme_ns_id_attr_group);
> + if (head->disk->flags & GENHD_FL_UP)
> del_gendisk(head->disk);
> - }
> blk_set_queue_dying(head->disk->queue);
> /* make sure all pending bios are cleaned up */
> kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
> int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
> void *log, size_t size, u64 offset);
>
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
> extern const struct block_device_operations nvme_ns_head_ops;
>
> #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
> void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
> void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
> int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
> #else
> static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
> }
>
> static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> - return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
> static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
> unsigned long arg)
> {
> --
> 2.12.3
>
>

The lightnvm part you added to V2 looks good. Thanks!


Reviewed-by: Javier González <[email protected]>


Attachments:
signature.asc (849.00 B)
Message signed with OpenPGP

2018-08-14 10:04:04

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On 08/14/2018 09:33 AM, Hannes Reinecke wrote:
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> ---
> drivers/nvme/host/core.c | 21 +++++++++------------
> drivers/nvme/host/lightnvm.c | 27 ++++-----------------------
> drivers/nvme/host/multipath.c | 15 ++++-----------
> drivers/nvme/host/nvme.h | 11 +++--------
> 4 files changed, 20 insertions(+), 54 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 0e824e8c8fd7..8e26d98e9a8f 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2734,6 +2734,12 @@ const struct attribute_group nvme_ns_id_attr_group = {
> .is_visible = nvme_ns_id_attrs_are_visible,
> };
>
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> + &nvme_ns_id_attr_group,
> + NULL, /* Will be filled in by lightnvm if present */
> + NULL,
> +};
> +
> #define nvme_show_str_function(field) \
> static ssize_t field##_show(struct device *dev, \
> struct device_attribute *attr, char *buf) \
> @@ -3099,14 +3105,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
>
> nvme_get_ctrl(ctrl);
>
> - device_add_disk(ctrl->device, ns->disk, NULL);
> - if (sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvme_ns_id_attr_group))
> - pr_warn("%s: failed to create sysfs group for identification\n",
> - ns->disk->disk_name);
> - if (ns->ndev && nvme_nvm_register_sysfs(ns))
> - pr_warn("%s: failed to register lightnvm sysfs group for identification\n",
> - ns->disk->disk_name);
> + if (ns->ndev)
> + nvme_nvm_register_sysfs(ns);
> + device_add_disk(ctrl->device, ns->disk, nvme_ns_id_attr_groups);
>
> nvme_mpath_add_disk(ns, id);
> nvme_fault_inject_init(ns);
> @@ -3132,10 +3133,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>
> nvme_fault_inject_fini(ns);
> if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvme_ns_id_attr_group);
> - if (ns->ndev)
> - nvme_nvm_unregister_sysfs(ns);
> del_gendisk(ns->disk);
> blk_cleanup_queue(ns->queue);
> if (blk_get_integrity(ns->disk))
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 6fe5923c95d4..7bf2f9da6293 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -1270,39 +1270,20 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
> .attrs = nvm_dev_attrs_20,
> };
>
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns)
> {
> struct nvm_dev *ndev = ns->ndev;
> struct nvm_geo *geo = &ndev->geo;
>
> if (!ndev)
> - return -EINVAL;
> -
> - switch (geo->major_ver_id) {
> - case 1:
> - return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_12);
> - case 2:
> - return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_20);
> - }
> -
> - return -EINVAL;
> -}
> -
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> - struct nvm_dev *ndev = ns->ndev;
> - struct nvm_geo *geo = &ndev->geo;
> + return;
>
> switch (geo->major_ver_id) {
> case 1:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_12);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
> break;
> case 2:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_20);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
> break;
> }
> }
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 477af51d01e8..8e846095c42d 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -282,13 +282,9 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
> if (!head->disk)
> return;
>
> - if (!(head->disk->flags & GENHD_FL_UP)) {
> - device_add_disk(&head->subsys->dev, head->disk, NULL);
> - if (sysfs_create_group(&disk_to_dev(head->disk)->kobj,
> - &nvme_ns_id_attr_group))
> - dev_warn(&head->subsys->dev,
> - "failed to create id group.\n");
> - }
> + if (!(head->disk->flags & GENHD_FL_UP))
> + device_add_disk(&head->subsys->dev, head->disk,
> + nvme_ns_id_attr_groups);
>
> kblockd_schedule_work(&ns->head->requeue_work);
> }
> @@ -494,11 +490,8 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> {
> if (!head->disk)
> return;
> - if (head->disk->flags & GENHD_FL_UP) {
> - sysfs_remove_group(&disk_to_dev(head->disk)->kobj,
> - &nvme_ns_id_attr_group);
> + if (head->disk->flags & GENHD_FL_UP)
> del_gendisk(head->disk);
> - }
> blk_set_queue_dying(head->disk->queue);
> /* make sure all pending bios are cleaned up */
> kblockd_schedule_work(&head->requeue_work);
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bb4a2003c097..9ba6d67d8e0a 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -459,7 +459,7 @@ int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
> int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
> void *log, size_t size, u64 offset);
>
> -extern const struct attribute_group nvme_ns_id_attr_group;
> +extern const struct attribute_group *nvme_ns_id_attr_groups[];
> extern const struct block_device_operations nvme_ns_head_ops;
>
> #ifdef CONFIG_NVME_MULTIPATH
> @@ -551,8 +551,7 @@ static inline void nvme_mpath_stop(struct nvme_ctrl *ctrl)
> void nvme_nvm_update_nvm_info(struct nvme_ns *ns);
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
> void nvme_nvm_unregister(struct nvme_ns *ns);
> -int nvme_nvm_register_sysfs(struct nvme_ns *ns);
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns);
> +void nvme_nvm_register_sysfs(struct nvme_ns *ns);
> int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg);
> #else
> static inline void nvme_nvm_update_nvm_info(struct nvme_ns *ns) {};
> @@ -563,11 +562,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
> }
>
> static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
> -static inline int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> -{
> - return 0;
> -}
> -static inline void nvme_nvm_unregister_sysfs(struct nvme_ns *ns) {};
> +static inline void nvme_nvm_register_sysfs(struct nvme_ns *ns) {};
> static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
> unsigned long arg)
> {
>

Thanks Hannes.

Acked-by: Matias Bjørling <[email protected]>

2018-08-14 15:18:38

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 1/5] block: genhd: add 'groups' argument to device_add_disk

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> Update device_add_disk() to take an 'groups' argument so that
> individual drivers can register a device with additional sysfs
> attributes.
> This avoids race condition the driver would otherwise have if these
             ^ "a" is missing?
> groups were to be created with sysfs_add_groups().

Anyway:

Reviewed-by: Bart Van Assche <[email protected]>



2018-08-14 15:22:42

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
> + &nvme_ns_id_attr_group,
> + NULL, /* Will be filled in by lightnvm if present */
> + NULL,
> +};
[ ... ]
> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> -{
> - struct nvm_dev *ndev = ns->ndev;
> - struct nvm_geo *geo = &ndev->geo;
> + return;
>
> switch (geo->major_ver_id) {
> case 1:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_12);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
> break;
> case 2:
> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> - &nvm_dev_attr_group_20);
> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;

This patch introduces a really ugly dependency between the NVMe core code and
the lightnvm code, namely that the lightnvm code has to know at which position
in nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
you considered making nvme_nvm_register_sysfs() return an attribute group
pointer such that nvme_ns_id_attr_groups can be changed from a global into
a static array?

Thanks,

Bart.


2018-08-14 15:24:19

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 4/5] zram: register default groups with device_add_disk()

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> Register default sysfs groups during device_add-disk() to avoid a
                                       ^^^^^^^^^^^^^^^^
                                       device_add_disk()?
> race condition with udev during startup.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> Cc: Minchan Kim <[email protected]>
> Cc: Nitin Gupta <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>

Anyway:

Reviewed-by: Bart Van Assche <[email protected]>



2018-08-14 15:24:23

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 3/5] aoe: use device_add_disk_with_groups()

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> Use device_add_disk_with_groups() to avoid a race condition with
> udev during startup.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Cc: Ed L. Cachin <[email protected]>

Reviewed-by: Bart Van Assche <[email protected]>



2018-08-14 15:26:56

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 5/5] virtio-blk: modernize sysfs attribute creation

On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
> Use new-style DEVICE_ATTR_RO/DEVICE_ATTR_RW to create the sysfs attributes
> and register the disk with default sysfs attribute groups.

Reviewed-by: Bart Van Assche <[email protected]>



2018-08-14 15:42:59

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On 08/14/2018 05:20 PM, Bart Van Assche wrote:
> On Tue, 2018-08-14 at 09:33 +0200, Hannes Reinecke wrote:
>> +const struct attribute_group *nvme_ns_id_attr_groups[] = {
>> + &nvme_ns_id_attr_group,
>> + NULL, /* Will be filled in by lightnvm if present */
>> + NULL,
>> +};
> [ ... ]
>> -void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
>> -{
>> - struct nvm_dev *ndev = ns->ndev;
>> - struct nvm_geo *geo = &ndev->geo;
>> + return;
>>
>> switch (geo->major_ver_id) {
>> case 1:
>> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> - &nvm_dev_attr_group_12);
>> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_12;
>> break;
>> case 2:
>> - sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> - &nvm_dev_attr_group_20);
>> + nvme_ns_id_attr_groups[1] = &nvm_dev_attr_group_20;
>
> This patch introduces a really ugly dependency between the NVMe core code and
> the lightnvm code, namely that the lightnvm code has to know at which position
> in the nvme_ns_id_attr_groups it can fill in its attribute group pointer. Have
> you considered to make nvme_nvm_register_sysfs() return an attribute group
> pointer such that the nvme_ns_id_attr_groups can be changed from a global into
> a static array?
>
Wouldn't help much, as 'nvme_ns_id_attr_groups' needs to be a global
pointer anyway as it's referenced from drivers/nvme/host/core.c and
drivers/nvme/host/multipath.c.

While I have considered having nvme_nvm_register_sysfs() returning a
pointer I would then have to remove the 'static' declaration from the
nvm_dev_attr_group_12/20.
Which I didn't really like, either.

Cheers,

Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
[email protected] +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

2018-08-14 15:46:59

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> While I have considered having nvme_nvm_register_sysfs() returning a
> pointer I would then have to remove the 'static' declaration from the
> nvm_dev_attr_group_12/20.
> Which I didn't really like, either.

Hmm ... I don't see why the static declaration would have to be removed from
nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
Am I perhaps missing something?

Thanks,

Bart.


2018-08-14 21:55:37

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

Hi Hannes,

I love your patch! Perhaps something to improve:

[auto build test WARNING on block/for-next]
[also build test WARNING on next-20180814]
[cannot apply to v4.18]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Hannes-Reinecke/genhd-register-default-groups-with-device_add_disk/20180815-034942
base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
reproduce:
# apt-get install sparse
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

include/uapi/linux/perf_event.h:147:56: sparse: cast truncates bits from constant value (8000000000000000 becomes 0)
drivers/nvme/host/core.c:506:28: sparse: expression using sizeof(void)
include/linux/blkdev.h:1269:16: sparse: expression using sizeof(void)
include/linux/slab.h:631:13: sparse: undefined identifier '__builtin_mul_overflow'
drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1062:32: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1063:26: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1844:32: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:1846:43: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2406:17: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:2417:42: sparse: expression using sizeof(void)
>> drivers/nvme/host/core.c:2732:30: sparse: symbol 'nvme_ns_id_attr_group' was not declared. Should it be static?
drivers/nvme/host/core.c:3203:33: sparse: expression using sizeof(void)
drivers/nvme/host/core.c:3566:17: sparse: expression using sizeof(void)
include/linux/slab.h:631:13: sparse: call with no type!
drivers/nvme/host/core.c:1264:23: sparse: context imbalance in 'nvme_get_ns_from_disk' - wrong count at exit
drivers/nvme/host/core.c:1282:33: sparse: context imbalance in 'nvme_put_ns_from_disk' - unexpected unlock

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation

2018-08-14 21:56:04

by Fengguang Wu

[permalink] [raw]
Subject: [RFC PATCH] nvme: nvme_ns_id_attr_group can be static


Fixes: 8bb7a8e4617e ("nvme: register ns_id attributes as default sysfs groups")
Signed-off-by: kbuild test robot <[email protected]>
---
core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8e26d98..76cdb60 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2729,7 +2729,7 @@ static umode_t nvme_ns_id_attrs_are_visible(struct kobject *kobj,
return a->mode;
}

-const struct attribute_group nvme_ns_id_attr_group = {
+static const struct attribute_group nvme_ns_id_attr_group = {
.attrs = nvme_ns_id_attrs,
.is_visible = nvme_ns_id_attrs_are_visible,
};

2018-08-17 06:54:18

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 3/5] aoe: use device_add_disk_with_groups()

On Tue, Aug 14, 2018 at 09:33:03AM +0200, Hannes Reinecke wrote:
> Use device_add_disk_with_groups() to avoid a race condition with
> udev during startup.

The description and the subject line need an update now that
device_add_disk_with_groups is no more.

2018-08-17 06:55:30

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On Tue, Aug 14, 2018 at 09:33:02AM +0200, Hannes Reinecke wrote:
> We should be registering the ns_id attribute as default sysfs
> attribute groups, otherwise we have a race condition between
> the uevent and the attributes appearing in sysfs.
>
> Signed-off-by: Hannes Reinecke <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>

I only reviewed the original patch without the lightnvm additions.
Please drop reviewed-by tags if you make non-trivial changes.

In this case I also wonder if you should have just kept the original
patch as-is and added a new one for lightnvm.

2018-08-17 07:02:01

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> > While I have considered having nvme_nvm_register_sysfs() returning a
> > pointer I would then have to remove the 'static' declaration from the
> > nvm_dev_attr_group_12/20.
> > Which I didn't really like, either.
>
> Hmm ... I don't see why the static declaration would have to be removed from
> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
> Am I perhaps missing something?

No, I think that would be the preferable approach IFF patching the global
table of groups would be viable. I don't think it is, though - we can
have both normal NVMe and LightNVM devices in the same system, so we
can't just patch it over.

So we'll need three different attribute group arrays:

const struct attribute_group *nvme_ns_id_attr_groups[] = {
&nvme_ns_id_attr_group,
NULL,
};

const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
&nvme_ns_id_attr_group,
&nvm_dev_attr_group_12,
NULL,
};

const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
&nvme_ns_id_attr_group,
&nvm_dev_attr_group_20,
NULL,
};

and a function to select which one to use.

2018-08-17 07:54:57

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On 08/17/2018 09:00 AM, [email protected] wrote:
> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
>> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
>>> While I have considered having nvme_nvm_register_sysfs() returning a
>>> pointer I would then have to remove the 'static' declaration from the
>>> nvm_dev_attr_group_12/20.
>>> Which I didn't really like, either.
>>
>> Hmm ... I don't see why the static declaration would have to be removed from
>> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
>> Am I perhaps missing something?
>
> No, I think that would be the preferable approach IFF patching the global
> table of groups would be viable. I don't think it is, though - we can
> have both normal NVMe and LightNVM devices in the same system, so we
> can't just patch it over.
>
> So we'll need three different attribute group arrays:
>
> const struct attribute_group *nvme_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> NULL,
> };
>
> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> &nvm_dev_attr_group_12,
> NULL,
> };
>
> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> &nvm_dev_attr_group_20,
> NULL,
> };
>
> and a function to select which one to use.
>
Yeah, I figured the same thing.
I'll be redoing the patchset.

Cheers,

Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
[email protected] +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

2018-08-17 20:06:14

by Bart Van Assche

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On Fri, 2018-08-17 at 09:00 +0200, [email protected] wrote:
> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
> > On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
> > > While I have considered having nvme_nvm_register_sysfs() returning a
> > > pointer I would then have to remove the 'static' declaration from the
> > > nvm_dev_attr_group_12/20.
> > > Which I didn't really like, either.
> >
> > Hmm ... I don't see why the static declaration would have to be removed from
> > nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
> > Am I perhaps missing something?
>
> No, I think that would be the preferable approach IFF patching the global
> table of groups would be viable. I don't think it is, though - we can
> have both normal NVMe and LightNVM devices in the same system, so we
> can't just patch it over.
>
> So we'll need three different attribute group arrays:
>
> const struct attribute_group *nvme_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> NULL,
> };
>
> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> &nvm_dev_attr_group_12,
> NULL,
> };
>
> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
> &nvme_ns_id_attr_group,
> &nvm_dev_attr_group_20,
> NULL,
> };
>
> and a function to select which one to use.

Hello Christoph,

How about applying the patch below on top of Hannes' patch? The patch below
has the advantage that it completely separates the open channel sysfs
attributes from the NVMe core sysfs attributes - the open channel code
doesn't have to know anything about the NVMe core sysfs attributes and the
NVMe core does not have to know anything about the open channel sysfs
attributes.

Thanks,

Bart.


diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8e26d98e9a8f..e0a9e1c5b30e 100644
--- a/drivers/nvme/host/core.c
+-+-+- b/drivers/nvme/host/core.c
+AEAAQA- -2736,7 +-2736,9 +AEAAQA- const struct attribute+AF8-group nvme+AF8-ns+AF8-id+AF8-attr+AF8-group +AD0- +AHs-

const struct attribute+AF8-group +ACo-nvme+AF8-ns+AF8-id+AF8-attr+AF8-groups+AFsAXQ- +AD0- +AHs-
+ACY-nvme+AF8-ns+AF8-id+AF8-attr+AF8-group,
- NULL, /+ACo- Will be filled in by lightnvm if present +ACo-/
+-+ACM-ifdef CONFIG+AF8-NVM
+- +ACY-nvme+AF8-nvm+AF8-attr+AF8-group,
+-+ACM-endif
NULL,
+AH0AOw-

+AEAAQA- -3105,8 +-3107,6 +AEAAQA- static void nvme+AF8-alloc+AF8-ns(struct nvme+AF8-ctrl +ACo-ctrl, unsigned nsid)

nvme+AF8-get+AF8-ctrl(ctrl)+ADs-

- if (ns-+AD4-ndev)
- nvme+AF8-nvm+AF8-register+AF8-sysfs(ns)+ADs-
device+AF8-add+AF8-disk(ctrl-+AD4-device, ns-+AD4-disk, nvme+AF8-ns+AF8-id+AF8-attr+AF8-groups)+ADs-

nvme+AF8-mpath+AF8-add+AF8-disk(ns, id)+ADs-
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 7bf2f9da6293..b7b19d3561a4 100644
--- a/drivers/nvme/host/lightnvm.c
+-+-+- b/drivers/nvme/host/lightnvm.c
+AEAAQA- -1190,10 +-1190,29 +AEAAQA- static NVM+AF8-DEV+AF8-ATTR+AF8-12+AF8-RO(multiplane+AF8-modes)+ADs-
static NVM+AF8-DEV+AF8-ATTR+AF8-12+AF8-RO(media+AF8-capabilities)+ADs-
static NVM+AF8-DEV+AF8-ATTR+AF8-12+AF8-RO(max+AF8-phys+AF8-secs)+ADs-

-static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-12+AFsAXQ- +AD0- +AHs-
+-/+ACo- 2.0 values +ACo-/
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(groups)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(punits)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(chunks)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(clba)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(ws+AF8-min)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(ws+AF8-opt)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(maxoc)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(maxocpu)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(mw+AF8-cunits)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(write+AF8-typ)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(write+AF8-max)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(reset+AF8-typ)+ADs-
+-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(reset+AF8-max)+ADs-
+-
+-static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AFsAXQ- +AD0- +AHs-
+- /+ACo- version agnostic attrs +ACo-/
+ACY-dev+AF8-attr+AF8-version.attr,
+ACY-dev+AF8-attr+AF8-capabilities.attr,
+- +ACY-dev+AF8-attr+AF8-read+AF8-typ.attr,
+- +ACY-dev+AF8-attr+AF8-read+AF8-max.attr,

+- /+ACo- 1.2 attrs +ACo-/
+ACY-dev+AF8-attr+AF8-vendor+AF8-opcode.attr,
+ACY-dev+AF8-attr+AF8-device+AF8-mode.attr,
+ACY-dev+AF8-attr+AF8-media+AF8-manager.attr,
+AEAAQA- -1208,8 +-1227,6 +AEAAQA- static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-12+AFsAXQ- +AD0- +AHs-
+ACY-dev+AF8-attr+AF8-page+AF8-size.attr,
+ACY-dev+AF8-attr+AF8-hw+AF8-sector+AF8-size.attr,
+ACY-dev+AF8-attr+AF8-oob+AF8-sector+AF8-size.attr,
- +ACY-dev+AF8-attr+AF8-read+AF8-typ.attr,
- +ACY-dev+AF8-attr+AF8-read+AF8-max.attr,
+ACY-dev+AF8-attr+AF8-prog+AF8-typ.attr,
+ACY-dev+AF8-attr+AF8-prog+AF8-max.attr,
+ACY-dev+AF8-attr+AF8-erase+AF8-typ.attr,
+AEAAQA- -1218,33 +-1235,7 +AEAAQA- static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-12+AFsAXQ- +AD0- +AHs-
+ACY-dev+AF8-attr+AF8-media+AF8-capabilities.attr,
+ACY-dev+AF8-attr+AF8-max+AF8-phys+AF8-secs.attr,

- NULL,
-+AH0AOw-
-
-static const struct attribute+AF8-group nvm+AF8-dev+AF8-attr+AF8-group+AF8-12 +AD0- +AHs-
- .name +AD0- +ACI-lightnvm+ACI-,
- .attrs +AD0- nvm+AF8-dev+AF8-attrs+AF8-12,
-+AH0AOw-
-
-/+ACo- 2.0 values +ACo-/
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(groups)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(punits)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(chunks)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(clba)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(ws+AF8-min)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(ws+AF8-opt)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(maxoc)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(maxocpu)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(mw+AF8-cunits)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(write+AF8-typ)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(write+AF8-max)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(reset+AF8-typ)+ADs-
-static NVM+AF8-DEV+AF8-ATTR+AF8-20+AF8-RO(reset+AF8-max)+ADs-
-
-static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-20+AFsAXQ- +AD0- +AHs-
- +ACY-dev+AF8-attr+AF8-version.attr,
- +ACY-dev+AF8-attr+AF8-capabilities.attr,
-
+- /+ACo- 2.0 attrs +ACo-/
+ACY-dev+AF8-attr+AF8-groups.attr,
+ACY-dev+AF8-attr+AF8-punits.attr,
+ACY-dev+AF8-attr+AF8-chunks.attr,
+AEAAQA- -1255,8 +-1246,6 +AEAAQA- static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-20+AFsAXQ- +AD0- +AHs-
+ACY-dev+AF8-attr+AF8-maxocpu.attr,
+ACY-dev+AF8-attr+AF8-mw+AF8-cunits.attr,

- +ACY-dev+AF8-attr+AF8-read+AF8-typ.attr,
- +ACY-dev+AF8-attr+AF8-read+AF8-max.attr,
+ACY-dev+AF8-attr+AF8-write+AF8-typ.attr,
+ACY-dev+AF8-attr+AF8-write+AF8-max.attr,
+ACY-dev+AF8-attr+AF8-reset+AF8-typ.attr,
+AEAAQA- -1265,25 +-1254,35 +AEAAQA- static struct attribute +ACo-nvm+AF8-dev+AF8-attrs+AF8-20+AFsAXQ- +AD0- +AHs-
NULL,
+AH0AOw-

-static const struct attribute+AF8-group nvm+AF8-dev+AF8-attr+AF8-group+AF8-20 +AD0- +AHs-
- .name +AD0- +ACI-lightnvm+ACI-,
- .attrs +AD0- nvm+AF8-dev+AF8-attrs+AF8-20,
-+AH0AOw-
-
-void nvme+AF8-nvm+AF8-register+AF8-sysfs(struct nvme+AF8-ns +ACo-ns)
+-static umode+AF8-t nvm+AF8-dev+AF8-attrs+AF8-visible(struct kobject +ACo-kobj,
+- struct attribute +ACo-attr, int index)
+AHs-
+- struct device +ACo-dev +AD0- container+AF8-of(kobj, struct device, kobj)+ADs-
+- struct gendisk +ACo-disk +AD0- dev+AF8-to+AF8-disk(dev)+ADs-
+- struct nvme+AF8-ns +ACo-ns +AD0- disk-+AD4-private+AF8-data+ADs-
struct nvm+AF8-dev +ACo-ndev +AD0- ns-+AD4-ndev+ADs-
- struct nvm+AF8-geo +ACo-geo +AD0- +ACY-ndev-+AD4-geo+ADs-
+- struct device+AF8-attribute +ACo-dev+AF8-attr +AD0-
+- container+AF8-of(attr, typeof(+ACo-dev+AF8-attr), attr)+ADs-

- if (+ACE-ndev)
- return+ADs-
+- if (dev+AF8-attr-+AD4-show +AD0APQ- nvm+AF8-dev+AF8-attr+AF8-show)
+- return attr-+AD4-mode+ADs-

- switch (geo-+AD4-major+AF8-ver+AF8-id) +AHs-
+- switch (ndev ? ndev-+AD4-geo.major+AF8-ver+AF8-id : 0) +AHs-
case 1:
- nvme+AF8-ns+AF8-id+AF8-attr+AF8-groups+AFs-1+AF0- +AD0- +ACY-nvm+AF8-dev+AF8-attr+AF8-group+AF8-12+ADs-
+- if (dev+AF8-attr-+AD4-show +AD0APQ- nvm+AF8-dev+AF8-attr+AF8-show+AF8-12)
+- return attr-+AD4-mode+ADs-
break+ADs-
case 2:
- nvme+AF8-ns+AF8-id+AF8-attr+AF8-groups+AFs-1+AF0- +AD0- +ACY-nvm+AF8-dev+AF8-attr+AF8-group+AF8-20+ADs-
+- if (dev+AF8-attr-+AD4-show +AD0APQ- nvm+AF8-dev+AF8-attr+AF8-show+AF8-20)
+- return attr-+AD4-mode+ADs-
break+ADs-
+AH0-
+-
+- return 0+ADs-
+AH0-
+-
+-const struct attribute+AF8-group nvm+AF8-dev+AF8-attr+AF8-group +AD0- +AHs-
+- .name +AD0- +ACI-lightnvm+ACI-,
+- .attrs +AD0- nvm+AF8-dev+AF8-attrs,
+- .is+AF8-visible +AD0- nvm+AF8-dev+AF8-attrs+AF8-visible,
+-+AH0AOw-
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9ba6d67d8e0a..2503f8fd54da 100644
--- a/drivers/nvme/host/nvme.h
+-+-+- b/drivers/nvme/host/nvme.h
+AEAAQA- -551,7 +-551,7 +AEAAQA- static inline void nvme+AF8-mpath+AF8-stop(struct nvme+AF8-ctrl +ACo-ctrl)
void nvme+AF8-nvm+AF8-update+AF8-nvm+AF8-info(struct nvme+AF8-ns +ACo-ns)+ADs-
int nvme+AF8-nvm+AF8-register(struct nvme+AF8-ns +ACo-ns, char +ACo-disk+AF8-name, int node)+ADs-
void nvme+AF8-nvm+AF8-unregister(struct nvme+AF8-ns +ACo-ns)+ADs-
-void nvme+AF8-nvm+AF8-register+AF8-sysfs(struct nvme+AF8-ns +ACo-ns)+ADs-
+-extern const struct attribute+AF8-group nvme+AF8-nvm+AF8-attr+AF8-group+ADs-
int nvme+AF8-nvm+AF8-ioctl(struct nvme+AF8-ns +ACo-ns, unsigned int cmd, unsigned long arg)+ADs-
+ACM-else
static inline void nvme+AF8-nvm+AF8-update+AF8-nvm+AF8-info(struct nvme+AF8-ns +ACo-ns) +AHsAfQA7-
+AEAAQA- -562,7 +-562,6 +AEAAQA- static inline int nvme+AF8-nvm+AF8-register(struct nvme+AF8-ns +ACo-ns, char +ACo-disk+AF8-name,
+AH0-

static inline void nvme+AF8-nvm+AF8-unregister(struct nvme+AF8-ns +ACo-ns) +AHsAfQA7-
-static inline void nvme+AF8-nvm+AF8-register+AF8-sysfs(struct nvme+AF8-ns +ACo-ns) +AHsAfQA7-
static inline int nvme+AF8-nvm+AF8-ioctl(struct nvme+AF8-ns +ACo-ns, unsigned int cmd,
unsigned long arg)
+AHs-
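Bart's patch keeps a single merged attribute array and hides the version-specific entries via the group's is_visible callback instead of swapping whole group arrays. A self-contained userspace model of that filtering logic follows; the attribute names are taken from the thread, but the `spec` tagging, `attr_visible()`, and `count_visible()` helpers are illustrative stand-ins for the real sysfs machinery:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of sysfs is_visible() filtering: one merged attribute
 * array; a per-device callback decides which entries actually appear. */
enum { SPEC_COMMON = 0, SPEC_12 = 1, SPEC_20 = 2 };

struct attribute {
	const char *name;
	int spec;	/* which spec revision the attribute belongs to */
};

static const struct attribute nvm_dev_attrs[] = {
	/* version agnostic attrs */
	{ "version",       SPEC_COMMON },
	{ "read_typ",      SPEC_COMMON },
	/* 1.2 attrs */
	{ "vendor_opcode", SPEC_12 },
	{ "max_phys_secs", SPEC_12 },
	/* 2.0 attrs */
	{ "chunks",        SPEC_20 },
	{ "ws_min",        SPEC_20 },
};

/* Mirrors the shape of nvm_dev_attrs_visible(): common attributes are
 * always shown, versioned ones only when the device speaks that spec. */
static int attr_visible(const struct attribute *attr, int major_ver_id)
{
	return attr->spec == SPEC_COMMON || attr->spec == major_ver_id;
}

static size_t count_visible(int major_ver_id)
{
	size_t i, n = 0;

	for (i = 0; i < sizeof(nvm_dev_attrs) / sizeof(nvm_dev_attrs[0]); i++)
		if (attr_visible(&nvm_dev_attrs[i], major_ver_id))
			n++;
	return n;
}
```

The upside over three separate arrays is that device_add_disk() can always be handed the same group list; the per-attribute visibility decision is made by the callback when the group is created.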


2018-08-17 22:48:30

by Sagi Grimberg

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups


> Hello Christoph,
>
> How about applying the patch below on top of Hannes' patch? The patch below
> has the advantage that it completely separates the open channel sysfs
> attributes from the NVMe core sysfs attributes - the open channel code
> doesn't have to know anything about the NVMe core sysfs attributes and the
> NVMe core does not have to know anything about the open channel sysfs
> attributes.

This looks better to me, Bart (unless I'm missing something).

2018-08-20 06:36:02

by Hannes Reinecke

[permalink] [raw]
Subject: Re: [PATCH 2/5] nvme: register ns_id attributes as default sysfs groups

On 08/17/2018 10:04 PM, Bart Van Assche wrote:
> On Fri, 2018-08-17 at 09:00 +0200, [email protected] wrote:
>> On Tue, Aug 14, 2018 at 03:44:57PM +0000, Bart Van Assche wrote:
>>> On Tue, 2018-08-14 at 17:39 +0200, Hannes Reinecke wrote:
>>>> While I have considered having nvme_nvm_register_sysfs() returning a
>>>> pointer I would then have to remove the 'static' declaration from the
>>>> nvm_dev_attr_group_12/20.
>>>> Which I didn't really like, either.
>>>
>>> Hmm ... I don't see why the static declaration would have to be removed from
>>> nvm_dev_attr_group_12/20 if nvme_nvm_register_sysfs() would return a pointer?
>>> Am I perhaps missing something?
>>
>> No, I think that would be the preferable approach IFF patching the global
>> table of groups would be viable. I don't think it is, though - we can
>> have both normal NVMe and LightNVM devices in the same system, so we
>> can't just patch it over.
>>
>> So we'll need three different attribute group arrays:
>>
>> const struct attribute_group *nvme_ns_id_attr_groups[] = {
>> &nvme_ns_id_attr_group,
>> NULL,
>> };
>>
>> const struct attribute_group *lightnvm12_ns_id_attr_groups[] = {
>> &nvme_ns_id_attr_group,
>> &nvm_dev_attr_group_12,
>> NULL,
>> };
>>
>> const struct attribute_group *lightnvm20_ns_id_attr_groups[] = {
>> &nvme_ns_id_attr_group,
>> &nvm_dev_attr_group_20,
>> NULL,
>> };
>>
>> and a function to select which one to use.
>
> Hello Christoph,
>
> How about applying the patch below on top of Hannes' patch? The patch below
> has the advantage that it completely separates the open channel sysfs
> attributes from the NVMe core sysfs attributes - the open channel code
> doesn't have to know anything about the NVMe core sysfs attributes and the
> NVMe core does not have to know anything about the open channel sysfs
> attributes.
>
Yes, this looks like the best approach.
I'll fold it into my patchset.

Cheers,

Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
[email protected] +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)