2023-07-17 08:07:03

by Miquel Raynal

Subject: [PATCH v6 0/3] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices whose content sits at non-static locations. One common example
of an NVMEM cell is the MAC address that a device must use. Sometimes
the cell content is mainly (or only) useful to the kernel, and
sometimes it is not. Users might also want to know the content of cells
such as: the manufacturing place and date, the hardware version, the
unique ID, etc. There are two possibilities in this case: either users
re-implement their own parser to go through the whole device and search
for the information they want, or the kernel exposes the content of the
cells when deemed relevant. The second approach sounds much more
sensible than the first as it avoids useless code duplication, so here
is a series bringing NVMEM cell content to the user through sysfs.

Here is a real life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code
00000000  54 57                                             |TW|
crc32
00000000  bb cd 51 98                                       |..Q.|
device-version
00000000  02                                                |.|
diag-version
00000000  56 31 2e 30 2e 30                                 |V1.0.0|
label-revision
00000000  44 31                                             |D1|
mac-address
00000000  18 be 92 13 9a 00                                 |......|
manufacture-date
00000000  30 32 2f 32 34 2f 32 30  32 31 20 31 38 3a 35 39  |02/24/2021 18:59|
manufacturer
00000000  44 4e 49                                          |DNI|
num-macs
00000000  00 40                                             |.@|
onie-version
00000000  32 30 32 30 2e 31 31 2d  56 30 31                 |2020.11-V01|
platform-name
00000000  38 38 46 37 30 34 30 2f  38 38 46 36 38 32 30     |88F7040/88F6820|
product-name
00000000  54 4e 34 38 4d 2d 50 2d  44 4e                    |TN48M-P-DN|
serial-number
00000000  54 4e 34 38 31 50 32 54  57 32 30 34 32 30 33 32  |TN481P2TW2042032|
vendor
00000000  44 4e 49                                          |DNI|

Here is a list of known limitations though:
* It is currently not possible to know whether a cell contains ASCII
  or binary data, so by default all cells are exposed in binary form.
* For now the implementation focuses on the read path. Technically
  speaking, in some cases it could be acceptable to write the cells,
  but for now read-only files sound more than enough. A writable path
  can be added later anyway.
* The sysfs entries are created when the device probes, not when the
  NVMEM layout driver does. This means that if an NVMEM layout is used
  *and* compiled as a module *and* not properly installed in the system
  (otherwise a usermode helper loads the module automatically), then
  the sysfs cells won't appear when the layout is eventually insmod'ed,
  because the sysfs folders/files have already been populated. A sketch
  of a possible workaround is shown right below.
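
A hedged sketch of that workaround (the device and module names below
are hypothetical and must be adapted to the actual setup): re-binding
the storage device once the layout is finally loaded re-triggers the
nvmem registration and thus the cell creation:

$ insmod ./nvmem-layout-foo.ko                # layout loaded too late
$ echo 1-0056 > /sys/bus/i2c/drivers/at24/unbind
$ echo 1-0056 > /sys/bus/i2c/drivers/at24/bind
$ ls /sys/bus/nvmem/devices/1-00563/cells/    # cells now populated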

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
  - s|cells/ folder|"cells" folder|
  - Missing period at the end of the final note.
  - s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission; both submissions
  are absolutely identical otherwise.

Changes in v5:
* Rebased on last -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Provide sysfs attributes a private member which is the entry itself,
  to avoid the need for looking up the nvmem device and then looping
  over all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
  core.
* Patch 2 is new: don't error out when an attribute group does not
  contain any attributes; it's easier for developers to handle "empty"
  directories this way. It avoids strange/bad solutions being
  implemented and does not cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs
  core checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention in the ABI documentation that the cells might become
  writable in the future.
* Fix a wrong return value reported by Dan and the kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
  attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.


Miquel Raynal (3):
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Create all cells before adding the nvmem device
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells |  19 ++++
drivers/nvmem/core.c                        | 113 ++++++++++++++++++--
2 files changed, 126 insertions(+), 6 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

--
2.34.1



2023-07-17 08:38:36

by Miquel Raynal

Subject: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases finding the content of a cell is rather simple: it is
just a matter of looking at a known and fixed offset. However, nvmem
layouts have recently been introduced to cope with more advanced
situations, where the offset and size of the cells are not known in
advance or are dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of
each cell, regardless of its position/size in the underlying device.
Unfortunately, this information is not accessible to users, unless they
fully re-implement the parser logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.

There is one limitation though: if a layout is built as a module but is
not properly installed in the system and is loaded manually with insmod
while the nvmem device driver is built-in, the cells won't appear in
sysfs. But if things are done like that, the cells won't be usable by
the built-in kernel drivers anyway.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
drivers/nvmem/core.c | 101 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 48659106a1e2..6c04a9cf6919 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -325,6 +325,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
         return nvmem_bin_attr_get_umode(nvmem);
 }
 
+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+                                            const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+                                    struct bin_attribute *attr, char *buf,
+                                    loff_t pos, size_t count)
+{
+        struct nvmem_cell_entry *entry;
+        struct nvmem_cell *cell = NULL;
+        size_t cell_sz, read_len;
+        void *content;
+
+        entry = attr->private;
+        cell = nvmem_create_cell(entry, entry->name, 0);
+        if (IS_ERR(cell))
+                return PTR_ERR(cell);
+
+        if (!cell)
+                return -EINVAL;
+
+        content = nvmem_cell_read(cell, &cell_sz);
+        if (IS_ERR(content)) {
+                read_len = PTR_ERR(content);
+                goto destroy_cell;
+        }
+
+        read_len = min_t(unsigned int, cell_sz - pos, count);
+        memcpy(buf, content + pos, read_len);
+        kfree(content);
+
+destroy_cell:
+        kfree_const(cell->id);
+        kfree(cell);
+
+        return read_len;
+}
+
 /* default read/write permissions */
 static struct bin_attribute bin_attr_rw_nvmem = {
         .attr = {
@@ -346,8 +383,14 @@ static const struct attribute_group nvmem_bin_group = {
         .is_bin_visible = nvmem_bin_attr_is_visible,
 };
 
+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+        .name = "cells",
+};
+
 static const struct attribute_group *nvmem_dev_groups[] = {
         &nvmem_bin_group,
+        &nvmem_cells_group,
         NULL,
 };
 
@@ -406,6 +449,58 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
         device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
 }
 
+static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
+{
+        struct bin_attribute **cells_attrs, *attrs;
+        struct nvmem_cell_entry *entry;
+        unsigned int ncells = 0, i = 0;
+        int ret = 0;
+
+        mutex_lock(&nvmem_mutex);
+
+        if (list_empty(&nvmem->cells))
+                goto unlock_mutex;
+
+        /* Allocate an array of attributes with a sentinel */
+        ncells = list_count_nodes(&nvmem->cells);
+        cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+                                   sizeof(struct bin_attribute *), GFP_KERNEL);
+        if (!cells_attrs) {
+                ret = -ENOMEM;
+                goto unlock_mutex;
+        }
+
+        attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+        if (!attrs) {
+                ret = -ENOMEM;
+                goto unlock_mutex;
+        }
+
+        /* Initialize each attribute to take the name and size of the cell */
+        list_for_each_entry(entry, &nvmem->cells, node) {
+                sysfs_bin_attr_init(&attrs[i]);
+                attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
+                attrs[i].attr.mode = 0444;
+                attrs[i].size = entry->bytes;
+                attrs[i].read = &nvmem_cell_attr_read;
+                attrs[i].private = entry;
+                if (!attrs[i].attr.name) {
+                        ret = -ENOMEM;
+                        goto unlock_mutex;
+                }
+
+                cells_attrs[i] = &attrs[i];
+                i++;
+        }
+
+        nvmem_cells_group.bin_attrs = cells_attrs;
+
+unlock_mutex:
+        mutex_unlock(&nvmem_mutex);
+
+        return ret;
+}
+
 #else /* CONFIG_NVMEM_SYSFS */
 
 static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
         if (rval)
                 goto err_remove_cells;
 
+#ifdef CONFIG_NVMEM_SYSFS
+        rval = nvmem_populate_sysfs_cells(nvmem);
+        if (rval)
+                goto err_remove_cells;
+#endif
+
         dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
 
         rval = device_add(&nvmem->dev);
--
2.34.1


2023-07-17 08:40:22

by Miquel Raynal

Subject: [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases finding the content of a cell is rather simple: it is
just a matter of looking at a known and fixed offset. However, nvmem
layouts have recently been introduced to cope with more advanced
situations, where the offset and size of the cells are not known in
advance or are dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of
each cell regardless of its position/size in the underlying device, but
this information was not accessible to the user.

By exposing the nvmem cells to the user through a dedicated "cells"
folder containing one file per cell, we provide straightforward access
to useful user information without the need for re-writing a userland
parser. The content of nvmem cells is usually: product names,
manufacturing date, MAC addresses, etc.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..b2d15a8d36e5
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,19 @@
+What:          /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date:          May 2023
+KernelVersion: 6.5
+Contact:       Miquel Raynal <[email protected]>
+Description:
+               The "cells" folder contains one file per cell exposed by
+               the nvmem device. The name of the file is the cell name.
+               The length of the file is the size of the cell (when
+               known). The content of the file is the binary content of
+               the cell (may sometimes be ASCII, likely without
+               trailing character).
+               Note: This file is only present if CONFIG_NVMEM_SYSFS
+               is enabled.
+
+               Example::
+
+                 hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name
+                 00000000  54 4e 34 38 4d 2d 50 2d  44 4e                    |TN48M-P-DN|
+                 0000000a
--
2.34.1


2023-07-17 08:48:34

by Miquel Raynal

Subject: [PATCH v6 2/3] nvmem: core: Create all cells before adding the nvmem device

Let's pack all the cell creation calls in one place, so the cells are
all created before we add the nvmem device.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 3f8c7718412b..48659106a1e2 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -998,12 +998,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
         if (rval)
                 goto err_remove_cells;
 
-        dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
-
-        rval = device_add(&nvmem->dev);
-        if (rval)
-                goto err_remove_cells;
-
         rval = nvmem_add_cells_from_fixed_layout(nvmem);
         if (rval)
                 goto err_remove_cells;
@@ -1012,6 +1006,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
         if (rval)
                 goto err_remove_cells;
 
+        dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+
+        rval = device_add(&nvmem->dev);
+        if (rval)
+                goto err_remove_cells;
+
         blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
 
         return nvmem;
--
2.34.1


2023-07-17 12:41:10

by Michael Walle

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi,

> There is one limitation though: if a layout is built as a module but is
> not properly installed in the system and loaded manually with insmod
> while the nvmem device driver was built-in, the cells won't appear in
> sysfs. But if done like that, the cells won't be usable by the built-in
> kernel drivers anyway.

What is the difference between manual loading with insmod and automatic
module loading? Or is the limitation that layout as M and device driver
as Y doesn't work?

-michael

2023-07-17 15:31:33

by Greg Kroah-Hartman

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> The binary content of nvmem devices is available to the user so in the
> easiest cases, finding the content of a cell is rather easy as it is
> just a matter of looking at a known and fixed offset. However, nvmem
> layouts have been recently introduced to cope with more advanced
> situations, where the offset and size of the cells is not known in
> advance or is dynamic. When using layouts, more advanced parsers are
> used by the kernel in order to give direct access to the content of each
> cell, regardless of its position/size in the underlying
> device. Unfortunately, these information are not accessible by users,
> unless by fully re-implementing the parser logic in userland.
>
> Let's expose the cells and their content through sysfs to avoid these
> situations. Of course the relevant NVMEM sysfs Kconfig option must be
> enabled for this support to be available.
>
> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> group member will be filled at runtime only when relevant and will
> remain empty otherwise. In this case, as the cells attribute group will
> be empty, it will not lead to any additional folder/file creation.
>
> Exposed cells are read-only. There is, in practice, everything in the
> core to support a write path, but as I don't see any need for that, I
> prefer to keep the interface simple (and probably safer). The interface
> is documented as being in the "testing" state which means we can later
> add a write attribute if though relevant.
>
> There is one limitation though: if a layout is built as a module but is
> not properly installed in the system and loaded manually with insmod
> while the nvmem device driver was built-in, the cells won't appear in
> sysfs. But if done like that, the cells won't be usable by the built-in
> kernel drivers anyway.

Wait, what? That should not be an issue here; if so, then this change
is not correct and should be fixed, as this is NOT an issue for sysfs
(otherwise the whole tree wouldn't work).

Please fix up your dependencies if this is somehow not working properly.

thanks,

greg k-h

2023-07-17 17:09:33

by Miquel Raynal

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Greg,

[email protected] wrote on Mon, 17 Jul 2023 16:32:09 +0200:

> On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> > The binary content of nvmem devices is available to the user so in the
> > easiest cases, finding the content of a cell is rather easy as it is
> > just a matter of looking at a known and fixed offset. However, nvmem
> > layouts have been recently introduced to cope with more advanced
> > situations, where the offset and size of the cells is not known in
> > advance or is dynamic. When using layouts, more advanced parsers are
> > used by the kernel in order to give direct access to the content of each
> > cell, regardless of its position/size in the underlying
> > device. Unfortunately, these information are not accessible by users,
> > unless by fully re-implementing the parser logic in userland.
> >
> > Let's expose the cells and their content through sysfs to avoid these
> > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > enabled for this support to be available.
> >
> > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > group member will be filled at runtime only when relevant and will
> > remain empty otherwise. In this case, as the cells attribute group will
> > be empty, it will not lead to any additional folder/file creation.
> >
> > Exposed cells are read-only. There is, in practice, everything in the
> > core to support a write path, but as I don't see any need for that, I
> > prefer to keep the interface simple (and probably safer). The interface
> > is documented as being in the "testing" state which means we can later
> > add a write attribute if though relevant.
> >
> > There is one limitation though: if a layout is built as a module but is
> > not properly installed in the system and loaded manually with insmod
> > while the nvmem device driver was built-in, the cells won't appear in
> > sysfs. But if done like that, the cells won't be usable by the built-in
> > kernel drivers anyway.
>
> Wait, what? That should not be an issue here, if so, then this change
> is not correct and should be fixed as this is NOT an issue for sysfs
> (otherwise the whole tree wouldn't work.)
>
> Please fix up your dependancies if this is somehow not working properly.

I'm not sure I fully get your point.

There is no way we can describe a dependency between a storage device
driver and an nvmem layout. NVMEM is a pure software abstraction; the
layout that will be chosen depends on the device tree, but if the
layout module has not been installed, there is no existing mechanism in
the kernel to prevent the storage driver from being loaded anyway (how
do you know it's not on purpose?).

Thanks,
Miquèl

2023-07-17 17:10:22

by Miquel Raynal

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Michael,

[email protected] wrote on Mon, 17 Jul 2023 14:24:45 +0200:

> Hi,
>
> > There is one limitation though: if a layout is built as a module but is
> > not properly installed in the system and loaded manually with insmod
> > while the nvmem device driver was built-in, the cells won't appear in
> > sysfs. But if done like that, the cells won't be usable by the built-in
> > kernel drivers anyway.
>
> What is the difference between manual loading with insmod and automatic
> module loading? Or is the limitation, layout as M and device driver as Y
> doesn't work?

The nvmem core uses the usermode helper to load the relevant layout
module, but that only works if the module was installed correctly (make
modules_install).

The limitation is:
* Any storage device driver that registers an nvmem interface is =y (or
  =m but loaded before the nvmem layout).
* The relevant nvmem layout is =m *and* not installed with make
  modules_install.

If you see a way to work around this, let me know, but there is no way
we can enforce Kconfig dependencies between storage drivers and nvmem
layouts IMHO. A sketch of the two scenarios is shown below.
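
To make the failing combination concrete, a hedged sketch (the layout
module name is hypothetical; the real name depends on the layout in
use):

# Working case: the module is properly installed, so the usermode
# helper auto-loads it when the storage driver registers its nvmem
# device:
$ make modules_install && depmod -a

# Failing case: the module only exists as a loose .ko and is insmod'ed
# by hand after the nvmem device has already been registered:
$ insmod ./nvmem-layout-foo.ko   # too late, sysfs cells were not created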

Thanks,
Miquèl

2023-07-17 17:27:27

by Greg Kroah-Hartman

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:
> Hi Greg,
>
> [email protected] wrote on Mon, 17 Jul 2023 16:32:09 +0200:
>
> > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> > > The binary content of nvmem devices is available to the user so in the
> > > easiest cases, finding the content of a cell is rather easy as it is
> > > just a matter of looking at a known and fixed offset. However, nvmem
> > > layouts have been recently introduced to cope with more advanced
> > > situations, where the offset and size of the cells is not known in
> > > advance or is dynamic. When using layouts, more advanced parsers are
> > > used by the kernel in order to give direct access to the content of each
> > > cell, regardless of its position/size in the underlying
> > > device. Unfortunately, these information are not accessible by users,
> > > unless by fully re-implementing the parser logic in userland.
> > >
> > > Let's expose the cells and their content through sysfs to avoid these
> > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > enabled for this support to be available.
> > >
> > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > group member will be filled at runtime only when relevant and will
> > > remain empty otherwise. In this case, as the cells attribute group will
> > > be empty, it will not lead to any additional folder/file creation.
> > >
> > > Exposed cells are read-only. There is, in practice, everything in the
> > > core to support a write path, but as I don't see any need for that, I
> > > prefer to keep the interface simple (and probably safer). The interface
> > > is documented as being in the "testing" state which means we can later
> > > add a write attribute if though relevant.
> > >
> > > There is one limitation though: if a layout is built as a module but is
> > > not properly installed in the system and loaded manually with insmod
> > > while the nvmem device driver was built-in, the cells won't appear in
> > > sysfs. But if done like that, the cells won't be usable by the built-in
> > > kernel drivers anyway.
> >
> > Wait, what? That should not be an issue here, if so, then this change
> > is not correct and should be fixed as this is NOT an issue for sysfs
> > (otherwise the whole tree wouldn't work.)
> >
> > Please fix up your dependancies if this is somehow not working properly.
>
> I'm not sure I fully get your point.
>
> There is no way we can describe any dependency between a storage device
> driver and an nvmem layout. NVMEM is a pure software abstraction, the
> layout that will be chosen depends on the device tree, but if the
> layout has not been installed, there is no existing mechanism in
> the kernel to prevent it from being loaded (how do you know it's
> not on purpose?).

Once a layout has been loaded, the sysfs files should show up, right?
Otherwise what does a "layout" do? (hint, I have no idea, it's an odd
term to me...)

thanks,

greg k-h

2023-07-18 10:47:05

by Chen-Yu Tsai

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi,

On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> The binary content of nvmem devices is available to the user so in the
> easiest cases, finding the content of a cell is rather easy as it is
> just a matter of looking at a known and fixed offset. However, nvmem
> layouts have been recently introduced to cope with more advanced
> situations, where the offset and size of the cells is not known in
> advance or is dynamic. When using layouts, more advanced parsers are
> used by the kernel in order to give direct access to the content of each
> cell, regardless of its position/size in the underlying
> device. Unfortunately, these information are not accessible by users,
> unless by fully re-implementing the parser logic in userland.
>
> Let's expose the cells and their content through sysfs to avoid these
> situations. Of course the relevant NVMEM sysfs Kconfig option must be
> enabled for this support to be available.
>
> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> group member will be filled at runtime only when relevant and will
> remain empty otherwise. In this case, as the cells attribute group will
> be empty, it will not lead to any additional folder/file creation.
>
> Exposed cells are read-only. There is, in practice, everything in the
> core to support a write path, but as I don't see any need for that, I
> prefer to keep the interface simple (and probably safer). The interface
> is documented as being in the "testing" state which means we can later
> add a write attribute if though relevant.
>
> There is one limitation though: if a layout is built as a module but is
> not properly installed in the system and loaded manually with insmod
> while the nvmem device driver was built-in, the cells won't appear in
> sysfs. But if done like that, the cells won't be usable by the built-in
> kernel drivers anyway.
>
> Signed-off-by: Miquel Raynal <[email protected]>
> Reviewed-by: Greg Kroah-Hartman <[email protected]>
> ---
> drivers/nvmem/core.c | 101 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 101 insertions(+)
>
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 48659106a1e2..6c04a9cf6919 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -325,6 +325,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
> return nvmem_bin_attr_get_umode(nvmem);
> }
>
> +static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
> + const char *id, int index);
> +
> +static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
> + struct bin_attribute *attr, char *buf,
> + loff_t pos, size_t count)
> +{
> + struct nvmem_cell_entry *entry;
> + struct nvmem_cell *cell = NULL;
> + size_t cell_sz, read_len;
> + void *content;
> +
> + entry = attr->private;
> + cell = nvmem_create_cell(entry, entry->name, 0);
> + if (IS_ERR(cell))
> + return PTR_ERR(cell);
> +
> + if (!cell)
> + return -EINVAL;
> +
> + content = nvmem_cell_read(cell, &cell_sz);
> + if (IS_ERR(content)) {
> + read_len = PTR_ERR(content);
> + goto destroy_cell;
> + }
> +
> + read_len = min_t(unsigned int, cell_sz - pos, count);
> + memcpy(buf, content + pos, read_len);
> + kfree(content);
> +
> +destroy_cell:
> + kfree_const(cell->id);
> + kfree(cell);
> +
> + return read_len;
> +}
> +
> /* default read/write permissions */
> static struct bin_attribute bin_attr_rw_nvmem = {
> .attr = {
> @@ -346,8 +383,14 @@ static const struct attribute_group nvmem_bin_group = {
> .is_bin_visible = nvmem_bin_attr_is_visible,
> };
>
> +/* Cell attributes will be dynamically allocated */
> +static struct attribute_group nvmem_cells_group = {
> + .name = "cells",
> +};
> +
> static const struct attribute_group *nvmem_dev_groups[] = {
> &nvmem_bin_group,
> + &nvmem_cells_group,
> NULL,
> };
>
> @@ -406,6 +449,58 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
> device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
> }
>
> +static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
> +{
> + struct bin_attribute **cells_attrs, *attrs;
> + struct nvmem_cell_entry *entry;
> + unsigned int ncells = 0, i = 0;
> + int ret = 0;
> +
> + mutex_lock(&nvmem_mutex);
> +
> + if (list_empty(&nvmem->cells))
> + goto unlock_mutex;
> +
> + /* Allocate an array of attributes with a sentinel */
> + ncells = list_count_nodes(&nvmem->cells);
> + cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
> + sizeof(struct bin_attribute *), GFP_KERNEL);
> + if (!cells_attrs) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
> + if (!attrs) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + /* Initialize each attribute to take the name and size of the cell */
> + list_for_each_entry(entry, &nvmem->cells, node) {
> + sysfs_bin_attr_init(&attrs[i]);
> + attrs[i].attr.name = devm_kstrdup(&nvmem->dev, entry->name, GFP_KERNEL);
> + attrs[i].attr.mode = 0444;
> + attrs[i].size = entry->bytes;
> + attrs[i].read = &nvmem_cell_attr_read;
> + attrs[i].private = entry;
> + if (!attrs[i].attr.name) {
> + ret = -ENOMEM;
> + goto unlock_mutex;
> + }
> +
> + cells_attrs[i] = &attrs[i];
> + i++;
> + }
> +
> + nvmem_cells_group.bin_attrs = cells_attrs;
> +
> +unlock_mutex:
> + mutex_unlock(&nvmem_mutex);
> +
> + return ret;
> +}
> +
> #else /* CONFIG_NVMEM_SYSFS */
>
> static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
> @@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> if (rval)
> goto err_remove_cells;
>
> +#ifdef CONFIG_NVMEM_SYSFS
> + rval = nvmem_populate_sysfs_cells(nvmem);
> + if (rval)
> + goto err_remove_cells;

This breaks nvmem / efuse devices with multiple cells that share the
same name. Something like this in DT:

efuse: efuse@11f10000 {
        compatible = "mediatek,mt8183-efuse",
                     "mediatek,efuse";
        reg = <0 0x11f10000 0 0x1000>;
        #address-cells = <1>;
        #size-cells = <1>;

        thermal_calibration: calib@180 {
                reg = <0x180 0xc>;
        };

        mipi_tx_calibration: calib@190 {
                reg = <0x190 0xc>;
        };

        svs_calibration: calib@580 {
                reg = <0x580 0x64>;
        };
};

creates three cells in DT, all named "calib", and sysfs will complain:

sysfs: cannot create duplicate filename '/devices/platform/soc/11f10000.efuse/nvmem1/cells/calib'
mediatek,efuse: probe of 11f10000.efuse failed with error -17

This causes the MT8183-based Chromebooks to lose display capability,
among other things.

The problem lies in the nvmem DT parsing code, where the cell name is
derived from the node name, without including the address portion.
However I'm not sure we can change that, since it could be considered
ABI?


ChenYu

> +#endif
> +
> dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
>
> rval = device_add(&nvmem->dev);
> --
> 2.34.1
>

2023-07-23 20:09:35

by John Thomson

[permalink] [raw]
Subject: Re: [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs

Hi Miquel,

On Mon, 17 Jul 2023, at 07:51, Miquel Raynal wrote:
> The binary content of nvmem devices is available to the user so in the
> easiest cases, finding the content of a cell is rather easy as it is
> just a matter of looking at a known and fixed offset. However, nvmem
> layouts have been recently introduced to cope with more advanced
> situations, where the offset and size of the cells is not known in
> advance or is dynamic. When using layouts, more advanced parsers are
> used by the kernel in order to give direct access to the content of each
> cell regardless of their position/size in the underlying device, but
> these information were not accessible to the user.
>
> By exposing the nvmem cells to the user through a dedicated cell/ folder
> containing one file per cell, we provide a straightforward access to
> useful user information without the need for re-writing a userland
> parser. Content of nvmem cells is usually: product names, manufacturing
> date, MAC addresses, etc,
>
> Signed-off-by: Miquel Raynal <[email protected]>
> Reviewed-by: Greg Kroah-Hartman <[email protected]>
> ---
> Documentation/ABI/testing/sysfs-nvmem-cells | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
> create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
>
> diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells
> b/Documentation/ABI/testing/sysfs-nvmem-cells
> new file mode 100644
> index 000000000000..b2d15a8d36e5
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-nvmem-cells
> @@ -0,0 +1,19 @@
> +What: /sys/bus/nvmem/devices/.../cells/<cell-name>
> +Date: May 2023
> +KernelVersion: 6.5
> +Contact: Miquel Raynal <[email protected]>
> +Description:
> + The "cells" folder contains one file per cell exposed by
> + the nvmem device. The name of the file is the cell name.

Could we consider using a file within a folder (name defined by cell properties) to access the cell bytes?
Example (pick the best path and filename):
/sys/bus/nvmem/devices/.../cells/<cell-name>/bytes

That way, it is much easier to expand this at a later stage,
like adding an of_node link at
/sys/bus/nvmem/devices/.../cells/<cell-name>/of_node
or exposing other nvmem cell properties.

This is particularly relevant given the cell-name alone does not always
uniquely represent a cell on an nvmem device.
https://lore.kernel.org/lkml/[email protected]/
https://lore.kernel.org/lkml/[email protected]/

> + The length of the file is the size of the cell (when
> + known). The content of the file is the binary content of
> + the cell (may sometimes be ASCII, likely without
> + trailing character).
> + Note: This file is only present if CONFIG_NVMEM_SYSFS
> + is enabled.
> +
> + Example::
> +
> + hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name
> + 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
> + 0000000a
> --
> 2.34.1

Cheers,

--
John Thomson

2023-07-31 16:21:35

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs

Hi John,

Srinivas, a question for you below.

[email protected] wrote on Sun, 23 Jul 2023 19:39:50
+0000:

> Hi Miquel,
>
> On Mon, 17 Jul 2023, at 07:51, Miquel Raynal wrote:
> > The binary content of nvmem devices is available to the user so in the
> > easiest cases, finding the content of a cell is rather easy as it is
> > just a matter of looking at a known and fixed offset. However, nvmem
> > layouts have been recently introduced to cope with more advanced
> > situations, where the offset and size of the cells is not known in
> > advance or is dynamic. When using layouts, more advanced parsers are
> > used by the kernel in order to give direct access to the content of each
> > cell regardless of their position/size in the underlying device, but
> > these information were not accessible to the user.
> >
> > By exposing the nvmem cells to the user through a dedicated cell/ folder
> > containing one file per cell, we provide a straightforward access to
> > useful user information without the need for re-writing a userland
> > parser. Content of nvmem cells is usually: product names, manufacturing
> > date, MAC addresses, etc,
> >
> > Signed-off-by: Miquel Raynal <[email protected]>
> > Reviewed-by: Greg Kroah-Hartman <[email protected]>
> > ---
> > Documentation/ABI/testing/sysfs-nvmem-cells | 19 +++++++++++++++++++
> > 1 file changed, 19 insertions(+)
> > create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
> >
> > diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells
> > b/Documentation/ABI/testing/sysfs-nvmem-cells
> > new file mode 100644
> > index 000000000000..b2d15a8d36e5
> > --- /dev/null
> > +++ b/Documentation/ABI/testing/sysfs-nvmem-cells
> > @@ -0,0 +1,19 @@
> > +What: /sys/bus/nvmem/devices/.../cells/<cell-name>
> > +Date: May 2023
> > +KernelVersion: 6.5
> > +Contact: Miquel Raynal <[email protected]>
> > +Description:
> > + The "cells" folder contains one file per cell exposed by
> > + the nvmem device. The name of the file is the cell name.
>
> Could we consider using a file within a folder (name defined by cell propertys) to access the cell bytes?
> Example (pick the best path and filename):
> /sys/bus/nvmem/devices/.../cells/<cell-name>/bytes
>
> That way, it is much easier to expand this at a later stage,
> like adding an of_node link at
> /sys/bus/nvmem/devices/.../cells/<cell-name>/of_node
> or exposing other nvmem cell properties.

I have no strong opinion. Srinivas, what do you prefer? I'm fine either
way. I like the simplicity of the current approach more, but it's true
that it is easier to make it grow if we follow John's idea.

> This is particularly relevant given the cell-name alone does not always
> uniquely represent a cell on an nvmem device.
> https://lore.kernel.org/lkml/[email protected]/

It seems like this is gonna be fixed by suffixing @<offset> to the
name, as it is gonna be needed whatever solution we choose anyway.

> https://lore.kernel.org/lkml/[email protected]/
>
> > + The length of the file is the size of the cell (when
> > + known). The content of the file is the binary content of
> > + the cell (may sometimes be ASCII, likely without
> > + trailing character).
> > + Note: This file is only present if CONFIG_NVMEM_SYSFS
> > + is enabled.
> > +
> > + Example::
> > +
> > + hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name
> > + 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
> > + 0000000a
> > --
> > 2.34.1
>
> Cheers,
>


Thanks,
Miquèl

2023-07-31 17:48:57

by Miquel Raynal

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Chen-Yu,

> > static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
> > @@ -1006,6 +1101,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > if (rval)
> > goto err_remove_cells;
> >
> > +#ifdef CONFIG_NVMEM_SYSFS
> > + rval = nvmem_populate_sysfs_cells(nvmem);
> > + if (rval)
> > + goto err_remove_cells;
>
> This breaks nvmem / efuse devices with multiple cells that share the
> same name. Something like this in DT:
>
> efuse: efuse@11f10000 {
> compatible = "mediatek,mt8183-efuse",
> "mediatek,efuse";
> reg = <0 0x11f10000 0 0x1000>;
> #address-cells = <1>;
> #size-cells = <1>;
> thermal_calibration: calib@180 {
> reg = <0x180 0xc>;
> };
>
> mipi_tx_calibration: calib@190 {
> reg = <0x190 0xc>;
> };
>
> svs_calibration: calib@580 {
> reg = <0x580 0x64>;
> };
> };
>
> creates three cells, all named DT, and sysfs will complain:
>
> sysfs: cannot create duplicate filename '/devices/platform/soc/11f10000.efuse/nvmem1/cells/calib'
> mediatek,efuse: probe of 11f10000.efuse failed with error -17
>
> This causes the MT8183-based Chromebooks to lose display capability,
> among other things.

Sorry for the breakage; I did not identify this case, but you're right,
it is currently handled incorrectly.

> The problem lies in the nvmem DT parsing code, where the cell name is
> derived from the node name, without including the address portion.
> However I'm not sure we can change that, since it could be considered
> ABI?

I would be in favor of suffixing the cell names anyway, as they have
not been exposed to userspace at all yet (well, not for more than a
couple of days in -next). A hedged sketch of what this could look like
follows.
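
A minimal sketch of the suffixing idea, assuming the cell entry exposes
its offset (the exact field and formatting are illustrative, not a
final implementation), applied to the attribute initialization loop of
patch 3/3:

/* Build a unique sysfs name such as "calib@180" instead of "calib" */
attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL, "%s@%x",
                                    entry->name, entry->offset);
if (!attrs[i].attr.name) {
        ret = -ENOMEM;
        goto unlock_mutex;
}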

Thanks,
Miquèl

2023-07-31 19:23:53

by Miquel Raynal

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Greg,

[email protected] wrote on Mon, 17 Jul 2023 18:59:52 +0200:

> On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:
> > Hi Greg,
> >
> > [email protected] wrote on Mon, 17 Jul 2023 16:32:09 +0200:
> >
> > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> > > > The binary content of nvmem devices is available to the user so in the
> > > > easiest cases, finding the content of a cell is rather easy as it is
> > > > just a matter of looking at a known and fixed offset. However, nvmem
> > > > layouts have been recently introduced to cope with more advanced
> > > > situations, where the offset and size of the cells is not known in
> > > > advance or is dynamic. When using layouts, more advanced parsers are
> > > > used by the kernel in order to give direct access to the content of each
> > > > cell, regardless of its position/size in the underlying
> > > > device. Unfortunately, these information are not accessible by users,
> > > > unless by fully re-implementing the parser logic in userland.
> > > >
> > > > Let's expose the cells and their content through sysfs to avoid these
> > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > > enabled for this support to be available.
> > > >
> > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > > group member will be filled at runtime only when relevant and will
> > > > remain empty otherwise. In this case, as the cells attribute group will
> > > > be empty, it will not lead to any additional folder/file creation.
> > > >
> > > > Exposed cells are read-only. There is, in practice, everything in the
> > > > core to support a write path, but as I don't see any need for that, I
> > > > prefer to keep the interface simple (and probably safer). The interface
> > > > is documented as being in the "testing" state which means we can later
> > > > add a write attribute if though relevant.
> > > >
> > > > There is one limitation though: if a layout is built as a module but is
> > > > not properly installed in the system and loaded manually with insmod
> > > > while the nvmem device driver was built-in, the cells won't appear in
> > > > sysfs. But if done like that, the cells won't be usable by the built-in
> > > > kernel drivers anyway.
> > >
> > > Wait, what? That should not be an issue here, if so, then this change
> > > is not correct and should be fixed as this is NOT an issue for sysfs
> > > (otherwise the whole tree wouldn't work.)
> > >
> > > Please fix up your dependancies if this is somehow not working properly.
> >
> > I'm not sure I fully get your point.
> >
> > There is no way we can describe any dependency between a storage device
> > driver and an nvmem layout. NVMEM is a pure software abstraction, the
> > layout that will be chosen depends on the device tree, but if the
> > layout has not been installed, there is no existing mechanism in
> > the kernel to prevent it from being loaded (how do you know it's
> > not on purpose?).
>
> Once a layout has been loaded, the sysfs files should show up, right?
> Otherwise what does a "layout" do? (hint, I have no idea, it's an odd
> term to me...)

Sorry for the latency in responding to these questions; I'll try to
clarify the situation.

We have:
- device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which
  typically probe and register their devices into the nvmem layer to
  expose their content through NVMEM.
- each registration in NVMEM leads to the creation of the relevant
  NVMEM cells, which can then be used by other device drivers
  (typically: a network controller retrieving a MAC address from an
  EEPROM through the generic NVMEM abstraction; a minimal consumer
  sketch is shown below).
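
To illustrate the second point, a hedged sketch of such a consumer
using the in-kernel NVMEM consumer API (the cell name "mac-address" and
the surrounding function are illustrative, not taken from an existing
driver):

#include <linux/nvmem-consumer.h>
#include <linux/slab.h>

static int foo_get_mac(struct device *dev, u8 mac[6])
{
        struct nvmem_cell *cell;
        size_t len;
        void *buf;

        /* Look up the cell by name, as wired up in the device tree */
        cell = nvmem_cell_get(dev, "mac-address");
        if (IS_ERR(cell))
                return PTR_ERR(cell);

        buf = nvmem_cell_read(cell, &len);
        nvmem_cell_put(cell);
        if (IS_ERR(buf))
                return PTR_ERR(buf);

        if (len >= 6)
                memcpy(mac, buf, 6);
        kfree(buf);

        return len >= 6 ? 0 : -EINVAL;
}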

We recently covered a slightly new case: the NVMEM cells can be at
random places in the storage devices, so we need a "dynamic" way to
discover them: this is the purpose of the NVMEM layouts. We know cell X
is in the device, we just don't know where exactly it is at compile
time; the layout driver will discover it dynamically for us at runtime.

While the "static cells" parser is built into the NVMEM subsystem, you
explicitly asked to have the layouts modularized. This means that
registering a storage device in nvmem while no layout driver has been
inserted yet is now a real scenario. We cannot describe any dependency
between a storage device and a layout driver. We cannot defer the probe
either, because device drivers which don't get access to their NVMEM
cell are responsible for choosing what to do (most of the time, the
idea is to fall back to a default value to avoid failing the probe for
no reason).

So to answer your original question:

> Once a layout has been loaded, the sysfs files should show up, right?

No. The layouts are kind of "libraries" that the NVMEM subsystem uses
to try exposing cells *when* a new device is registered in NVMEM (not
later). The registration of an NVMEM layout does not trigger any new
parsing, because that is not how the NVMEM subsystem was designed.

I must emphasize that if the layout driver is installed in
/lib/modules/ there is no problem: it will be loaded with the usermode
helper. But if it is not, we can very well have the layout driver
inserted afterwards, and this case, while possible in practice, is
irrelevant from a driver standpoint. It does not make any sense to have
these cells created "after", because they are mostly used during
probes. An easy workaround would be to unregister and then register
again the underlying storage device driver.

Do these explanations clarify the situation?

Thanks,
Miquèl

2023-08-01 10:17:23

by Srinivas Kandagatla

Subject: Re: [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs



On 31/07/2023 16:51, Miquel Raynal wrote:
> Hi John,
>
> Srinivas, a question for you below.
>
> [email protected] wrote on Sun, 23 Jul 2023 19:39:50
> +0000:
>
>> Hi Miquel,
>>
>> On Mon, 17 Jul 2023, at 07:51, Miquel Raynal wrote:
>>> The binary content of nvmem devices is available to the user so in the
>>> easiest cases, finding the content of a cell is rather easy as it is
>>> just a matter of looking at a known and fixed offset. However, nvmem
>>> layouts have been recently introduced to cope with more advanced
>>> situations, where the offset and size of the cells is not known in
>>> advance or is dynamic. When using layouts, more advanced parsers are
>>> used by the kernel in order to give direct access to the content of each
>>> cell regardless of their position/size in the underlying device, but
>>> these information were not accessible to the user.
>>>
>>> By exposing the nvmem cells to the user through a dedicated cell/ folder
>>> containing one file per cell, we provide a straightforward access to
>>> useful user information without the need for re-writing a userland
>>> parser. Content of nvmem cells is usually: product names, manufacturing
>>> date, MAC addresses, etc,
>>>
>>> Signed-off-by: Miquel Raynal <[email protected]>
>>> Reviewed-by: Greg Kroah-Hartman <[email protected]>
>>> ---
>>> Documentation/ABI/testing/sysfs-nvmem-cells | 19 +++++++++++++++++++
>>> 1 file changed, 19 insertions(+)
>>> create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
>>>
>>> diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells
>>> b/Documentation/ABI/testing/sysfs-nvmem-cells
>>> new file mode 100644
>>> index 000000000000..b2d15a8d36e5
>>> --- /dev/null
>>> +++ b/Documentation/ABI/testing/sysfs-nvmem-cells
>>> @@ -0,0 +1,19 @@
>>> +What: /sys/bus/nvmem/devices/.../cells/<cell-name>
>>> +Date: May 2023
>>> +KernelVersion: 6.5
>>> +Contact: Miquel Raynal <[email protected]>
>>> +Description:
>>> + The "cells" folder contains one file per cell exposed by
>>> + the nvmem device. The name of the file is the cell name.
>>
>> Could we consider using a file within a folder (name defined by cell propertys) to access the cell bytes?
>> Example (pick the best path and filename):
>> /sys/bus/nvmem/devices/.../cells/<cell-name>/bytes
>>
>> That way, it is much easier to expand this at a later stage,
>> like adding an of_node link at
>> /sys/bus/nvmem/devices/.../cells/<cell-name>/of_node
>> or exposing other nvmem cell properties.
>
> I have no strong opinion. Srinivas what do you prefer? I'm fine either
> ways. I like the simplicity of the current approach more, but it's true
> that it is more easy to make it grow if we follow John idea.

Sounds sensible to me.


>
>> This is particularly relevant given the cell-name alone does not always
>> uniquely represent a cell on an nvmem device.
>> https://lore.kernel.org/lkml/[email protected]/
>
> It seems like this is gonna be fixed by suffixing @<offset> to the
> name, as anyway whatever solution we choose, it is gonna be needed.

we have to be careful here not to break the nvmem_cell_get() users.


--srini


>
>> https://lore.kernel.org/lkml/[email protected]/
>>
>>> + The length of the file is the size of the cell (when
>>> + known). The content of the file is the binary content of
>>> + the cell (may sometimes be ASCII, likely without
>>> + trailing character).
>>> + Note: This file is only present if CONFIG_NVMEM_SYSFS
>>> + is enabled.
>>> +
>>> + Example::
>>> +
>>> + hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name
>>> + 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
>>> + 0000000a
>>> --
>>> 2.34.1
>>
>> Cheers,
>>
>
>
> Thanks,
> Miquèl

2023-08-01 10:50:20

by Greg Kroah-Hartman

Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote:
> Hi Greg,
>
> [email protected] wrote on Mon, 17 Jul 2023 18:59:52 +0200:
>
> > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:
> > > Hi Greg,
> > >
> > > [email protected] wrote on Mon, 17 Jul 2023 16:32:09 +0200:
> > >
> > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> > > > > The binary content of nvmem devices is available to the user so in the
> > > > > easiest cases, finding the content of a cell is rather easy as it is
> > > > > just a matter of looking at a known and fixed offset. However, nvmem
> > > > > layouts have been recently introduced to cope with more advanced
> > > > > situations, where the offset and size of the cells is not known in
> > > > > advance or is dynamic. When using layouts, more advanced parsers are
> > > > > used by the kernel in order to give direct access to the content of each
> > > > > cell, regardless of its position/size in the underlying
> > > > > device. Unfortunately, these information are not accessible by users,
> > > > > unless by fully re-implementing the parser logic in userland.
> > > > >
> > > > > Let's expose the cells and their content through sysfs to avoid these
> > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > > > enabled for this support to be available.
> > > > >
> > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > > > group member will be filled at runtime only when relevant and will
> > > > > remain empty otherwise. In this case, as the cells attribute group will
> > > > > be empty, it will not lead to any additional folder/file creation.
> > > > >
> > > > > Exposed cells are read-only. There is, in practice, everything in the
> > > > > core to support a write path, but as I don't see any need for that, I
> > > > > prefer to keep the interface simple (and probably safer). The interface
> > > > > is documented as being in the "testing" state which means we can later
> > > > > add a write attribute if though relevant.
> > > > >
> > > > > There is one limitation though: if a layout is built as a module but is
> > > > > not properly installed in the system and loaded manually with insmod
> > > > > while the nvmem device driver was built-in, the cells won't appear in
> > > > > sysfs. But if done like that, the cells won't be usable by the built-in
> > > > > kernel drivers anyway.
> > > >
> > > > Wait, what? That should not be an issue here, if so, then this change
> > > > is not correct and should be fixed as this is NOT an issue for sysfs
> > > > (otherwise the whole tree wouldn't work.)
> > > >
> > > > Please fix up your dependancies if this is somehow not working properly.
> > >
> > > I'm not sure I fully get your point.
> > >
> > > There is no way we can describe any dependency between a storage device
> > > driver and an nvmem layout. NVMEM is a pure software abstraction, the
> > > layout that will be chosen depends on the device tree, but if the
> > > layout has not been installed, there is no existing mechanism in
> > > the kernel to prevent it from being loaded (how do you know it's
> > > not on purpose?).
> >
> > Once a layout has been loaded, the sysfs files should show up, right?
> > Otherwise what does a "layout" do? (hint, I have no idea, it's an odd
> > term to me...)
>
> Sorry for the latency in responding to these questions, I'll try to
> clarify the situation.
>
> We have:
> - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which
> typically probe and register their devices into the nvmem
> layer to expose their content through NVMEM.
> - each registration in NVMEM leads to the creation of the relevant
> NVMEM cells which can then be used by other device drivers
> (typically: a network controller retrieving a MAC address from an
> EEPROM through the generic NVMEM abstraction).


So is a "cell" here a device in the device model? Or something else?

> We recently covered a slightly new case: the NVMEM cells can be in
> random places in the storage devices so we need a "dynamic" way to
> discover them: this is the purpose of the NVMEM layouts. We know cell X
> is in the device, we just don't know where it is exactly at compile
> time, the layout driver will discover it dynamically for us at runtime.

So you then create the needed device when it is found?

> While the "static cells" parser is built-in the NVMEM subsystem, you
> explicitly asked to have the layouts modularized. This means
> registering a storage device in nvmem while no layout driver has been
> inserted yet is now a scenario. We cannot describe any dependency
> between a storage device and a layout driver. We cannot defer the probe
> either because device drivers which don't get access to their NVMEM
> cell are responsible of choosing what to do (most of the time, the idea
> is to fallback to a default value to avoid failing the probe for no
> reason).
>
> So to answer your original question:
>
> > Once a layout has been loaded, the sysfs files should show up, right?
>
> No. The layouts are kind of "libraries" that the NVMEM subsystem uses
> to try exposing cells *when* a new device is registered in NVMEM (not
> later). The registration of an NVMEM layout does not trigger any new
> parsing, because that is not how the NVMEM subsystem was designed.

So they are a type of "class" right? Why not just use class devices
then?

> I must emphasize that if the layout driver is installed in
> /lib/modules/ there is no problem, it will be loaded with
> usermodehelper. But if it is not, we can very well have the layout
> driver inserted after, and this case, while in practice possible, is
> irrelevant from a driver standpoint. It does not make any sense to have
> these cells created "after" because they are mostly used during probes.
> An easy workaround would be to unregister/register again the underlying
> storage device driver.

We really do not support any situation where a module is NOT in the
proper place when device discovery happens. So this shouldn't be an
issue, yet you all mention it? So how is it happening?

And if you used the class code, would that work better as mentioned
above?

thanks

greg k-h

2023-08-01 17:06:03

by Miquel Raynal

Subject: Re: [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs

Hello,

[email protected] wrote on Tue, 1 Aug 2023 10:06:14 +0100:

> On 31/07/2023 16:51, Miquel Raynal wrote:
> > Hi John,
> >
> > Srinivas, a question for you below.
> >
> > [email protected] wrote on Sun, 23 Jul 2023 19:39:50
> > +0000:
> >
> >> Hi Miquel,
> >>
> >> On Mon, 17 Jul 2023, at 07:51, Miquel Raynal wrote:
> >>> The binary content of nvmem devices is available to the user so in the
> >>> easiest cases, finding the content of a cell is rather easy as it is
> >>> just a matter of looking at a known and fixed offset. However, nvmem
> >>> layouts have been recently introduced to cope with more advanced
> >>> situations, where the offset and size of the cells is not known in
> >>> advance or is dynamic. When using layouts, more advanced parsers are
> >>> used by the kernel in order to give direct access to the content of each
> >>> cell regardless of their position/size in the underlying device, but
> >>> these information were not accessible to the user.
> >>>
> >>> By exposing the nvmem cells to the user through a dedicated cell/ folder
> >>> containing one file per cell, we provide a straightforward access to
> >>> useful user information without the need for re-writing a userland
> >>> parser. Content of nvmem cells is usually: product names, manufacturing
> >>> date, MAC addresses, etc,
> >>>
> >>> Signed-off-by: Miquel Raynal <[email protected]>
> >>> Reviewed-by: Greg Kroah-Hartman <[email protected]>
> >>> ---
> >>> Documentation/ABI/testing/sysfs-nvmem-cells | 19 +++++++++++++++++++
> >>> 1 file changed, 19 insertions(+)
> >>> create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
> >>>
> >>> diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells
> >>> b/Documentation/ABI/testing/sysfs-nvmem-cells
> >>> new file mode 100644
> >>> index 000000000000..b2d15a8d36e5
> >>> --- /dev/null
> >>> +++ b/Documentation/ABI/testing/sysfs-nvmem-cells
> >>> @@ -0,0 +1,19 @@
> >>> +What: /sys/bus/nvmem/devices/.../cells/<cell-name>
> >>> +Date: May 2023
> >>> +KernelVersion: 6.5
> >>> +Contact: Miquel Raynal <[email protected]>
> >>> +Description:
> >>> + The "cells" folder contains one file per cell exposed by
> >>> + the nvmem device. The name of the file is the cell name.
> >>
> >> Could we consider using a file within a folder (name defined by cell properties) to access the cell bytes?
> >> Example (pick the best path and filename):
> >> /sys/bus/nvmem/devices/.../cells/<cell-name>/bytes
> >>
> >> That way, it is much easier to expand this at a later stage,
> >> like adding an of_node link at
> >> /sys/bus/nvmem/devices/.../cells/<cell-name>/of_node
> >> or exposing other nvmem cell properties.
> >
> > I have no strong opinion. Srinivas, what do you prefer? I'm fine
> > either way. I like the simplicity of the current approach more, but
> > it's true that it is easier to extend if we follow John's idea.
>
> Sounds sensible to me.

I've looked a bit more in depth at how to do that and, to be honest, I
did not find an easy way. Attributes and attribute groups are meant to
be used with only one level of indirection, and adding another level
seems considerably more complex. Maybe I'm wrong; if you have a piece
of code doing that, please share it and I'll do my best to integrate
it, otherwise I think I'll keep the simplest approach.
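
To give an idea of the flat approach, here is a minimal sketch of a
single "cells" binary attribute group; the names and the fixed array
size are illustrative, not the actual core code:

#include <linux/sysfs.h>

/* Illustrative upper bound so the array has a complete type here; the
 * real code would size and fill the list at runtime.
 */
#define EXAMPLE_MAX_CELLS 32

/* NULL-terminated list of per-cell read-only binary attributes. */
static struct bin_attribute *example_cells_attrs[EXAMPLE_MAX_CELLS + 1];

/* One flat group: a single "cells" folder with one file per cell. A
 * nested cells/<name>/bytes layout would need one kobject or named
 * group per cell, which this API does not provide directly.
 */
static const struct attribute_group example_cells_group = {
	.name		= "cells",
	.bin_attrs	= example_cells_attrs,
};

An empty list keeps the group empty, so no additional folder or file
gets created for devices without cells.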

> >> This is particularly relevant given the cell-name alone does not always
> >> uniquely represent a cell on an nvmem device.
> >> https://lore.kernel.org/lkml/[email protected]/
> >
> > It seems like this is going to be fixed by suffixing @<offset> to the
> > name, as it is going to be needed whatever solution we choose.
>
> we have to be careful here not to break the nvmem_cell_get() users.

I believe this only applies to sysfs names, so nvmem_cell_get(), which
uses the real cell names, should not be affected.
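
Concretely, I have something along these lines in mind for building the
sysfs file names (a sketch only; the helper is hypothetical):

#include <linux/device.h>

/* Sketch: build a unique sysfs file name by suffixing the cell offset,
 * e.g. "mac-address@c". Lookups through nvmem_cell_get() keep using
 * the bare cell name. cell_sysfs_name() is a hypothetical helper.
 */
static const char *cell_sysfs_name(struct device *dev, const char *name,
				   unsigned int offset)
{
	return devm_kasprintf(dev, GFP_KERNEL, "%s@%x", name, offset);
}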

Thanks,
Miquèl

2023-08-01 18:29:04

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs

Hi Greg,

[email protected] wrote on Tue, 1 Aug 2023 11:56:40 +0200:

> On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote:
> > Hi Greg,
> >
> > [email protected] wrote on Mon, 17 Jul 2023 18:59:52 +0200:
> >
> > > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:
> > > > Hi Greg,
> > > >
> > > > [email protected] wrote on Mon, 17 Jul 2023 16:32:09 +0200:
> > > >
> > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:
> > > > > > The binary content of nvmem devices is available to the user, so in
> > > > > > the easiest cases, finding the content of a cell is rather easy as
> > > > > > it is just a matter of looking at a known and fixed offset. However,
> > > > > > nvmem layouts have been recently introduced to cope with more
> > > > > > advanced situations, where the offset and size of the cells are not
> > > > > > known in advance or are dynamic. When using layouts, more advanced
> > > > > > parsers are used by the kernel in order to give direct access to the
> > > > > > content of each cell, regardless of its position/size in the
> > > > > > underlying device. Unfortunately, this information is not accessible
> > > > > > to users without fully re-implementing the parser logic in userland.
> > > > > >
> > > > > > Let's expose the cells and their content through sysfs to avoid these
> > > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > > > > enabled for this support to be available.
> > > > > >
> > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > > > > group member will be filled at runtime only when relevant and will
> > > > > > remain empty otherwise. In this case, as the cells attribute group will
> > > > > > be empty, it will not lead to any additional folder/file creation.
> > > > > >
> > > > > > Exposed cells are read-only. There is, in practice, everything in the
> > > > > > core to support a write path, but as I don't see any need for that, I
> > > > > > prefer to keep the interface simple (and probably safer). The
> > > > > > interface is documented as being in the "testing" state, which means
> > > > > > we can later add a write attribute if thought relevant.
> > > > > >
> > > > > > There is one limitation though: if a layout is built as a module but
> > > > > > is not properly installed in the system and is instead loaded
> > > > > > manually with insmod while the nvmem device driver was built-in, the
> > > > > > cells won't appear in sysfs. But if done like that, the cells won't
> > > > > > be usable by the built-in kernel drivers anyway.
> > > > >
> > > > > Wait, what? That should not be an issue here, if so, then this change
> > > > > is not correct and should be fixed as this is NOT an issue for sysfs
> > > > > (otherwise the whole tree wouldn't work.)
> > > > >
> > > > > Please fix up your dependancies if this is somehow not working properly.
> > > >
> > > > I'm not sure I fully get your point.
> > > >
> > > > There is no way we can describe any dependency between a storage device
> > > > driver and an nvmem layout. NVMEM is a pure software abstraction: the
> > > > layout that will be chosen depends on the device tree, but if the
> > > > layout module has not been installed, there is no existing mechanism
> > > > in the kernel to prevent the storage device from probing without it
> > > > (how do you know the layout's absence is not on purpose?).
> > >
> > > Once a layout has been loaded, the sysfs files should show up, right?
> > > Otherwise what does a "layout" do? (hint, I have no idea, it's an odd
> > > term to me...)
> >
> > Sorry for the delay in responding to these questions; I'll try to
> > clarify the situation.
> >
> > We have:
> > - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which
> > typically probe and register their devices into the nvmem
> > layer to expose their content through NVMEM.
> > - each registration in NVMEM leads to the creation of the relevant
> > NVMEM cells, which can then be used by other device drivers
> > (typically: a network controller retrieving a MAC address from an
> > EEPROM through the generic NVMEM abstraction).
>
>
> So is a "cell" here a device in the device model? Or something else?

It is not a device in the device model, but I am actually wondering if
it should be one. I discussed with Rafal another issue in the current
design (the dependence on a layout driver, which might defer a storage
device probe forever) which might be solved if the core handled these
layouts differently.
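
For context, the layout abstraction being discussed is roughly shaped
like the following sketch (a simplified, illustrative rendition, not
the exact in-tree structure):

#include <linux/list.h>
#include <linux/mod_devicetable.h>

struct device;
struct nvmem_device;

/* Illustrative shape of an NVMEM layout: matched against the device
 * tree through of_match_table, and asked to parse the device and
 * register its cells through add_cells() at NVMEM registration time.
 */
struct example_nvmem_layout {
	const char *name;
	const struct of_device_id *of_match_table;
	int (*add_cells)(struct device *dev, struct nvmem_device *nvmem);
	struct list_head node; /* entry in the list of registered layouts */
};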

> > We recently covered a slightly new case: the NVMEM cells can be in
> > random places in the storage devices so we need a "dynamic" way to
> > discover them: this is the purpose of the NVMEM layouts. We know cell X
> > is in the device, we just don't know where it is exactly at compile
> > time; the layout driver will discover it dynamically for us at runtime.
>
> So you then create the needed device when it is found?

We don't create devices; we match the layouts with the NVMEM devices
thanks to the of_ matching logic.
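
Conceptually, that matching step looks like this (illustrative only,
reusing the example_nvmem_layout shape sketched above; not the literal
core code):

#include <linux/list.h>
#include <linux/of.h>

/* Conceptual sketch: at NVMEM registration time, walk the registered
 * layouts and pick the first one whose of_match_table matches the
 * layout node of the storage device.
 */
static struct example_nvmem_layout *
example_match_layout(struct list_head *layouts, struct device_node *layout_np)
{
	struct example_nvmem_layout *l;

	list_for_each_entry(l, layouts, node)
		if (of_match_node(l->of_match_table, layout_np))
			return l;

	return NULL;
}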

> > While the "static cells" parser is built into the NVMEM subsystem,
> > you explicitly asked to have the layouts modularized. This means
> > registering a storage device in nvmem while no layout driver has
> > been inserted yet is now a real scenario. We cannot describe any
> > dependency between a storage device and a layout driver. We cannot
> > defer the probe either, because device drivers which don't get
> > access to their NVMEM cell are responsible for choosing what to do
> > (most of the time, the idea is to fall back to a default value to
> > avoid failing the probe for no reason).
> >
> > So to answer your original question:
> >
> > > Once a layout has been loaded, the sysfs files should show up, right?
> >
> > No. The layouts are kind of "libraries" that the NVMEM subsystem uses
> > to try exposing cells *when* a new device is registered in NVMEM (not
> > later). The registration of an NVMEM layout does not trigger any new
> > parsing, because that is not how the NVMEM subsystem was designed.
>
> So they are a type of "class" right? Why not just use class devices
> then?
>
> > I must emphasize that if the layout driver is installed in
> > /lib/modules/ there is no problem: it will be loaded by the usermode
> > helper. But if it is not, we can very well have the layout driver
> > inserted afterwards, and this case, while possible in practice, is
> > irrelevant from a driver standpoint. It does not make any sense to
> > have these cells created "after", because they are mostly used during
> > probes. An easy workaround would be to unregister and re-register the
> > underlying storage device driver.
>
> We really do not support any situation where a module is NOT in the
> proper place when device discovery happens.

Great, I didn't know. Then there is no issue.

> So this shouldn't be an
> issue, yet you all mention it? So how is it happening?

Just for transparency: I'm giving all the details I can.

I'll try to come up with something slightly different from the current
approach.

Thanks,
Miquèl