2023-08-08 17:29:23

by Miquel Raynal

Subject: [PATCH v9 0/7] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices at non-static locations. A common example of an NVMEM cell is
the MAC address a device must use. Sometimes the cell content is mainly
(or only) useful to the kernel, and sometimes it is not. Users might
also want to know the content of cells such as the manufacturing place
and date, the hardware version, the unique ID, etc. There are two
possibilities in this case: either users re-implement their own parser
to go through the whole device and search for the information they
want, or the kernel exposes the content of the cells when deemed
relevant. The second approach is clearly preferable, as it avoids
useless code duplication, so here is a series bringing NVMEM cell
content to the user through sysfs.

Here is a real life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code@77
00000000 54 57 |TW|
crc32@88
00000000 bb cd 51 98 |..Q.|
device-version@49
00000000 02 |.|
diag-version@80
00000000 56 31 2e 30 2e 30 |V1.0.0|
label-revision@4c
00000000 44 31 |D1|
mac-address@2c
00000000 18 be 92 13 9a 00 |......|
manufacture-date@34
00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
manufacturer@72
00000000 44 4e 49 |DNI|
num-macs@6e
00000000 00 40 |.@|
onie-version@61
00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
platform-name@50
00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
product-name@d
00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
serial-number@19
00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
vendor@7b
00000000 44 4e 49 |DNI|

This layout, with a cells/ folder containing one file per cell, has
been legitimately challenged by John Thomson. I am not against the idea
of having a sub-folder per cell, but I did not find a clean way to do
that, so for now I did not change the sysfs organization. If someone
really wants this other layout, please provide a code snippet which I
can integrate.

Current support does not include:
* Knowledge of the type of data (binary vs. ASCII); by default, all
cells are exposed in binary form.
* Write support.

Changes in v9:
* Hopefully fixed the creation of sysfs entries when describing the
cells using the legacy layout, as reported by Chen-Yu.
* Dropped the nvmem-specific device list and used the driver core list
instead as advised by Greg.

Changes in v8:
* Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
* Add a patch to return NULL when no layout is found (reported by Dan
Carpenter).
* Fixed the documentation as well as the cover letter regarding the
addition of addresses in the cell names.

Changes in v7:
* Rework the layouts registration mechanism to use the platform devices
logic.
* Fix the two issues reported by Daniel Golle and Chen-Yu Tsai; one of
the fixes consists in suffixing '@<offset>' to the cell names when
creating the sysfs files, in order to be sure they are all unique.
* Update the doc.

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
s|cells/ folder|"cells" folder|
Missing period at the end of the final note.
s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission; both submissions
are otherwise absolutely identical.

Changes in v5:
* Rebased on last -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Provide sysfs attributes a private member which is the entry itself to
avoid the need for looking up the nvmem device and then looping over
all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
core.
* Patch 2 is new: don't error out when an attribute group does not
contain any attributes; it's easier for developers to handle "empty"
directories this way. It avoids strange/bad workarounds and does not
cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs core
checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention the cells might become writable in the future in the
ABI documentation.
* Fix a wrong return value reported by Dan and kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.

Miquel Raynal (7):
nvmem: core: Create all cells before adding the nvmem device
nvmem: core: Return NULL when no nvmem layout is found
nvmem: core: Do not open-code existing functions
nvmem: core: Notify when a new layout is registered
nvmem: core: Rework layouts to become platform devices
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
drivers/nvmem/core.c | 270 +++++++++++++++++---
drivers/nvmem/layouts/onie-tlv.c | 39 ++-
drivers/nvmem/layouts/sl28vpd.c | 39 ++-
include/linux/nvmem-consumer.h | 4 +-
include/linux/nvmem-provider.h | 11 +-
6 files changed, 329 insertions(+), 55 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

--
2.34.1



2023-08-08 17:56:02

by Miquel Raynal

Subject: [PATCH v9 5/7] nvmem: core: Rework layouts to become platform devices

The current layout support was initially written without module support
in mind. When the requirement for module support arose, the existing
base was improved to support modularization, but a design flaw was
introduced in the process. With the existing implementation, when a
storage device registers into NVMEM, the core tries to hook a layout
(if any) and populates its cells immediately. This means that if the
hardware description expects a layout to be hooked up but no driver has
been provided for it, the storage device will fail to probe and will be
retried later from scratch. Technically, the layouts are more of a
"plus": even if we consider the hardware description to be correct, we
could still probe the storage device (especially if it contains the
rootfs).

One way to overcome this situation is to consider the layouts as
devices and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the matching layout driver, if any
- try to hook the NVMEM device with a layout in the notifier
And when a new layout is registered:
- try to hook all the existing NVMEM devices which are not yet hooked
to a layout with the new layout
This way, there is no strong ordering to enforce: any NVMEM device
creation or NVMEM layout driver insertion will be observed as a new
event which may lead to the creation of additional cells, without
disturbing the probes with costly (and sometimes endless) deferrals.
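
For illustration, here is a minimal sketch (not part of this patch) of
how a consumer could observe these events with the existing notifier
API; the callback and variable names are hypothetical:

#include <linux/notifier.h>
#include <linux/nvmem-consumer.h>

/* Hypothetical listener for the events described above. */
static int example_nvmem_event(struct notifier_block *nb,
			       unsigned long event, void *data)
{
	switch (event) {
	case NVMEM_ADD:		/* a new NVMEM device was registered */
	case NVMEM_LAYOUT_ADD:	/* a new layout driver was registered */
		/* Cells may have appeared: re-run pending lookups here. */
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}

static struct notifier_block example_nb = {
	.notifier_call = example_nvmem_event,
};

/* In init code: nvmem_register_notifier(&example_nb); */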

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 139 ++++++++++++++++++++++++-------
drivers/nvmem/layouts/onie-tlv.c | 39 +++++++--
drivers/nvmem/layouts/sl28vpd.c | 39 +++++++--
include/linux/nvmem-provider.h | 11 +--
4 files changed, 179 insertions(+), 49 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 4fb6d4d7fe40..32d9973df90b 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -23,6 +23,7 @@
struct nvmem_device {
struct module *owner;
struct device dev;
+ struct list_head node;
int stride;
int word_size;
int id;
@@ -76,6 +77,7 @@ static LIST_HEAD(nvmem_cell_tables);
static DEFINE_MUTEX(nvmem_lookup_mutex);
static LIST_HEAD(nvmem_lookup_list);

+struct notifier_block nvmem_nb;
static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);

static DEFINE_SPINLOCK(nvmem_layout_lock);
@@ -791,23 +793,16 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
{
struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
+ struct nvmem_layout *l, *layout = NULL;

layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
return NULL;

- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
spin_lock(&nvmem_layout_lock);

list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
+ if (of_match_node(l->dev->driver->of_match_table, layout_np)) {
if (try_module_get(l->owner))
layout = l;

@@ -864,7 +859,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;

layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev->driver->of_match_table, layout_np);

return match ? match->data : NULL;
}
@@ -883,6 +878,7 @@ EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
struct nvmem_device *nvmem_register(const struct nvmem_config *config)
{
struct nvmem_device *nvmem;
+ struct device_node *layout_np;
int rval;

if (!config->dev)
@@ -975,19 +971,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}

- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -1006,24 +989,27 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
- if (rval)
- goto err_remove_cells;
-
dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);

rval = device_add(&nvmem->dev);
if (rval)
goto err_remove_cells;

+ /* Populate layouts as devices */
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ rval = of_platform_populate(nvmem->dev.of_node, NULL, NULL, NULL);
+ if (rval)
+ goto err_remove_cells;
+ of_node_put(layout_np);
+ }
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -2125,13 +2111,106 @@ const char *nvmem_dev_name(struct nvmem_device *nvmem)
}
EXPORT_SYMBOL_GPL(nvmem_dev_name);

+static void nvmem_try_loading_layout_driver(struct nvmem_device *nvmem)
+{
+ struct device_node *layout_np;
+
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ of_request_module(layout_np);
+ of_node_put(layout_np);
+ }
+}
+
+static int nvmem_match_available_layout(struct nvmem_device *nvmem)
+{
+ int ret;
+
+ if (nvmem->layout)
+ return 0;
+
+ nvmem->layout = nvmem_layout_get(nvmem);
+ if (!nvmem->layout)
+ return 0;
+
+ ret = nvmem_add_cells_from_layout(nvmem);
+ if (ret) {
+ nvmem_layout_put(nvmem->layout);
+ nvmem->layout = NULL;
+ return ret;
+ }
+
+ return 0;
+}
+
+static int nvmem_dev_match_available_layout(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+
+ return nvmem_match_available_layout(nvmem);
+}
+
+static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
+{
+ return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
+}
+
+/*
+ * When an NVMEM device is registered, try to match against a layout and
+ * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
+ * which could use it properly expose their cells.
+ */
+static int nvmem_notifier_call(struct notifier_block *notifier,
+ unsigned long event_flags, void *context)
+{
+ struct nvmem_device *nvmem = NULL;
+ int ret;
+
+ switch (event_flags) {
+ case NVMEM_ADD:
+ nvmem = context;
+ break;
+ case NVMEM_LAYOUT_ADD:
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ if (nvmem) {
+ /*
+ * In case the nvmem device was built-in while the layout was
+ * built as a module, manually request loading the layout driver.
+ */
+ nvmem_try_loading_layout_driver(nvmem);
+
+ /* Populate the cells of the new nvmem device from its layout, if any */
+ ret = nvmem_match_available_layout(nvmem);
+ } else {
+ /* NVMEM devices might be "waiting" for this layout */
+ ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
+ }
+
+ if (ret)
+ return notifier_from_errno(ret);
+
+ return NOTIFY_OK;
+}
+
static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ nvmem_nb.notifier_call = &nvmem_notifier_call;
+ return nvmem_register_notifier(&nvmem_nb);
}

static void __exit nvmem_exit(void)
{
+ nvmem_unregister_notifier(&nvmem_nb);
bus_unregister(&nvmem_bus_type);
}

diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
index 59fc87ccfcff..3d54d3be2c93 100644
--- a/drivers/nvmem/layouts/onie-tlv.c
+++ b/drivers/nvmem/layouts/onie-tlv.c
@@ -13,6 +13,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>

#define ONIE_TLV_MAX_LEN 2048
#define ONIE_TLV_CRC_FIELD_SZ 6
@@ -226,18 +227,46 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int onie_tlv_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = onie_tlv_parse_table;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int onie_tlv_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);

-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct platform_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ },
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
};
-module_nvmem_layout_driver(onie_tlv_layout);
+module_platform_driver(onie_tlv_layout);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Miquel Raynal <[email protected]>");
diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
index 05671371f631..ad0c39fc7943 100644
--- a/drivers/nvmem/layouts/sl28vpd.c
+++ b/drivers/nvmem/layouts/sl28vpd.c
@@ -5,6 +5,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>
#include <uapi/linux/if_ether.h>

#define SL28VPD_MAGIC 'V'
@@ -135,18 +136,46 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int sl28vpd_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = sl28vpd_add_cells;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int sl28vpd_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);

-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct platform_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ },
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
};
-module_nvmem_layout_driver(sl28vpd_layout);
+module_platform_driver(sl28vpd_layout);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michael Walle <[email protected]>");
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index dae26295e6be..c72064780b50 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -154,8 +154,7 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
@@ -170,8 +169,7 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device *dev;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
@@ -243,9 +241,4 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
}

#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
--
2.34.1


2023-08-08 18:45:19

by Miquel Raynal

Subject: [PATCH v9 4/7] nvmem: core: Notify when a new layout is registered

Tell listeners a new layout was introduced and is now available.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 4 ++++
include/linux/nvmem-consumer.h | 2 ++
2 files changed, 6 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 257328887263..4fb6d4d7fe40 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -772,12 +772,16 @@ int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner)
list_add(&layout->node, &nvmem_layouts);
spin_unlock(&nvmem_layout_lock);

+ blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_ADD, layout);
+
return 0;
}
EXPORT_SYMBOL_GPL(__nvmem_layout_register);

void nvmem_layout_unregister(struct nvmem_layout *layout)
{
+ blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_REMOVE, layout);
+
spin_lock(&nvmem_layout_lock);
list_del(&layout->node);
spin_unlock(&nvmem_layout_lock);
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index 27373024856d..4523e4e83319 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -43,6 +43,8 @@ enum {
NVMEM_REMOVE,
NVMEM_CELL_ADD,
NVMEM_CELL_REMOVE,
+ NVMEM_LAYOUT_ADD,
+ NVMEM_LAYOUT_REMOVE,
};

#if IS_ENABLED(CONFIG_NVMEM)
--
2.34.1


2023-08-08 18:46:42

by Miquel Raynal

Subject: [PATCH v9 2/7] nvmem: core: Return NULL when no nvmem layout is found

Currently, of_nvmem_layout_get_container() returns NULL on error, or an
error pointer if either CONFIG_NVMEM or CONFIG_OF is turned off. We
should avoid this kind of mix for two reasons: to clarify the intent,
and to fix the !CONFIG_OF stub, which would otherwise almost certainly
break if we used this helper somewhere else. Let's just return NULL
when no layout is found; we don't need an error value here.
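
A caller-side sketch of what this unifies (illustrative, not from this
diff): with NULL as the only "no layout" value, a single check covers
every configuration.

/* Before: a config-independent caller had to handle both cases. */
layout_np = of_nvmem_layout_get_container(nvmem);
if (IS_ERR_OR_NULL(layout_np))
	return 0;

/* After: NULL is the only "no layout found" value. */
layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
	return 0;	/* no layout described: not an error */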

Link: https://staticthinking.wordpress.com/2022/08/01/mixing-error-pointers-and-null/
Fixes: 266570f496b9 ("nvmem: core: introduce NVMEM layouts")
Reported-by: kernel test robot <[email protected]>
Reported-by: Dan Carpenter <[email protected]>
Closes: https://lore.kernel.org/r/[email protected]/
Signed-off-by: Miquel Raynal <[email protected]>

---

Backporting to stable kernels is likely not needed, as I believe the
error pointer will be discarded "magically" by the of/ code.
---
include/linux/nvmem-consumer.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index fa030d93b768..27373024856d 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -256,7 +256,7 @@ static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np,
static inline struct device_node *
of_nvmem_layout_get_container(struct nvmem_device *nvmem)
{
- return ERR_PTR(-EOPNOTSUPP);
+ return NULL;
}
#endif /* CONFIG_NVMEM && CONFIG_OF */

--
2.34.1


2023-08-08 18:53:44

by Miquel Raynal

Subject: [PATCH v9 7/7] nvmem: core: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases finding the content of a cell is just a matter of looking
at a known and fixed offset. However, nvmem layouts have recently been
introduced to cope with more advanced situations, where the offset and
size of the cells are not known in advance or are dynamic. When using
layouts, more advanced parsers are used by the kernel in order to give
direct access to the content of each cell, regardless of its
position/size in the underlying device. Unfortunately, this information
is not accessible to users, short of fully re-implementing the parser
logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means a write
attribute can still be added later if deemed relevant.
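
For illustration (not part of the patch), reading a cell from userspace
then boils down to a plain file read; the path below reuses the TN48m
example from the cover letter:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	ssize_t n;
	int fd;

	/* Cell file exposed by this series (path from the cover letter). */
	fd = open("/sys/bus/nvmem/devices/1-00563/cells/product-name@d",
		  O_RDONLY);
	if (fd < 0)
		return 1;

	n = read(fd, buf, sizeof(buf));	/* raw binary cell content */
	if (n > 0)
		fwrite(buf, 1, (size_t)n, stdout);
	close(fd);

	return 0;
}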

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 117 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 32d9973df90b..11b9e5cc0b45 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -43,6 +43,7 @@ struct nvmem_device {
struct gpio_desc *wp_gpio;
struct nvmem_layout *layout;
void *priv;
+ bool sysfs_cells_populated;
};

#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)
@@ -327,6 +328,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
return nvmem_bin_attr_get_umode(nvmem);
}

+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+ const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ struct nvmem_cell_entry *entry;
+ struct nvmem_cell *cell = NULL;
+ size_t cell_sz, read_len;
+ void *content;
+
+ entry = attr->private;
+ cell = nvmem_create_cell(entry, entry->name, 0);
+ if (IS_ERR(cell))
+ return PTR_ERR(cell);
+
+ if (!cell)
+ return -EINVAL;
+
+ content = nvmem_cell_read(cell, &cell_sz);
+ if (IS_ERR(content)) {
+ read_len = PTR_ERR(content);
+ goto destroy_cell;
+ }
+
+ read_len = min_t(unsigned int, cell_sz - pos, count);
+ memcpy(buf, content + pos, read_len);
+ kfree(content);
+
+destroy_cell:
+ kfree_const(cell->id);
+ kfree(cell);
+
+ return read_len;
+}
+
/* default read/write permissions */
static struct bin_attribute bin_attr_rw_nvmem = {
.attr = {
@@ -348,11 +386,21 @@ static const struct attribute_group nvmem_bin_group = {
.is_bin_visible = nvmem_bin_attr_is_visible,
};

+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+ .name = "cells",
+};
+
static const struct attribute_group *nvmem_dev_groups[] = {
&nvmem_bin_group,
NULL,
};

+static const struct attribute_group *nvmem_cells_groups[] = {
+ &nvmem_cells_group,
+ NULL,
+};
+
static struct bin_attribute bin_attr_nvmem_eeprom_compat = {
.attr = {
.name = "eeprom",
@@ -408,6 +456,69 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
}

+static int nvmem_dev_populate_sysfs_cells(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+ struct bin_attribute **cells_attrs, *attrs;
+ struct nvmem_cell_entry *entry;
+ unsigned int ncells = 0, i = 0;
+ int ret = 0;
+
+ mutex_lock(&nvmem_mutex);
+
+ if (list_empty(&nvmem->cells) || nvmem->sysfs_cells_populated) {
+ nvmem_cells_group.bin_attrs = NULL;
+ goto unlock_mutex;
+ }
+
+ /* Allocate an array of attributes with a sentinel */
+ ncells = list_count_nodes(&nvmem->cells);
+ cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+ sizeof(struct bin_attribute *), GFP_KERNEL);
+ if (!cells_attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+ if (!attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ /* Initialize each attribute to take the name and size of the cell */
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ sysfs_bin_attr_init(&attrs[i]);
+ attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
+ "%s@%x", entry->name,
+ entry->offset);
+ attrs[i].attr.mode = 0444;
+ attrs[i].size = entry->bytes;
+ attrs[i].read = &nvmem_cell_attr_read;
+ attrs[i].private = entry;
+ if (!attrs[i].attr.name) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ cells_attrs[i] = &attrs[i];
+ i++;
+ }
+
+ nvmem_cells_group.bin_attrs = cells_attrs;
+
+ ret = devm_device_add_groups(&nvmem->dev, nvmem_cells_groups);
+ if (ret)
+ goto unlock_mutex;
+
+ nvmem->sysfs_cells_populated = true;
+
+unlock_mutex:
+ mutex_unlock(&nvmem_mutex);
+
+ return ret;
+}
+
#else /* CONFIG_NVMEM_SYSFS */

static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -2193,6 +2304,12 @@ static int nvmem_notifier_call(struct notifier_block *notifier,
if (ret)
return notifier_from_errno(ret);

+#ifdef CONFIG_NVMEM_SYSFS
+ ret = nvmem_for_each_dev(nvmem_dev_populate_sysfs_cells);
+ if (ret)
+ return notifier_from_errno(ret);
+#endif
+
return NOTIFY_OK;
}

--
2.34.1


2023-08-08 19:01:37

by Miquel Raynal

Subject: [PATCH v9 6/7] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases finding the content of a cell is just a matter of looking
at a known and fixed offset. However, nvmem layouts have recently been
introduced to cope with more advanced situations, where the offset and
size of the cells are not known in advance or are dynamic. When using
layouts, more advanced parsers are used by the kernel in order to give
direct access to the content of each cell regardless of its
position/size in the underlying device, but this information was not
accessible to the user.

By exposing the nvmem cells to the user through a dedicated cells/
folder containing one file per cell, we provide straightforward access
to useful information without the need for writing a userland parser.
The content of nvmem cells is typically: product names, manufacturing
dates, MAC addresses, etc.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..7af70adf3690
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,21 @@
+What: /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date: May 2023
+KernelVersion: 6.5
+Contact: Miquel Raynal <[email protected]>
+Description:
+ The "cells" folder contains one file per cell exposed by the
+ NVMEM device. The name of the file is: <name>@<where>, with
+ <name> being the cell name and <where> its location in the NVMEM
+ device, in hexadecimal (without the '0x' prefix, to mimic device
+ tree node names). The length of the file is the size of the cell
+ (when known). The content of the file is the binary content of
+ the cell (may sometimes be ASCII, likely without trailing
+ character).
+ Note: This file is only present if CONFIG_NVMEM_SYSFS
+ is enabled.
+
+ Example::
+
+ hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d
+ 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
+ 0000000a
--
2.34.1


2023-08-08 19:02:19

by Miquel Raynal

Subject: [PATCH v9 1/7] nvmem: core: Create all cells before adding the nvmem device

Let's pack all the cell creation in one place, so all cells are created
before we add the nvmem device.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 3f8c7718412b..48659106a1e2 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -998,12 +998,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
-
- rval = device_add(&nvmem->dev);
- if (rval)
- goto err_remove_cells;
-
rval = nvmem_add_cells_from_fixed_layout(nvmem);
if (rval)
goto err_remove_cells;
@@ -1012,6 +1006,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

+ dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+
+ rval = device_add(&nvmem->dev);
+ if (rval)
+ goto err_remove_cells;
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;
--
2.34.1


2023-08-08 19:05:32

by Michael Walle

Subject: Re: [PATCH v9 1/7] nvmem: core: Create all cells before adding the nvmem device

On 2023-08-08 08:29, Miquel Raynal wrote:
> Let's pack all the cell creation in one place, so all cells are created
> before we add the nvmem device.
>
> Signed-off-by: Miquel Raynal <[email protected]>

Reviewed-by: Michael Walle <[email protected]>

2023-08-08 19:06:35

by Michael Walle

Subject: Re: [PATCH v9 2/7] nvmem: core: Return NULL when no nvmem layout is found

On 2023-08-08 08:29, Miquel Raynal wrote:
> Currently, of_nvmem_layout_get_container() returns NULL on error, or an
> error pointer if either CONFIG_NVMEM or CONFIG_OF is turned off. We
> should avoid this kind of mix for two reasons: to clarify the intent,
> and to fix the !CONFIG_OF stub, which would otherwise almost certainly
> break if we used this helper somewhere else. Let's just return NULL
> when no layout is found; we don't need an error value here.
>
> Link:
> https://staticthinking.wordpress.com/2022/08/01/mixing-error-pointers-and-null/
> Fixes: 266570f496b9 ("nvmem: core: introduce NVMEM layouts")
> Reported-by: kernel test robot <[email protected]>
> Reported-by: Dan Carpenter <[email protected]>
> Closes: https://lore.kernel.org/r/[email protected]/
> Signed-off-by: Miquel Raynal <[email protected]>

Reviewed-by: Michael Walle <[email protected]>

2023-08-08 22:49:27

by Miquel Raynal

Subject: [PATCH v9 3/7] nvmem: core: Do not open-code existing functions

Use of_nvmem_layout_get_container() instead of open-coding it.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 48659106a1e2..257328887263 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -786,10 +786,10 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);

static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
{
- struct device_node *layout_np, *np = nvmem->dev.of_node;
+ struct device_node *layout_np;
struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);

- layout_np = of_get_child_by_name(np, "nvmem-layout");
+ layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
return NULL;

--
2.34.1


2023-08-11 12:42:09

by Srinivas Kandagatla

Subject: Re: [PATCH v9 1/7] nvmem: core: Create all cells before adding the nvmem device



On 11/08/2023 13:11, Miquel Raynal wrote:

>>
>>
>
> nvmem_register() calls device_initialize() and later device_add(),
> which is exactly the content of device_register(). Upon error
> after device_add(), we currently call put_device(), whereas
> device_unregister() would call both device_del() and put_device().
>
> I would expect device_del() to be called first upon error, before
> put_device(), *after* device_add() has succeeded, no?

That is correct afaiu: if device_add() succeeds, we need to call
device_del(). As the patch now moves device_add() to the end of the
function, we really do not need device_del() in the error path.

>
>>> I also see the layout_np below should be freed before jumping in the
>>> error section.
>>
>> you mean missing of_node_put()?
>
> Yes, I need to call of_node_put() before jumping into the error path.

Are we not already doing it in nvmem_layout_get() and
nvmem_add_cells_from_fixed_layout()?


>
> Thanks,
> Miquèl

2023-08-11 12:51:26

by Miquel Raynal

Subject: Re: [PATCH v9 1/7] nvmem: core: Create all cells before adding the nvmem device

Hi Srinivas,

[email protected] wrote on Fri, 11 Aug 2023 12:11:19 +0100:

> On 08/08/2023 08:24, Miquel Raynal wrote:
> > Hi Srinivas,
> >
> > [email protected] wrote on Tue, 8 Aug 2023 07:56:47 +0100:
> >
> >> On 08/08/2023 07:29, Miquel Raynal wrote:
> >>> Let's pack all the cell creation in one place, so all cells are created
> >>> before we add the nvmem device.
> >>>
> >>> Signed-off-by: Miquel Raynal <[email protected]>
> >>> ---
> >>> drivers/nvmem/core.c | 12 ++++++------
> >>> 1 file changed, 6 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> >>> index 3f8c7718412b..48659106a1e2 100644
> >>> --- a/drivers/nvmem/core.c
> >>> +++ b/drivers/nvmem/core.c
> >>> @@ -998,12 +998,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> >>> if (rval)
> >>> goto err_remove_cells;
> >>>
> >>> - dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
> >>> -
> >>> - rval = device_add(&nvmem->dev);
> >>> - if (rval)
> >>> - goto err_remove_cells;
> >>> -
> >>> rval = nvmem_add_cells_from_fixed_layout(nvmem);
> >>> if (rval)
> >>> goto err_remove_cells;
> >>> @@ -1012,6 +1006,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> >>> if (rval)
> >>> goto err_remove_cells;
> >>>
> >>> + dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
> >>> +
> >>> + rval = device_add(&nvmem->dev);
> >>> + if (rval)
> >>> + goto err_remove_cells;
> >>
> >> All the error handling paths are now messed up with this patch, put_device() in error path will be called incorrectly from multiple places.
> >
> > I'm not sure what this means. Perhaps I should additionally call
> > device_del() after device_add was successful to mimic the
> > device_unregister() call from the remove path. Is that what you mean?
>
>
> This looks perfectly fine, no change required. This also fixes a bug of missing device_del() in err path.
>
> pl, Ignore my old comments.

nvmem_register() calls device_initialize() and later device_add(),
which is exactly the content of device_register(). Upon error
after device_add(), we currently call put_device(), whereas
device_unregister() would call both device_del() and put_device().

I would expect device_del() to be called first upon error, before
put_device(), *after* device_add() has succeeded, no?
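
A minimal sketch of the symmetry in question, assuming the usual
driver-core semantics (the step after device_add() is hypothetical):

/*
 * device_register()   == device_initialize() + device_add()
 * device_unregister() == device_del() + put_device()
 */
rval = device_add(&nvmem->dev);
if (rval)
	goto err_put_device;		/* not added yet: put_device() alone */

rval = later_setup_step();		/* hypothetical step after device_add() */
if (rval) {
	device_del(&nvmem->dev);	/* undo device_add() first... */
	goto err_put_device;		/* ...then drop the last reference */
}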

> > I also see the layout_np below should be freed before jumping in the
> > error section.
>
> you mean missing of_node_put()?

Yes, I need to call of_node_put() before jumping into the error path.

Thanks,
Miquèl

2023-08-11 13:27:29

by Miquel Raynal

Subject: Re: [PATCH v9 1/7] nvmem: core: Create all cells before adding the nvmem device

Hi Srinivas,

[email protected] wrote on Fri, 11 Aug 2023 13:26:24 +0100:

> On 11/08/2023 13:11, Miquel Raynal wrote:
>
> >>
> >>
> >
> > nvmem_register() calls device_initialize() and later device_add(),
> > which is exactly the content of device_register(). Upon error
> > after device_add(), we currently call put_device(), whereas
> > device_unregister() would call both device_del() and put_device().
> >
> > I would expect device_del() to be called first upon error, before
> > put_device(), *after* device_add() has succeeded, no?
>
> That is correct afaiu: if device_add() succeeds, we need to call device_del(). As the patch now moves device_add() to the end of the function, we really do not need device_del() in the error path.

Right, I'm looking at the end of the series where I need to add
device_del() in the error path because something gets added after
device_add(). So we are aligned, thanks for the feedback.

> >>> I also see the layout_np below should be freed before jumping in the
> >>> error section.
> >>
> >> you mean missing of_node_put()?
> >
> > Yes, I need to call of_node_put() before jumping into the error path.
>
> Are we not already doing it in nvmem_layout_get() and nvmem_add_cells_from_fixed_layout()?

We perform the layout_get for two reasons:
- knowing if there is a layout
- using the layout
Here we are in the first case, and we don't want to retain a reference
from here; we only do so in the second case.
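
A sketch of the two usages described above (illustrative only,
match_table being hypothetical here):

struct device_node *layout_np;
const struct of_device_id *match;

/* Case 1: existence check only -- drop the reference right away. */
layout_np = of_nvmem_layout_get_container(nvmem);
if (layout_np) {
	of_node_put(layout_np);
	/* ... e.g. populate the layout as a platform device ... */
}

/* Case 2: actually using the node -- hold the reference while it is
 * needed, then release it. */
layout_np = of_nvmem_layout_get_container(nvmem);
if (layout_np) {
	match = of_match_node(match_table, layout_np);
	of_node_put(layout_np);
}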

Thanks,
Miquèl

2023-08-14 11:06:15

by Srinivas Kandagatla

Subject: Re: (subset) [PATCH v9 0/7] NVMEM cells in sysfs


On Tue, 08 Aug 2023 08:29:25 +0200, Miquel Raynal wrote:
> As part of a previous effort, support for dynamic NVMEM layouts was
> brought into mainline, helping a lot in getting information from NVMEM
> devices at non-static locations. A common example of an NVMEM cell is
> the MAC address a device must use. Sometimes the cell content is mainly
> (or only) useful to the kernel, and sometimes it is not. Users might
> also want to know the content of cells such as the manufacturing place
> and date, the hardware version, the unique ID, etc. There are two
> possibilities in this case: either users re-implement their own parser
> to go through the whole device and search for the information they
> want, or the kernel exposes the content of the cells when deemed
> relevant. The second approach is clearly preferable, as it avoids
> useless code duplication, so here is a series bringing NVMEM cell
> content to the user through sysfs.
>
> [...]

Applied, thanks!

[1/7] nvmem: core: Create all cells before adding the nvmem device
commit: ad004687dafea0921c2551c7d3e7ad56837984fc
[2/7] nvmem: core: Return NULL when no nvmem layout is found
commit: a29eacf7e6376a44f37cc80950c92a59ca285992
[3/7] nvmem: core: Do not open-code existing functions
commit: 95735bc038a828d649fe7f66f9bb67099c18a47a
[4/7] nvmem: core: Notify when a new layout is registered
commit: 0e4a8e9e49ea29af87f9f308dc3e01fab969102f

Best regards,
--
Srinivas Kandagatla <[email protected]>