2023-09-22 17:53:21

by Miquel Raynal

Subject: [PATCH v10 0/3] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices at non-static locations. A common example of an NVMEM cell is
the MAC address that a network interface must use. Sometimes the cell
content is mainly (or only) useful to the kernel, and sometimes it is
not. Users might also want to know the content of cells such as: the
manufacturing place and date, the hardware version, the unique ID, etc.
There are two possibilities in this case: either users re-implement
their own parser to go through the whole device and search for the
information they want, or the kernel exposes the content of the cells
when deemed relevant. The second approach avoids useless code
duplication and thus sounds way more sensible than the first one, so
here is a series bringing the content of NVMEM cells to the user
through sysfs.

Here is a real-life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code@77
00000000 54 57 |TW|
crc32@88
00000000 bb cd 51 98 |..Q.|
device-version@49
00000000 02 |.|
diag-version@80
00000000 56 31 2e 30 2e 30 |V1.0.0|
label-revision@4c
00000000 44 31 |D1|
mac-address@2c
00000000 18 be 92 13 9a 00 |......|
manufacture-date@34
00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
manufacturer@72
00000000 44 4e 49 |DNI|
num-macs@6e
00000000 00 40 |.@|
onie-version@61
00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
platform-name@50
00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
product-name@d
00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
serial-number@19
00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
vendor@7b
00000000 44 4e 49 |DNI|

This layout with a cells/ folder containing one file per cell has been
legitimately challenged by John Thomson. I am not against the idea of
having a sub-folder per cell, but I did not find a relevant way to do
that, so for now I did not change the sysfs organization. If someone
really wants this other layout, please provide a code snippet which I
can integrate.

Current support does not include:
* The knowledge of the type of data (binary vs. ASCII), so by default
all cells are exposed in binary form.
* Write support.
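
Since all cells are currently exposed as raw binary, userspace is
expected to do its own post-processing. A minimal sketch of such a
consumer, reusing the TN48m paths and cell names from the example
above (the formatting pipeline itself is only illustrative):

$ nvmem=/sys/bus/nvmem/devices/1-00563
$ # ASCII cells can simply be dumped (no trailing newline is stored)
$ cat $nvmem/cells/product-name@d; echo
TN48M-P-DN
$ # Binary cells such as the MAC address need a bit of formatting
$ hexdump -v -e '6/1 "%02x:"' $nvmem/cells/mac-address@2c | sed 's/:$//'
18:be:92:13:9a:00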

Changes in v10:
* All preparation patches have been picked up by Srinivas.
* Rebased on top of v6.6-rc1.
* Fixed an error path in the probe due to the recent additions.

Changes in v9:
* Hopefully fixed the creation of sysfs entries when describing the
cells using the legacy layout, as reported by Chen-Yu.
* Dropped the nvmem-specific device list and used the driver core list
instead as advised by Greg.

Changes in v8:
* Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
* Add a patch to return NULL when no layout is found (reported by Dan
Carpenter).
* Fixed the documentation as well as the cover letter regarding the
addition of addresses in the cell names.

Changes in v7:
* Rework the layouts registration mechanism to use the platform devices
logic.
* Fix the two issues reported by Daniel Golle and Chen-Yu Tsai, one of
them consisting in suffixing '@<offset>' to the cell name when creating
the sysfs files, in order to be sure they are all unique.
* Update the doc.

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
s|cells/ folder|"cells" folder|
Missing period at the end of the final note.
s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission; both submissions
are otherwise absolutely identical.

Changes in v5:
* Rebased on the latest -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Provide sysfs attributes a private member which is the entry itself to
avoid the need for looking up the nvmem device and then looping over
all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
core.
* Patch 2 is new: Don't error out when an attribute group does not
contain any attributes; it's easier for developers to handle "empty"
directories this way. It avoids strange/bad workarounds being
implemented and does not cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs core
checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention the cells might become writable in the future in the
ABI documentation.
* Fix a wrong return value reported by Dan and kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.


Miquel Raynal (3):
nvmem: core: Rework layouts to become platform devices
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
drivers/nvmem/core.c | 257 +++++++++++++++++---
drivers/nvmem/layouts/onie-tlv.c | 39 ++-
drivers/nvmem/layouts/sl28vpd.c | 39 ++-
include/linux/nvmem-provider.h | 11 +-
5 files changed, 318 insertions(+), 49 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

--
2.34.1


2023-09-22 19:00:54

by Miquel Raynal

Subject: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

Current layout support was initially written without module support in
mind. When the requirement for module support rose, the existing base
was improved to adopt modularization, but a design flaw was introduced
in the process. With the existing implementation, when a storage device
registers into NVMEM, the core tries to hook a layout (if any) and
populates its cells immediately. This means that if the hardware
description expects a layout to be hooked up but no driver has been
provided for it, the storage medium will fail to probe and be retried
later from scratch. Technically, the layouts are more like a "plus"
and, even if we consider that the hardware description shall be
correct, we could still probe the storage device (especially if it
contains the rootfs).

One way to overcome this situation is to consider the layouts as
devices, and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the relevant driver, if needed
- try to hook the NVMEM device with a layout in the notifier
And when a new layout is registered:
- try to hook all the existing NVMEM devices which are not yet hooked to
a layout with the new layout
This way, there is no strong order to enforce: any NVMEM device creation
or NVMEM layout driver insertion will be observed as a new event which
may lead to the creation of additional cells, without disturbing the
probes with costly (and sometimes endless) deferrals.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 140 ++++++++++++++++++++++++-------
drivers/nvmem/layouts/onie-tlv.c | 39 +++++++--
drivers/nvmem/layouts/sl28vpd.c | 39 +++++++--
include/linux/nvmem-provider.h | 11 +--
4 files changed, 180 insertions(+), 49 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index eaf6a3fe8ca6..14dd3ae777bb 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -17,11 +17,13 @@
#include <linux/nvmem-provider.h>
#include <linux/gpio/consumer.h>
#include <linux/of.h>
+#include <linux/of_platform.h>
#include <linux/slab.h>

struct nvmem_device {
struct module *owner;
struct device dev;
+ struct list_head node;
int stride;
int word_size;
int id;
@@ -75,6 +77,7 @@ static LIST_HEAD(nvmem_cell_tables);
static DEFINE_MUTEX(nvmem_lookup_mutex);
static LIST_HEAD(nvmem_lookup_list);

+struct notifier_block nvmem_nb;
static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);

static DEFINE_SPINLOCK(nvmem_layout_lock);
@@ -790,23 +793,16 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
{
struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
+ struct nvmem_layout *l, *layout = NULL;

layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
return NULL;

- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
spin_lock(&nvmem_layout_lock);

list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
+ if (of_match_node(l->dev->driver->of_match_table, layout_np)) {
if (try_module_get(l->owner))
layout = l;

@@ -863,7 +859,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;

layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev->driver->of_match_table, layout_np);

return match ? match->data : NULL;
}
@@ -882,6 +878,7 @@ EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
struct nvmem_device *nvmem_register(const struct nvmem_config *config)
{
struct nvmem_device *nvmem;
+ struct device_node *layout_np;
int rval;

if (!config->dev)
@@ -974,19 +971,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}

- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -1005,24 +989,27 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
- if (rval)
- goto err_remove_cells;
-
dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);

rval = device_add(&nvmem->dev);
if (rval)
goto err_remove_cells;

+ /* Populate layouts as devices */
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ rval = of_platform_populate(nvmem->dev.of_node, NULL, NULL, NULL);
+ of_node_put(layout_np);
+ if (rval)
+ goto err_remove_cells;
+ }
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -2124,13 +2111,106 @@ const char *nvmem_dev_name(struct nvmem_device *nvmem)
}
EXPORT_SYMBOL_GPL(nvmem_dev_name);

+static void nvmem_try_loading_layout_driver(struct nvmem_device *nvmem)
+{
+ struct device_node *layout_np;
+
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ of_request_module(layout_np);
+ of_node_put(layout_np);
+ }
+}
+
+static int nvmem_match_available_layout(struct nvmem_device *nvmem)
+{
+ int ret;
+
+ if (nvmem->layout)
+ return 0;
+
+ nvmem->layout = nvmem_layout_get(nvmem);
+ if (!nvmem->layout)
+ return 0;
+
+ ret = nvmem_add_cells_from_layout(nvmem);
+ if (ret) {
+ nvmem_layout_put(nvmem->layout);
+ nvmem->layout = NULL;
+ return ret;
+ }
+
+ return 0;
+}
+
+static int nvmem_dev_match_available_layout(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+
+ return nvmem_match_available_layout(nvmem);
+}
+
+static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
+{
+ return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
+}
+
+/*
+ * When an NVMEM device is registered, try to match against a layout and
+ * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
+ * which could use it properly expose their cells.
+ */
+static int nvmem_notifier_call(struct notifier_block *notifier,
+ unsigned long event_flags, void *context)
+{
+ struct nvmem_device *nvmem = NULL;
+ int ret;
+
+ switch (event_flags) {
+ case NVMEM_ADD:
+ nvmem = context;
+ break;
+ case NVMEM_LAYOUT_ADD:
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ if (nvmem) {
+ /*
+ * In case the nvmem device was built-in while the layout was
+ * built as a module, manually request loading the layout driver.
+ */
+ nvmem_try_loading_layout_driver(nvmem);
+
+ /* Populate the cells of the new nvmem device from its layout, if any */
+ ret = nvmem_match_available_layout(nvmem);
+ } else {
+ /* NVMEM devices might be "waiting" for this layout */
+ ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
+ }
+
+ if (ret)
+ return notifier_from_errno(ret);
+
+ return NOTIFY_OK;
+}
+
static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ nvmem_nb.notifier_call = &nvmem_notifier_call;
+ return nvmem_register_notifier(&nvmem_nb);
}

static void __exit nvmem_exit(void)
{
+ nvmem_unregister_notifier(&nvmem_nb);
bus_unregister(&nvmem_bus_type);
}

diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
index 59fc87ccfcff..3d54d3be2c93 100644
--- a/drivers/nvmem/layouts/onie-tlv.c
+++ b/drivers/nvmem/layouts/onie-tlv.c
@@ -13,6 +13,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>

#define ONIE_TLV_MAX_LEN 2048
#define ONIE_TLV_CRC_FIELD_SZ 6
@@ -226,18 +227,46 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int onie_tlv_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = onie_tlv_parse_table;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int onie_tlv_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);

-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct platform_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ },
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
};
-module_nvmem_layout_driver(onie_tlv_layout);
+module_platform_driver(onie_tlv_layout);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Miquel Raynal <[email protected]>");
diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
index 05671371f631..ad0c39fc7943 100644
--- a/drivers/nvmem/layouts/sl28vpd.c
+++ b/drivers/nvmem/layouts/sl28vpd.c
@@ -5,6 +5,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>
#include <uapi/linux/if_ether.h>

#define SL28VPD_MAGIC 'V'
@@ -135,18 +136,46 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int sl28vpd_probe(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(&pdev->dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = sl28vpd_add_cells;
+ layout->dev = &pdev->dev;
+
+ platform_set_drvdata(pdev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int sl28vpd_remove(struct platform_device *pdev)
+{
+ struct nvmem_layout *layout = platform_get_drvdata(pdev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);

-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct platform_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ },
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
};
-module_nvmem_layout_driver(sl28vpd_layout);
+module_platform_driver(sl28vpd_layout);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Michael Walle <[email protected]>");
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index dae26295e6be..c72064780b50 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -154,8 +154,7 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
@@ -170,8 +169,7 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device *dev;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
@@ -243,9 +241,4 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
}

#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
--
2.34.1

2023-09-22 21:19:40

by Miquel Raynal

Subject: [PATCH v10 2/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases, finding the content of a cell is just a matter of looking
at a known and fixed offset. However, nvmem layouts have been recently
introduced to cope with more advanced situations, where the offset and
size of the cells are not known in advance or are dynamic. When using
layouts, more advanced parsers are used by the kernel in order to give
direct access to the content of each cell regardless of its
position/size in the underlying device, but this information was not
accessible to the user.

By exposing the nvmem cells to the user through a dedicated cells/
folder containing one file per cell, we provide straightforward access
to useful user information without the need for re-writing a userland
parser. The content of nvmem cells typically consists of product names,
manufacturing dates, MAC addresses, etc.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..7af70adf3690
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,21 @@
+What: /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date: May 2023
+KernelVersion: 6.5
+Contact: Miquel Raynal <[email protected]>
+Description:
+ The "cells" folder contains one file per cell exposed by the
+ NVMEM device. The name of the file is: <name>@<where>, with
+ <name> being the cell name and <where> its location in the NVMEM
+ device, in hexadecimal (without the '0x' prefix, to mimic device
+ tree node names). The length of the file is the size of the cell
+ (when known). The content of the file is the binary content of
+ the cell (may sometimes be ASCII, likely without trailing
+ character).
+ Note: This file is only present if CONFIG_NVMEM_SYSFS
+ is enabled.
+
+ Example::
+
+ hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d
+ 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
+ 0000000a
--
2.34.1

2023-09-22 21:22:15

by Miquel Raynal

[permalink] [raw]
Subject: [PATCH v10 3/3] nvmem: core: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases, finding the content of a cell is just a matter of looking
at a known and fixed offset. However, nvmem layouts have been recently
introduced to cope with more advanced situations, where the offset and
size of the cells are not known in advance or are dynamic. When using
layouts, more advanced parsers are used by the kernel in order to give
direct access to the content of each cell, regardless of its
position/size in the underlying device. Unfortunately, this information
is not accessible to users, short of fully re-implementing the parser
logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 117 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 14dd3ae777bb..b3c4345ab48a 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -43,6 +43,7 @@ struct nvmem_device {
struct gpio_desc *wp_gpio;
struct nvmem_layout *layout;
void *priv;
+ bool sysfs_cells_populated;
};

#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)
@@ -327,6 +328,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
return nvmem_bin_attr_get_umode(nvmem);
}

+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+ const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ struct nvmem_cell_entry *entry;
+ struct nvmem_cell *cell = NULL;
+ size_t cell_sz, read_len;
+ void *content;
+
+ entry = attr->private;
+ cell = nvmem_create_cell(entry, entry->name, 0);
+ if (IS_ERR(cell))
+ return PTR_ERR(cell);
+
+ if (!cell)
+ return -EINVAL;
+
+ content = nvmem_cell_read(cell, &cell_sz);
+ if (IS_ERR(content)) {
+ read_len = PTR_ERR(content);
+ goto destroy_cell;
+ }
+
+ read_len = min_t(unsigned int, cell_sz - pos, count);
+ memcpy(buf, content + pos, read_len);
+ kfree(content);
+
+destroy_cell:
+ kfree_const(cell->id);
+ kfree(cell);
+
+ return read_len;
+}
+
/* default read/write permissions */
static struct bin_attribute bin_attr_rw_nvmem = {
.attr = {
@@ -348,11 +386,21 @@ static const struct attribute_group nvmem_bin_group = {
.is_bin_visible = nvmem_bin_attr_is_visible,
};

+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+ .name = "cells",
+};
+
static const struct attribute_group *nvmem_dev_groups[] = {
&nvmem_bin_group,
NULL,
};

+static const struct attribute_group *nvmem_cells_groups[] = {
+ &nvmem_cells_group,
+ NULL,
+};
+
static struct bin_attribute bin_attr_nvmem_eeprom_compat = {
.attr = {
.name = "eeprom",
@@ -408,6 +456,69 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
}

+static int nvmem_dev_populate_sysfs_cells(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+ struct bin_attribute **cells_attrs, *attrs;
+ struct nvmem_cell_entry *entry;
+ unsigned int ncells = 0, i = 0;
+ int ret = 0;
+
+ mutex_lock(&nvmem_mutex);
+
+ if (list_empty(&nvmem->cells) || nvmem->sysfs_cells_populated) {
+ nvmem_cells_group.bin_attrs = NULL;
+ goto unlock_mutex;
+ }
+
+ /* Allocate an array of attributes with a sentinel */
+ ncells = list_count_nodes(&nvmem->cells);
+ cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+ sizeof(struct bin_attribute *), GFP_KERNEL);
+ if (!cells_attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+ if (!attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ /* Initialize each attribute to take the name and size of the cell */
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ sysfs_bin_attr_init(&attrs[i]);
+ attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
+ "%s@%x", entry->name,
+ entry->offset);
+ attrs[i].attr.mode = 0444;
+ attrs[i].size = entry->bytes;
+ attrs[i].read = &nvmem_cell_attr_read;
+ attrs[i].private = entry;
+ if (!attrs[i].attr.name) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ cells_attrs[i] = &attrs[i];
+ i++;
+ }
+
+ nvmem_cells_group.bin_attrs = cells_attrs;
+
+ ret = devm_device_add_groups(&nvmem->dev, nvmem_cells_groups);
+ if (ret)
+ goto unlock_mutex;
+
+ nvmem->sysfs_cells_populated = true;
+
+unlock_mutex:
+ mutex_unlock(&nvmem_mutex);
+
+ return ret;
+}
+
#else /* CONFIG_NVMEM_SYSFS */

static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -2193,6 +2304,12 @@ static int nvmem_notifier_call(struct notifier_block *notifier,
if (ret)
return notifier_from_errno(ret);

+#ifdef CONFIG_NVMEM_SYSFS
+ ret = nvmem_for_each_dev(nvmem_dev_populate_sysfs_cells);
+ if (ret)
+ return notifier_from_errno(ret);
+#endif
+
return NOTIFY_OK;
}

--
2.34.1

2023-09-28 16:02:34

by Rafał Miłecki

Subject: Re: [PATCH v10 3/3] nvmem: core: Expose cells through sysfs

On 2023-09-22 19:48, Miquel Raynal wrote:
> The binary content of nvmem devices is available to the user so in the
> easiest cases, finding the content of a cell is rather easy as it is
> just a matter of looking at a known and fixed offset. However, nvmem
> layouts have been recently introduced to cope with more advanced
> situations, where the offset and size of the cells is not known in
> advance or is dynamic. When using layouts, more advanced parsers are
> used by the kernel in order to give direct access to the content of
> each
> cell, regardless of its position/size in the underlying
> device. Unfortunately, these information are not accessible by users,
> unless by fully re-implementing the parser logic in userland.
>
> Let's expose the cells and their content through sysfs to avoid these
> situations. Of course the relevant NVMEM sysfs Kconfig option must be
> enabled for this support to be available.
>
> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> group member will be filled at runtime only when relevant and will
> remain empty otherwise. In this case, as the cells attribute group will
> be empty, it will not lead to any additional folder/file creation.
>
> Exposed cells are read-only. There is, in practice, everything in the
> core to support a write path, but as I don't see any need for that, I
> prefer to keep the interface simple (and probably safer). The interface
> is documented as being in the "testing" state which means we can later
> add a write attribute if though relevant.
>
> Signed-off-by: Miquel Raynal <[email protected]>

Tested-by: Rafał Miłecki <[email protected]>

# hexdump -C /sys/bus/nvmem/devices/u-boot-env0/cells/ipaddr@15c
00000000 31 39 32 2e 31 36 38 2e 31 2e 31 |192.168.1.1|
0000000b

--
Rafał Miłecki

2023-09-28 21:29:19

by Rafał Miłecki

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

On 2023-09-22 19:48, Miquel Raynal wrote:
> Current layout support was initially written without modules support in
> mind. When the requirement for module support rose, the existing base
> was improved to adopt modularization support, but kind of a design flaw
> was introduced. With the existing implementation, when a storage device
> registers into NVMEM, the core tries to hook a layout (if any) and
> populates its cells immediately. This means, if the hardware
> description
> expects a layout to be hooked up, but no driver was provided for that,
> the storage medium will fail to probe and try later from
> scratch. Technically, the layouts are more like a "plus" and, even we
> consider that the hardware description shall be correct, we could still
> probe the storage device (especially if it contains the rootfs).
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the relevant driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked
> to
> a layout with the new layout
> This way, there is no strong order to enforce, any NVMEM device
> creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> Signed-off-by: Miquel Raynal <[email protected]>

I rebased & tested my patch converting U-Boot NVMEM device to NVMEM
layout on top of this. It worked!

Tested-by: Rafał Miłecki <[email protected]>

For reference what I used:

partitions {
    partition-loader {
        compatible = "brcm,u-boot";

        partition-u-boot-env {
            compatible = "nvmem-cells";

            nvmem-layout {
                compatible = "brcm,env";

                base_mac_addr: ethaddr {
                    #nvmem-cell-cells = <1>;
                };
            };
        };
    };
};

--
Rafał Miłecki

2023-09-29 06:10:45

by Rafał Miłecki

Subject: Re: [PATCH v10 3/3] nvmem: core: Expose cells through sysfs

On 2023-09-28 17:31, Rafał Miłecki wrote:
> On 2023-09-22 19:48, Miquel Raynal wrote:
>> The binary content of nvmem devices is available to the user so in the
>> easiest cases, finding the content of a cell is rather easy as it is
>> just a matter of looking at a known and fixed offset. However, nvmem
>> layouts have been recently introduced to cope with more advanced
>> situations, where the offset and size of the cells is not known in
>> advance or is dynamic. When using layouts, more advanced parsers are
>> used by the kernel in order to give direct access to the content of
>> each
>> cell, regardless of its position/size in the underlying
>> device. Unfortunately, these information are not accessible by users,
>> unless by fully re-implementing the parser logic in userland.
>>
>> Let's expose the cells and their content through sysfs to avoid these
>> situations. Of course the relevant NVMEM sysfs Kconfig option must be
>> enabled for this support to be available.
>>
>> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
>> group member will be filled at runtime only when relevant and will
>> remain empty otherwise. In this case, as the cells attribute group
>> will
>> be empty, it will not lead to any additional folder/file creation.
>>
>> Exposed cells are read-only. There is, in practice, everything in the
>> core to support a write path, but as I don't see any need for that, I
>> prefer to keep the interface simple (and probably safer). The
>> interface
>> is documented as being in the "testing" state which means we can later
>> add a write attribute if though relevant.
>>
>> Signed-off-by: Miquel Raynal <[email protected]>
>
> Tested-by: Rafał Miłecki <[email protected]>
>
> # hexdump -C /sys/bus/nvmem/devices/u-boot-env0/cells/ipaddr@15c
> 00000000 31 39 32 2e 31 36 38 2e 31 2e 31 |192.168.1.1|
> 0000000b

The same test after converting U-Boot env into layout driver:

# hexdump -C /sys/bus/nvmem/devices/mtd1/cells/ipaddr@15c
00000000 31 39 32 2e 31 36 38 2e 31 2e 31 |192.168.1.1|
0000000b

Looks good!

--
Rafał Miłecki

2023-10-02 03:16:47

by Miquel Raynal

Subject: Re: [PATCH v10 3/3] nvmem: core: Expose cells through sysfs

Hi Rafał,

[email protected] wrote on Fri, 29 Sep 2023 07:18:32 +0200:

> On 2023-09-28 17:31, Rafał Miłecki wrote:
> > On 2023-09-22 19:48, Miquel Raynal wrote:
> >> The binary content of nvmem devices is available to the user so in the
> >> easiest cases, finding the content of a cell is rather easy as it is
> >> just a matter of looking at a known and fixed offset. However, nvmem
> >> layouts have been recently introduced to cope with more advanced
> >> situations, where the offset and size of the cells is not known in
> >> advance or is dynamic. When using layouts, more advanced parsers are
> >> used by the kernel in order to give direct access to the content of each
> >> cell, regardless of its position/size in the underlying
> >> device. Unfortunately, these information are not accessible by users,
> >> unless by fully re-implementing the parser logic in userland.
> >>
> >> Let's expose the cells and their content through sysfs to avoid these
> >> situations. Of course the relevant NVMEM sysfs Kconfig option must be
> >> enabled for this support to be available.
> >>
> >> Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> >> group member will be filled at runtime only when relevant and will
> >> remain empty otherwise. In this case, as the cells attribute group will
> >> be empty, it will not lead to any additional folder/file creation.
> >>
> >> Exposed cells are read-only. There is, in practice, everything in the
> >> core to support a write path, but as I don't see any need for that, I
> >> prefer to keep the interface simple (and probably safer). The interface
> >> is documented as being in the "testing" state which means we can later
> >> add a write attribute if though relevant.
> >>
> >> Signed-off-by: Miquel Raynal <[email protected]>
> >
> > Tested-by: Rafał Miłecki <[email protected]>
> >
> > # hexdump -C /sys/bus/nvmem/devices/u-boot-env0/cells/ipaddr@15c
> > 00000000 31 39 32 2e 31 36 38 2e 31 2e 31 |192.168.1.1|
> > 0000000b
>
> The same test after converting U-Boot env into layout driver:
>
> # hexdump -C /sys/bus/nvmem/devices/mtd1/cells/ipaddr@15c
> 00000000 31 39 32 2e 31 36 38 2e 31 2e 31 |192.168.1.1|
> 0000000b
>
> Looks good!
>

Great! Thanks a lot for testing!

Miquèl

2023-10-02 09:35:22

by Greg Kroah-Hartman

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

On Fri, Sep 22, 2023 at 07:48:52PM +0200, Miquel Raynal wrote:
> Current layout support was initially written without modules support in
> mind. When the requirement for module support rose, the existing base
> was improved to adopt modularization support, but kind of a design flaw
> was introduced. With the existing implementation, when a storage device
> registers into NVMEM, the core tries to hook a layout (if any) and
> populates its cells immediately. This means, if the hardware description
> expects a layout to be hooked up, but no driver was provided for that,
> the storage medium will fail to probe and try later from
> scratch. Technically, the layouts are more like a "plus" and, even we
> consider that the hardware description shall be correct, we could still
> probe the storage device (especially if it contains the rootfs).
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the relevant driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked to
> a layout with the new layout
> This way, there is no strong order to enforce, any NVMEM device creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> Signed-off-by: Miquel Raynal <[email protected]>

Did I miss why these were decided to be platform devices and not normal
devices on their own "bus" that are attached to the parent device
properly? Why platform for a dynamic thing?

If I did agree with this, it should be documented here in the changelog
why this is required to be this way so I don't ask the question again in
the future :)

thanks,

greg k-h

2023-10-02 18:28:35

by Srinivas Kandagatla

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

Thanks Miquel for the work on this.
I have one comment below.


On 22/09/2023 18:48, Miquel Raynal wrote:
> Current layout support was initially written without modules support in
> mind. When the requirement for module support rose, the existing base
> was improved to adopt modularization support, but kind of a design flaw
> was introduced. With the existing implementation, when a storage device
> registers into NVMEM, the core tries to hook a layout (if any) and
> populates its cells immediately. This means, if the hardware description
> expects a layout to be hooked up, but no driver was provided for that,
> the storage medium will fail to probe and try later from
> scratch. Technically, the layouts are more like a "plus" and, even we
> consider that the hardware description shall be correct, we could still
> probe the storage device (especially if it contains the rootfs).
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the relevant driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked to
> a layout with the new layout
> This way, there is no strong order to enforce, any NVMEM device creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> Signed-off-by: Miquel Raynal <[email protected]>
> ---
> drivers/nvmem/core.c | 140 ++++++++++++++++++++++++-------
> drivers/nvmem/layouts/onie-tlv.c | 39 +++++++--
> drivers/nvmem/layouts/sl28vpd.c | 39 +++++++--
> include/linux/nvmem-provider.h | 11 +--
> 4 files changed, 180 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index eaf6a3fe8ca6..14dd3ae777bb 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -17,11 +17,13 @@
> #include <linux/nvmem-provider.h>
> #include <linux/gpio/consumer.h>
> #include <linux/of.h>
> +#include <linux/of_platform.h>
> #include <linux/slab.h>
>
> struct nvmem_device {
> struct module *owner;
> struct device dev;
> + struct list_head node;
> int stride;
> int word_size;
> int id;
> @@ -75,6 +77,7 @@ static LIST_HEAD(nvmem_cell_tables);
> static DEFINE_MUTEX(nvmem_lookup_mutex);
> static LIST_HEAD(nvmem_lookup_list);
>
> +struct notifier_block nvmem_nb;
> static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>
> static DEFINE_SPINLOCK(nvmem_layout_lock);
> @@ -790,23 +793,16 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
> static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
> {
> struct device_node *layout_np;
> - struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
> + struct nvmem_layout *l, *layout = NULL;
>
> layout_np = of_nvmem_layout_get_container(nvmem);
> if (!layout_np)
> return NULL;
>
> - /*
> - * In case the nvmem device was built-in while the layout was built as a
> - * module, we shall manually request the layout driver loading otherwise
> - * we'll never have any match.
> - */
> - of_request_module(layout_np);
> -
> spin_lock(&nvmem_layout_lock);
>
> list_for_each_entry(l, &nvmem_layouts, node) {
> - if (of_match_node(l->of_match_table, layout_np)) {
> + if (of_match_node(l->dev->driver->of_match_table, layout_np)) {
> if (try_module_get(l->owner))
> layout = l;
>
> @@ -863,7 +859,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> const struct of_device_id *match;
>
> layout_np = of_nvmem_layout_get_container(nvmem);
> - match = of_match_node(layout->of_match_table, layout_np);
> + match = of_match_node(layout->dev->driver->of_match_table, layout_np);
>
> return match ? match->data : NULL;
> }
> @@ -882,6 +878,7 @@ EXPORT_SYMBOL_GPL(nvmem_layout_get_match_data);
> struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> {
> struct nvmem_device *nvmem;
> + struct device_node *layout_np;
> int rval;
>
> if (!config->dev)
> @@ -974,19 +971,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> goto err_put_device;
> }
>
> - /*
> - * If the driver supplied a layout by config->layout, the module
> - * pointer will be NULL and nvmem_layout_put() will be a noop.
> - */
> - nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
> - if (IS_ERR(nvmem->layout)) {
> - rval = PTR_ERR(nvmem->layout);
> - nvmem->layout = NULL;
> -
> - if (rval == -EPROBE_DEFER)
> - goto err_teardown_compat;
> - }
> -
> if (config->cells) {
> rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
> if (rval)
> @@ -1005,24 +989,27 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> if (rval)
> goto err_remove_cells;
>
> - rval = nvmem_add_cells_from_layout(nvmem);
> - if (rval)
> - goto err_remove_cells;
> -
> dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
>
> rval = device_add(&nvmem->dev);
> if (rval)
> goto err_remove_cells;
>
> + /* Populate layouts as devices */
> + layout_np = of_nvmem_layout_get_container(nvmem);
> + if (layout_np) {
> + rval = of_platform_populate(nvmem->dev.of_node, NULL, NULL, NULL);
> + of_node_put(layout_np);
> + if (rval)
> + goto err_remove_cells;
> + }
> +
> blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>
> return nvmem;
>
> err_remove_cells:
> nvmem_device_remove_all_cells(nvmem);
> - nvmem_layout_put(nvmem->layout);
> -err_teardown_compat:
> if (config->compat)
> nvmem_sysfs_remove_compat(nvmem, config);
> err_put_device:
> @@ -2124,13 +2111,106 @@ const char *nvmem_dev_name(struct nvmem_device *nvmem)
> }
> EXPORT_SYMBOL_GPL(nvmem_dev_name);
>
> +static void nvmem_try_loading_layout_driver(struct nvmem_device *nvmem)
> +{
> + struct device_node *layout_np;
> +
> + layout_np = of_nvmem_layout_get_container(nvmem);
> + if (layout_np) {
> + of_request_module(layout_np);
> + of_node_put(layout_np);
> + }
> +}
> +
> +static int nvmem_match_available_layout(struct nvmem_device *nvmem)
> +{
> + int ret;
> +
> + if (nvmem->layout)
> + return 0;
> +
> + nvmem->layout = nvmem_layout_get(nvmem);
> + if (!nvmem->layout)
> + return 0;
> +
> + ret = nvmem_add_cells_from_layout(nvmem);
> + if (ret) {
> + nvmem_layout_put(nvmem->layout);
> + nvmem->layout = NULL;
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int nvmem_dev_match_available_layout(struct device *dev, void *data)
> +{
> + struct nvmem_device *nvmem = to_nvmem_device(dev);
> +
> + return nvmem_match_available_layout(nvmem);
> +}
> +
> +static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
> +{
> + return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
> +}
> +
> +/*
> + * When an NVMEM device is registered, try to match against a layout and
> + * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
> + * which could use it properly expose their cells.
> + */
> +static int nvmem_notifier_call(struct notifier_block *notifier,
> + unsigned long event_flags, void *context)
> +{
> + struct nvmem_device *nvmem = NULL;
> + int ret;
> +
> + switch (event_flags) {
> + case NVMEM_ADD:
> + nvmem = context;
> + break;
> + case NVMEM_LAYOUT_ADD:
> + break;
> + default:
> + return NOTIFY_DONE;
> + }

It looks a bit unnatural for the core to register a notifier for its
own events.


Why do we need the notifier at core level? Can we not just handle this
in the core before raising these events, instead of registering a
notifier cb?


--srini


> +
> + if (nvmem) {
> + /*
> + * In case the nvmem device was built-in while the layout was
> + * built as a module, manually request loading the layout driver.
> + */
> + nvmem_try_loading_layout_driver(nvmem);
> +
> + /* Populate the cells of the new nvmem device from its layout, if any */
> + ret = nvmem_match_available_layout(nvmem);
> + } else {
> + /* NVMEM devices might be "waiting" for this layout */
> + ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
> + }
> +
> + if (ret)
> + return notifier_from_errno(ret);
> +
> + return NOTIFY_OK;
> +}
> +
> static int __init nvmem_init(void)
> {
> - return bus_register(&nvmem_bus_type);
> + int ret;
> +
> + ret = bus_register(&nvmem_bus_type);
> + if (ret)
> + return ret;
> +
> + nvmem_nb.notifier_call = &nvmem_notifier_call;
> + return nvmem_register_notifier(&nvmem_nb);
> }
>
> static void __exit nvmem_exit(void)
> {
> + nvmem_unregister_notifier(&nvmem_nb);
> bus_unregister(&nvmem_bus_type);
> }
>

2023-10-02 20:28:22

by Miquel Raynal

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

Hi Greg,

[email protected] wrote on Mon, 2 Oct 2023 11:35:02 +0200:

> On Fri, Sep 22, 2023 at 07:48:52PM +0200, Miquel Raynal wrote:
> > Current layout support was initially written without modules support in
> > mind. When the requirement for module support rose, the existing base
> > was improved to adopt modularization support, but kind of a design flaw
> > was introduced. With the existing implementation, when a storage device
> > registers into NVMEM, the core tries to hook a layout (if any) and
> > populates its cells immediately. This means, if the hardware description
> > expects a layout to be hooked up, but no driver was provided for that,
> > the storage medium will fail to probe and try later from
> > scratch. Technically, the layouts are more like a "plus" and, even we
> > consider that the hardware description shall be correct, we could still
> > probe the storage device (especially if it contains the rootfs).
> >
> > One way to overcome this situation is to consider the layouts as
> > devices, and leverage the existing notifier mechanism. When a new NVMEM
> > device is registered, we can:
> > - populate its nvmem-layout child, if any
> > - try to modprobe the relevant driver, if relevant
> > - try to hook the NVMEM device with a layout in the notifier
> > And when a new layout is registered:
> > - try to hook all the existing NVMEM devices which are not yet hooked to
> > a layout with the new layout
> > This way, there is no strong order to enforce, any NVMEM device creation
> > or NVMEM layout driver insertion will be observed as a new event which
> > may lead to the creation of additional cells, without disturbing the
> > probes with costly (and sometimes endless) deferrals.
> >
> > Signed-off-by: Miquel Raynal <[email protected]>
>
> Did I miss why these were decided to be platform devices and not normal
> devices on their own "bus" that are attached to the parent device
> properly? Why platform for a dynamic thing?

I don't think you missed anything. Following the discussion "how to
picture these layouts as devices", I came up with the simplest
approach: using the platform infrastructure. I thought creating my own
additional bus just for that would involve too much code duplication.
I agree the current implementation kind of abuses the platform
infrastructure. I will have a look in order to maybe mutate this into
its own bus.
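
For the record, a dedicated bus would likely boil down to something
like the sketch below (purely illustrative, not part of this series;
the naming and the OF-based matching are assumptions of mine):

static int nvmem_layout_bus_match(struct device *dev,
                                  struct device_driver *drv)
{
        /* Match nvmem-layout devices against layout drivers through OF */
        return of_driver_match_device(dev, drv);
}

static struct bus_type nvmem_layout_bus_type = {
        .name  = "nvmem-layout",
        .match = nvmem_layout_bus_match,
};

static int __init nvmem_layout_bus_init(void)
{
        return bus_register(&nvmem_layout_bus_type);
}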

> If I did agree with this, it should be documented here in the changelog
> why this is required to be this way so I don't ask the question again in
> the future :)

Haha, I don't think you did ;)

Thanks,
Miquèl

2023-10-03 09:43:40

by Miquel Raynal

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

Hi Srinivas,

> > +static int nvmem_dev_match_available_layout(struct device *dev, void *data)
> > +{
> > + struct nvmem_device *nvmem = to_nvmem_device(dev);
> > +
> > + return nvmem_match_available_layout(nvmem);
> > +}
> > +
> > +static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
> > +{
> > + return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
> > +}
> > +
> > +/*
> > + * When an NVMEM device is registered, try to match against a layout and
> > + * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
> > + * which could use it properly expose their cells.
> > + */
> > +static int nvmem_notifier_call(struct notifier_block *notifier,
> > + unsigned long event_flags, void *context)
> > +{
> > + struct nvmem_device *nvmem = NULL;
> > + int ret;
> > +
> > + switch (event_flags) {
> > + case NVMEM_ADD:
> > + nvmem = context;
> > + break;
> > + case NVMEM_LAYOUT_ADD:
> > + break;
> > + default:
> > + return NOTIFY_DONE;
> > + }
>
> It looks bit unnatural for core to register notifier for its own events.
>
>
> Why do we need the notifier at core level, can we not just handle this in core before raising these events, instead of registering a notifier cb?

There is no good place to do that "synchronously". We need some kind of
notification mechanism in these two cases:
* A memory device is being probed -> if a matching layout driver is
already available, we need to parse the device and expose the cells,
but not in the thread registering the memory device.
* A layout driver is being insmod'ed -> if a memory device needs it to
create cells, we need to parse the device content, but I find it
crappy to start device-specific parsing in the registration handler.

So the probe of the memory device is not a good place for this, nor is
the registration of the layout driver. Yet, we need to do the same
operation upon two different "events".

This notifier mechanism is a clean and easy way to get notified and to
implement a callback which does not block the thread doing the initial
registration. I am personally not bothered by using it only internally.
If you have another mechanism in mind to perform a similar operation,
or a way to avoid this need, I'll do the switch.
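
To illustrate what the callback side looks like, here is a minimal
sketch of an external listener on the very same chain (only
nvmem_register_notifier(), NVMEM_ADD and the notifier types are real;
the other names are made up for the example):

#include <linux/notifier.h>
#include <linux/nvmem-consumer.h>

static int my_nvmem_event(struct notifier_block *nb,
                          unsigned long event, void *data)
{
        /* For NVMEM_ADD events, 'data' is the new nvmem device */
        if (event == NVMEM_ADD)
                pr_info("nvmem: new device registered\n");

        return NOTIFY_OK;
}

static struct notifier_block my_nvmem_nb = {
        .notifier_call = my_nvmem_event,
};

/* At init time: nvmem_register_notifier(&my_nvmem_nb); */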

Thanks,
Miquèl

2023-10-05 15:43:42

by Miquel Raynal

Subject: Re: [PATCH v10 1/3] nvmem: core: Rework layouts to become platform devices

Hi Miquel,

[email protected] wrote on Tue, 3 Oct 2023 11:43:26 +0200:

> Hi Srinivas,
>
> > > +static int nvmem_dev_match_available_layout(struct device *dev, void *data)
> > > +{
> > > + struct nvmem_device *nvmem = to_nvmem_device(dev);
> > > +
> > > + return nvmem_match_available_layout(nvmem);
> > > +}
> > > +
> > > +static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
> > > +{
> > > + return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
> > > +}
> > > +
> > > +/*
> > > + * When an NVMEM device is registered, try to match against a layout and
> > > + * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
> > > + * which could use it properly expose their cells.
> > > + */
> > > +static int nvmem_notifier_call(struct notifier_block *notifier,
> > > + unsigned long event_flags, void *context)
> > > +{
> > > + struct nvmem_device *nvmem = NULL;
> > > + int ret;
> > > +
> > > + switch (event_flags) {
> > > + case NVMEM_ADD:
> > > + nvmem = context;
> > > + break;
> > > + case NVMEM_LAYOUT_ADD:
> > > + break;
> > > + default:
> > > + return NOTIFY_DONE;
> > > + }
> >
> > It looks bit unnatural for core to register notifier for its own events.
> >
> >
> > Why do we need the notifier at core level, can we not just handle this in core before raising these events, instead of registering a notifier cb?
>
> There is no good place to do that "synchronously". We need some kind of
> notification mechanism in these two cases:
> * A memory device is being probed -> if a matching layout driver is
> already available, we need to parse the device and expose the cells,
> but not in the thread registering the memory device.
> * A layout driver is being insmod'ed -> if a memory device needs it to
> create cells we need to parse the device content, but I find it
> crappy to start device-specific parsing in the registration handler.
>
> So probe of the memory device is not a good place for this, nor is the
> registration of the layout driver. Yet, we need to do the same
> operation upon two different "events".
>
> This notifier mechanism is a clean and easy way to get notified and
> implement a callback which is also not blocking the thread doing the
> initial registration. I am personally not bothered using it only
> internally. If you have another mechanism in mind to perform a similar
> operation, or a way to avoid this need I'll do the switch.

Since I've changed the way nvmem devices and layouts are dependent in
v11, I've been giving this a second thought and I think this can now be
avoided. I've improved the layout registration callback to actually
retrieve the nvmem device this layout is probing on and to populate the
dynamic cells *there* (instead of during the probe of the nvmem device
itself). This way I could drop some boilerplate which is no longer
necessary. It comes at a low cost: there are now two places where sysfs
cells can be added.
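
Roughly, and only as a sketch of the direction (the helpers
nvmem_layout_find_device() and nvmem_populate_sysfs_cells() are
hypothetical names, not the actual v11 code):

int nvmem_layout_register(struct nvmem_layout *layout)
{
        /* Hypothetical helper retrieving the nvmem device this
         * layout is probing on (e.g. via the parent or the DT node). */
        struct nvmem_device *nvmem = nvmem_layout_find_device(layout);
        int ret;

        /* Parse the storage and create the dynamic cells right away */
        ret = nvmem_add_cells_from_layout(nvmem);
        if (ret)
                return ret;

        /* ...and expose them through sysfs as well, which is the
         * second place where sysfs cells get added. */
        return nvmem_populate_sysfs_cells(nvmem);
}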

I am cleaning up all this stuff and will then let you and Greg review
the v12.

Thanks,
Miquèl