2023-10-11 11:15:57

by Miquel Raynal

Subject: [PATCH v13 0/6] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices at non-static locations. One common example of an NVMEM cell is
the MAC address that a network interface must use. Sometimes the cell
content is mainly (or only) useful to the kernel, and sometimes it is
not. Users might also want to know the content of cells such as: the
manufacturing place and date, the hardware version, the unique ID, etc.
There are two possibilities in this case: either users re-implement
their own parser to go through the whole device and search for the
information they want, or the kernel exposes the content of the cells
when deemed relevant. The second approach sounds much more sensible
than the first one, as it avoids useless code duplication, so here is a
series bringing NVMEM cell content to user space through sysfs.

Here is a real life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code@77
00000000 54 57 |TW|
crc32@88
00000000 bb cd 51 98 |..Q.|
device-version@49
00000000 02 |.|
diag-version@80
00000000 56 31 2e 30 2e 30 |V1.0.0|
label-revision@4c
00000000 44 31 |D1|
mac-address@2c
00000000 18 be 92 13 9a 00 |......|
manufacture-date@34
00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
manufacturer@72
00000000 44 4e 49 |DNI|
num-macs@6e
00000000 00 40 |.@|
onie-version@61
00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
platform-name@50
00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
product-name@d
00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
serial-number@19
00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
vendor@7b
00000000 44 4e 49 |DNI|

Current support does not include:
* The knowledge of the type of data (binary vs. ASCII), so by default
all cells are exposed in binary form.
* Write support.

Changes in v13:
--- >8 ---
THIS VERSION IS ONLY COMPILE TESTED !!
I want to move forward with this so here is a v13 with interesting
changes regarding the device model interaction (as requested by Greg)
together with additional smaller changes listed below. I will test this
on real hardware next week and report if there is anything wrong with
it. In the meantime, I would appreciate additional feedback on the
ongoing discussions.
--- 8< ---
* The patch clarifying a NULL of_node situation was deemed irrelevant,
so I dropped it.
* Rename the layouts bus/devices: s/nvmem-layouts/nvmem-layout/.
* Fixed inconsistent function declarations returning void vs. int (kernel
test robot).
* Fixed another robot report related to the fact that an error path was
only useful if CONFIG_NVMEM_SYSFS was enabled.
* Collected tags.
* We now register a struct nvmem_layout rather than a struct device as
part of the layout bus. I had to create a couple of additional fields
in the layout driver structure for that, but the final result looks
nicer.

Changes in v12:
* Fixed the issues reported by kernel test robot.
* Reworked the registration of layout devices even more deeply and
dropped all the lookup and matching code that was previously needed,
as suggested by Srinivas. This way, we no longer use the notifiers.

Changes in v11:
* The nvmem layouts are now regular devices and not platform devices
anymore. They are registered on the nvmem-layouts bus (so there is a
new /sys/bus/nvmem-layouts entry that gets created). All the code for
this new bus is located under drivers/nvmem/layouts.c and is part of
the main core. The core device-driver logic applies without too much
additional code besides the registration of the bus and a bit of
glue. I see no need for more detailed structures for now but this can
be improved later as needed.

Changes in v10:
* All preparation patches have been picked up by Srinivas.
* Rebased on top of v6.6-rc1.
* Fix an error path in the probe due to the recent additions.

Changes in v9:
* Hopefully fixed the creation of sysfs entries when describing the
cells using the legacy layout, as reported by Chen-Yu.
* Dropped the nvmem-specific device list and used the driver core list
instead as advised by Greg.

Changes in v8:
* Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
* Add a patch to return NULL when no layout is found (reported by Dan
Carpenter).
* Fixed the documentation as well as the cover letter regarding the
addition of addresses in the cell names.

Changes in v7:
* Rework the layouts registration mechanism to use the platform devices
logic.
* Fix the two issues reported by Daniel Golle and Chen-Yu Tsai; one of
them consists in suffixing '@<offset>' to the cell name when creating
the sysfs files, in order to be sure they are all unique.
* Update the doc.

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
s|cells/ folder|"cells" folder|
Missing period at the end of the final note.
s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission; both submissions
are absolutely identical otherwise.

Changes in v5:
* Rebased on the latest -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Give the sysfs attributes a private member pointing to the cell entry
itself, to avoid the need for looking up the nvmem device and then
looping over all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
core.
* Patch 2 is new: don't error out when an attribute group does not
contain any attributes; it's easier for developers to handle "empty"
directories this way. It avoids implementing strange/bad workarounds
and does not cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs core
checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention in the ABI documentation that the cells might become
writable in the future.
* Fix a wrong return value reported by Dan and kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.

Miquel Raynal (6):
of: device: Export of_device_make_bus_id()
nvmem: Move of_nvmem_layout_get_container() in another header
nvmem: Create a header for internal sharing
nvmem: core: Rework layouts to become regular devices
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 288 +++++++++++---------
drivers/nvmem/internals.h | 57 ++++
drivers/nvmem/layouts.c | 228 ++++++++++++++++
drivers/nvmem/layouts/onie-tlv.c | 23 +-
drivers/nvmem/layouts/sl28vpd.c | 23 +-
drivers/of/device.c | 41 +++
drivers/of/platform.c | 40 ---
include/linux/nvmem-consumer.h | 7 -
include/linux/nvmem-provider.h | 48 ++--
include/linux/of_device.h | 6 +
12 files changed, 583 insertions(+), 201 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
create mode 100644 drivers/nvmem/internals.h
create mode 100644 drivers/nvmem/layouts.c

--
2.34.1


2023-10-11 11:16:17

by Miquel Raynal

[permalink] [raw]
Subject: [PATCH v13 3/6] nvmem: Create a header for internal sharing

Before adding all the NVMEM layout bus infrastructure to the core, let's
move the main nvmem_device structure into an internal header, only
available to the core. This way all the additional code can be added in
a dedicated file in order to keep the current core file tidy.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 24 +-----------------------
drivers/nvmem/internals.h | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 36 insertions(+), 23 deletions(-)
create mode 100644 drivers/nvmem/internals.h

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 93b867b3cdf9..eefb5d0a0c91 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -19,29 +19,7 @@
#include <linux/of.h>
#include <linux/slab.h>

-struct nvmem_device {
- struct module *owner;
- struct device dev;
- int stride;
- int word_size;
- int id;
- struct kref refcnt;
- size_t size;
- bool read_only;
- bool root_only;
- int flags;
- enum nvmem_type type;
- struct bin_attribute eeprom;
- struct device *base_dev;
- struct list_head cells;
- const struct nvmem_keepout *keepout;
- unsigned int nkeepout;
- nvmem_reg_read_t reg_read;
- nvmem_reg_write_t reg_write;
- struct gpio_desc *wp_gpio;
- struct nvmem_layout *layout;
- void *priv;
-};
+#include "internals.h"

#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
new file mode 100644
index 000000000000..ce353831cd65
--- /dev/null
+++ b/drivers/nvmem/internals.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_NVMEM_INTERNALS_H
+#define _LINUX_NVMEM_INTERNALS_H
+
+#include <linux/device.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+
+struct nvmem_device {
+ struct module *owner;
+ struct device dev;
+ struct list_head node;
+ int stride;
+ int word_size;
+ int id;
+ struct kref refcnt;
+ size_t size;
+ bool read_only;
+ bool root_only;
+ int flags;
+ enum nvmem_type type;
+ struct bin_attribute eeprom;
+ struct device *base_dev;
+ struct list_head cells;
+ const struct nvmem_keepout *keepout;
+ unsigned int nkeepout;
+ nvmem_reg_read_t reg_read;
+ nvmem_reg_write_t reg_write;
+ struct gpio_desc *wp_gpio;
+ struct nvmem_layout *layout;
+ void *priv;
+};
+
+#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
--
2.34.1

2023-10-11 11:16:23

by Miquel Raynal

Subject: [PATCH v13 2/6] nvmem: Move of_nvmem_layout_get_container() in another header

nvmem-consumer.h is included by consumer drivers, which extract data
from NVMEM devices, whereas nvmem-provider.h is included by drivers
providing NVMEM content.

The only users of of_nvmem_layout_get_container() outside of the core
are layout drivers, so better move its prototype to nvmem-provider.h.

While at it, also move the kerneldoc comment associated with the
function to the header rather than the .c file.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 8 --------
include/linux/nvmem-consumer.h | 7 -------
include/linux/nvmem-provider.h | 14 ++++++++++++++
3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index eaf6a3fe8ca6..93b867b3cdf9 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -841,14 +841,6 @@ static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
}

#if IS_ENABLED(CONFIG_OF)
-/**
- * of_nvmem_layout_get_container() - Get OF node to layout container.
- *
- * @nvmem: nvmem device.
- *
- * Return: a node pointer with refcount incremented or NULL if no
- * container exists. Use of_node_put() on it when done.
- */
struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
{
return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index 4523e4e83319..960728b10a11 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -241,7 +241,6 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id);
struct nvmem_device *of_nvmem_device_get(struct device_node *np,
const char *name);
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
#else
static inline struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id)
@@ -254,12 +253,6 @@ static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np,
{
return ERR_PTR(-EOPNOTSUPP);
}
-
-static inline struct device_node *
-of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return NULL;
-}
#endif /* CONFIG_NVMEM && CONFIG_OF */

#endif /* ifndef _LINUX_NVMEM_CONSUMER_H */
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index dae26295e6be..2905f9e6fc2a 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -205,6 +205,16 @@ void nvmem_layout_unregister(struct nvmem_layout *layout);
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

+/**
+ * of_nvmem_layout_get_container() - Get OF node of layout container
+ *
+ * @nvmem: nvmem device
+ *
+ * Return: a node pointer with refcount incremented or NULL if no
+ * container exists. Use of_node_put() on it when done.
+ */
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
+
#else

static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c)
@@ -242,6 +252,10 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
return NULL;
}

+static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+{
+ return NULL;
+}
#endif /* CONFIG_NVMEM */

#define module_nvmem_layout_driver(__layout_driver) \
--
2.34.1

2023-10-11 11:16:38

by Miquel Raynal

Subject: [PATCH v13 6/6] nvmem: core: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
simplest cases, finding the content of a cell is just a matter of
looking at a known and fixed offset. However, nvmem layouts have been
recently introduced to cope with more advanced situations, where the
offset and size of the cells is not known in advance or is dynamic.
When using layouts, more advanced parsers are used by the kernel in
order to give direct access to the content of each cell, regardless of
its position/size in the underlying device. Unfortunately, this
information is not accessible to users, unless they fully re-implement
the parser logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.

Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
Tested-by: Chen-Yu Tsai <[email protected]>
---
drivers/nvmem/core.c | 134 +++++++++++++++++++++++++++++++++++++-
drivers/nvmem/internals.h | 1 +
2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 0e364b8e9f99..f0e6d8a16380 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -299,6 +299,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
return nvmem_bin_attr_get_umode(nvmem);
}

+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+ const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ struct nvmem_cell_entry *entry;
+ struct nvmem_cell *cell = NULL;
+ size_t cell_sz, read_len;
+ void *content;
+
+ entry = attr->private;
+ cell = nvmem_create_cell(entry, entry->name, 0);
+ if (IS_ERR(cell))
+ return PTR_ERR(cell);
+
+ if (!cell)
+ return -EINVAL;
+
+ content = nvmem_cell_read(cell, &cell_sz);
+ if (IS_ERR(content)) {
+ read_len = PTR_ERR(content);
+ goto destroy_cell;
+ }
+
+ read_len = min_t(unsigned int, cell_sz - pos, count);
+ memcpy(buf, content + pos, read_len);
+ kfree(content);
+
+destroy_cell:
+ kfree_const(cell->id);
+ kfree(cell);
+
+ return read_len;
+}
+
/* default read/write permissions */
static struct bin_attribute bin_attr_rw_nvmem = {
.attr = {
@@ -320,11 +357,21 @@ static const struct attribute_group nvmem_bin_group = {
.is_bin_visible = nvmem_bin_attr_is_visible,
};

+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+ .name = "cells",
+};
+
static const struct attribute_group *nvmem_dev_groups[] = {
&nvmem_bin_group,
NULL,
};

+static const struct attribute_group *nvmem_cells_groups[] = {
+ &nvmem_cells_group,
+ NULL,
+};
+
static struct bin_attribute bin_attr_nvmem_eeprom_compat = {
.attr = {
.name = "eeprom",
@@ -380,6 +427,68 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
}

+static int nvmem_populate_sysfs_cells(struct nvmem_device *nvmem)
+{
+ struct bin_attribute **cells_attrs, *attrs;
+ struct nvmem_cell_entry *entry;
+ unsigned int ncells = 0, i = 0;
+ int ret = 0;
+
+ mutex_lock(&nvmem_mutex);
+
+ if (list_empty(&nvmem->cells) || nvmem->sysfs_cells_populated) {
+ nvmem_cells_group.bin_attrs = NULL;
+ goto unlock_mutex;
+ }
+
+ /* Allocate an array of attributes with a sentinel */
+ ncells = list_count_nodes(&nvmem->cells);
+ cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+ sizeof(struct bin_attribute *), GFP_KERNEL);
+ if (!cells_attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+ if (!attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ /* Initialize each attribute to take the name and size of the cell */
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ sysfs_bin_attr_init(&attrs[i]);
+ attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
+ "%s@%x", entry->name,
+ entry->offset);
+ attrs[i].attr.mode = 0444;
+ attrs[i].size = entry->bytes;
+ attrs[i].read = &nvmem_cell_attr_read;
+ attrs[i].private = entry;
+ if (!attrs[i].attr.name) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ cells_attrs[i] = &attrs[i];
+ i++;
+ }
+
+ nvmem_cells_group.bin_attrs = cells_attrs;
+
+ ret = devm_device_add_groups(&nvmem->dev, nvmem_cells_groups);
+ if (ret)
+ goto unlock_mutex;
+
+ nvmem->sysfs_cells_populated = true;
+
+unlock_mutex:
+ mutex_unlock(&nvmem_mutex);
+
+ return ret;
+}
+
#else /* CONFIG_NVMEM_SYSFS */

static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -740,11 +849,25 @@ static int nvmem_add_cells_from_fixed_layout(struct nvmem_device *nvmem)

int nvmem_layout_register(struct nvmem_layout *layout)
{
+ int ret;
+
if (!layout->add_cells)
return -EINVAL;

/* Populate the cells */
- return layout->add_cells(&layout->nvmem->dev, layout->nvmem, layout);
+ ret = layout->add_cells(&layout->nvmem->dev, layout->nvmem, layout);
+ if (ret)
+ return ret;
+
+#ifdef CONFIG_NVMEM_SYSFS
+ ret = nvmem_populate_sysfs_cells(layout->nvmem);
+ if (ret) {
+ nvmem_device_remove_all_cells(layout->nvmem);
+ return ret;
+ }
+#endif
+
+ return 0;
}
EXPORT_SYMBOL_GPL(nvmem_layout_register);

@@ -900,11 +1023,20 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_destroy_layout;

+#ifdef CONFIG_NVMEM_SYSFS
+ rval = nvmem_populate_sysfs_cells(nvmem);
+ if (rval)
+ goto err_remove_dev;
+#endif

blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

+#ifdef CONFIG_NVMEM_SYSFS
+err_remove_dev:
+ device_del(&nvmem->dev);
+#endif
err_destroy_layout:
nvmem_destroy_layout(nvmem);
err_remove_cells:
diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
index c669c96e9052..88ee2b8aea8e 100644
--- a/drivers/nvmem/internals.h
+++ b/drivers/nvmem/internals.h
@@ -30,6 +30,7 @@ struct nvmem_device {
struct gpio_desc *wp_gpio;
struct nvmem_layout *layout;
void *priv;
+ bool sysfs_cells_populated;
};

#if IS_ENABLED(CONFIG_OF)
--
2.34.1

2023-10-11 11:16:43

by Miquel Raynal

Subject: [PATCH v13 5/6] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
simplest cases, finding the content of a cell is just a matter of
looking at a known and fixed offset. However, nvmem layouts have been
recently introduced to cope with more advanced situations, where the
offset and size of the cells is not known in advance or is dynamic.
When using layouts, more advanced parsers are used by the kernel in
order to give direct access to the content of each cell regardless of
its position/size in the underlying device, but this information was
not accessible to the user.

By exposing the nvmem cells to the user through a dedicated "cells"
folder containing one file per cell, we provide straightforward access
to useful user information without the need for re-writing a userland
parser. The content of nvmem cells typically includes: product names,
manufacturing dates, MAC addresses, etc.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..7af70adf3690
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,21 @@
+What: /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date: May 2023
+KernelVersion: 6.5
+Contact: Miquel Raynal <[email protected]>
+Description:
+ The "cells" folder contains one file per cell exposed by the
+ NVMEM device. The name of the file is: <name>@<where>, with
+ <name> being the cell name and <where> its location in the NVMEM
+ device, in hexadecimal (without the '0x' prefix, to mimic device
+ tree node names). The length of the file is the size of the cell
+ (when known). The content of the file is the binary content of
+ the cell (may sometimes be ASCII, likely without trailing
+ character).
+ Note: This file is only present if CONFIG_NVMEM_SYSFS
+ is enabled.
+
+ Example::
+
+ hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d
+ 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
+ 0000000a
--
2.34.1

2023-10-11 11:16:50

by Miquel Raynal

Subject: [PATCH v13 1/6] of: device: Export of_device_make_bus_id()

This helper is really handy to create unique device names based on
their device tree path, and we may need it outside of the OF core (in
the NVMEM subsystem), so let's export it. As this helper has nothing
platform specific, let's move it to of/device.c instead of of/platform.c
so we can add its prototype to of_device.h.

Signed-off-by: Miquel Raynal <[email protected]>
Acked-by: Rob Herring <[email protected]>
---
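
A rough sketch of how a caller outside of/platform.c is expected to use
this helper (this mirrors what the NVMEM layouts code does later in
this series; the foo_* names are purely illustrative):

  #include <linux/device.h>
  #include <linux/of.h>
  #include <linux/of_device.h>

  /*
   * Hypothetical example: register a device backed by an OF node and
   * derive its unique name from its device tree path.
   */
  static int foo_create_device(struct device *parent, struct bus_type *bus,
                               struct device_node *np, struct device *dev)
  {
          device_initialize(dev);
          dev->parent = parent;
          dev->bus = bus;
          device_set_node(dev, of_fwnode_handle(of_node_get(np)));

          /* Name the device "<translated-address>.<node-name>" when possible */
          of_device_make_bus_id(dev);

          return device_add(dev);
  }
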
drivers/of/device.c | 41 +++++++++++++++++++++++++++++++++++++++
drivers/of/platform.c | 40 --------------------------------------
include/linux/of_device.h | 6 ++++++
3 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1ca42ad9dd15..6e9572c4af83 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -304,3 +304,44 @@ int of_device_uevent_modalias(const struct device *dev, struct kobj_uevent_env *
return 0;
}
EXPORT_SYMBOL_GPL(of_device_uevent_modalias);
+
+/**
+ * of_device_make_bus_id - Use the device node data to assign a unique name
+ * @dev: pointer to device structure that is linked to a device tree node
+ *
+ * This routine will first try using the translated bus address to
+ * derive a unique name. If it cannot, then it will prepend names from
+ * parent nodes until a unique name can be derived.
+ */
+void of_device_make_bus_id(struct device *dev)
+{
+ struct device_node *node = dev->of_node;
+ const __be32 *reg;
+ u64 addr;
+ u32 mask;
+
+ /* Construct the name, using parent nodes if necessary to ensure uniqueness */
+ while (node->parent) {
+ /*
+ * If the address can be translated, then that is as much
+ * uniqueness as we need. Make it the first component and return
+ */
+ reg = of_get_property(node, "reg", NULL);
+ if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
+ if (!of_property_read_u32(node, "mask", &mask))
+ dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
+ addr, ffs(mask) - 1, node, dev_name(dev));
+
+ else
+ dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
+ addr, node, dev_name(dev));
+ return;
+ }
+
+ /* format arguments only used if dev_name() resolves to NULL */
+ dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
+ kbasename(node->full_name), dev_name(dev));
+ node = node->parent;
+ }
+}
+EXPORT_SYMBOL_GPL(of_device_make_bus_id);
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index f235ab55b91e..be32e28c6f55 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -97,46 +97,6 @@ static const struct of_device_id of_skipped_node_table[] = {
* mechanism for creating devices from device tree nodes.
*/

-/**
- * of_device_make_bus_id - Use the device node data to assign a unique name
- * @dev: pointer to device structure that is linked to a device tree node
- *
- * This routine will first try using the translated bus address to
- * derive a unique name. If it cannot, then it will prepend names from
- * parent nodes until a unique name can be derived.
- */
-static void of_device_make_bus_id(struct device *dev)
-{
- struct device_node *node = dev->of_node;
- const __be32 *reg;
- u64 addr;
- u32 mask;
-
- /* Construct the name, using parent nodes if necessary to ensure uniqueness */
- while (node->parent) {
- /*
- * If the address can be translated, then that is as much
- * uniqueness as we need. Make it the first component and return
- */
- reg = of_get_property(node, "reg", NULL);
- if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
- if (!of_property_read_u32(node, "mask", &mask))
- dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
- addr, ffs(mask) - 1, node, dev_name(dev));
-
- else
- dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
- addr, node, dev_name(dev));
- return;
- }
-
- /* format arguments only used if dev_name() resolves to NULL */
- dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
- kbasename(node->full_name), dev_name(dev));
- node = node->parent;
- }
-}
-
/**
* of_device_alloc - Allocate and initialize an of_device
* @np: device node to assign to device
diff --git a/include/linux/of_device.h b/include/linux/of_device.h
index 2c7a3d4bc775..a72661e47faa 100644
--- a/include/linux/of_device.h
+++ b/include/linux/of_device.h
@@ -40,6 +40,9 @@ static inline int of_dma_configure(struct device *dev,
{
return of_dma_configure_id(dev, np, force_dma, NULL);
}
+
+void of_device_make_bus_id(struct device *dev);
+
#else /* CONFIG_OF */

static inline int of_driver_match_device(struct device *dev,
@@ -82,6 +85,9 @@ static inline int of_dma_configure(struct device *dev,
{
return 0;
}
+
+static inline void of_device_make_bus_id(struct device *dev) {}
+
#endif /* CONFIG_OF */

#endif /* _LINUX_OF_DEVICE_H */
--
2.34.1

2023-10-11 11:16:56

by Miquel Raynal

Subject: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Current layout support was initially written without modules support in
mind. When the requirement for module support rose, the existing base
was improved to adopt modularization support, but a design flaw was
introduced in the process. With the existing implementation, when a
storage device registers into NVMEM, the core tries to hook a layout
(if any) and populates its cells immediately. This means that if the
hardware description expects a layout to be hooked up but no driver was
provided for it, the storage medium will fail to probe and retry later from
scratch. Even if we consider that the hardware description shall be
correct, we could still probe the storage device (especially if it
contains the rootfs).

One way to overcome this situation is to consider the layouts as
devices, and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the corresponding driver, if relevant
- try to hook the NVMEM device with a layout in the notifier
And when a new layout is registered:
- try to hook all the existing NVMEM devices which are not yet hooked to
a layout with the new layout
This way, there is no strong order to enforce: any NVMEM device creation
or NVMEM layout driver insertion will be observed as a new event which
may lead to the creation of additional cells, without disturbing the
probes with costly (and sometimes endless) deferrals.

In order to achieve that goal we need:
* To keep track of all nvmem devices
* To create a new bus for the nvmem-layouts with minimal logic to match
nvmem-layout devices with nvmem-layout drivers.
All this infrastructure code is created in the layouts.c file.

Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
---
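
For reference, here is a minimal sketch of what a layout driver looks
like under this new device model (modeled on the onie-tlv and sl28vpd
conversions below; the foo_* names and compatible string are purely
illustrative):

  #include <linux/module.h>
  #include <linux/nvmem-provider.h>
  #include <linux/of.h>

  /* Cell parser: called once the layout is registered */
  static int foo_add_cells(struct device *dev, struct nvmem_device *nvmem,
                           struct nvmem_layout *layout)
  {
          /* Parse the device content and add each cell with nvmem_add_one_cell() */
          return 0;
  }

  /* Probe/remove now run on the nvmem-layout bus */
  static int foo_probe(struct nvmem_layout *layout)
  {
          layout->add_cells = foo_add_cells;

          return nvmem_layout_register(layout);
  }

  static void foo_remove(struct nvmem_layout *layout)
  {
          nvmem_layout_unregister(layout);
  }

  static const struct of_device_id foo_of_match_table[] = {
          { .compatible = "vendor,foo-layout", },
          {},
  };
  MODULE_DEVICE_TABLE(of, foo_of_match_table);

  static struct nvmem_layout_driver foo_layout = {
          .driver = {
                  .name = "foo-layout",
                  .of_match_table = foo_of_match_table,
          },
          .probe = foo_probe,
          .remove = foo_remove,
  };
  module_nvmem_layout_driver(foo_layout);

  MODULE_LICENSE("GPL");
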
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 130 ++++--------------
drivers/nvmem/internals.h | 21 +++
drivers/nvmem/layouts.c | 228 +++++++++++++++++++++++++++++++
drivers/nvmem/layouts/onie-tlv.c | 23 +++-
drivers/nvmem/layouts/sl28vpd.c | 23 +++-
include/linux/nvmem-provider.h | 34 ++---
7 files changed, 335 insertions(+), 126 deletions(-)
create mode 100644 drivers/nvmem/layouts.c

diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
index 423baf089515..77be96076ea6 100644
--- a/drivers/nvmem/Makefile
+++ b/drivers/nvmem/Makefile
@@ -4,7 +4,7 @@
#

obj-$(CONFIG_NVMEM) += nvmem_core.o
-nvmem_core-y := core.o
+nvmem_core-y := core.o layouts.o
obj-y += layouts/

# Devices
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index eefb5d0a0c91..0e364b8e9f99 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -55,9 +55,6 @@ static LIST_HEAD(nvmem_lookup_list);

static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);

-static DEFINE_SPINLOCK(nvmem_layout_lock);
-static LIST_HEAD(nvmem_layouts);
-
static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
void *val, size_t bytes)
{
@@ -741,91 +738,22 @@ static int nvmem_add_cells_from_fixed_layout(struct nvmem_device *nvmem)
return err;
}

-int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner)
+int nvmem_layout_register(struct nvmem_layout *layout)
{
- layout->owner = owner;
+ if (!layout->add_cells)
+ return -EINVAL;

- spin_lock(&nvmem_layout_lock);
- list_add(&layout->node, &nvmem_layouts);
- spin_unlock(&nvmem_layout_lock);
-
- blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_ADD, layout);
-
- return 0;
+ /* Populate the cells */
+ return layout->add_cells(&layout->nvmem->dev, layout->nvmem, layout);
}
-EXPORT_SYMBOL_GPL(__nvmem_layout_register);
+EXPORT_SYMBOL_GPL(nvmem_layout_register);

void nvmem_layout_unregister(struct nvmem_layout *layout)
{
- blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_REMOVE, layout);
-
- spin_lock(&nvmem_layout_lock);
- list_del(&layout->node);
- spin_unlock(&nvmem_layout_lock);
+ /* Keep the API even with an empty stub in case we need it later */
}
EXPORT_SYMBOL_GPL(nvmem_layout_unregister);

-static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
-{
- struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
-
- layout_np = of_nvmem_layout_get_container(nvmem);
- if (!layout_np)
- return NULL;
-
- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
- spin_lock(&nvmem_layout_lock);
-
- list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
- if (try_module_get(l->owner))
- layout = l;
-
- break;
- }
- }
-
- spin_unlock(&nvmem_layout_lock);
- of_node_put(layout_np);
-
- return layout;
-}
-
-static void nvmem_layout_put(struct nvmem_layout *layout)
-{
- if (layout)
- module_put(layout->owner);
-}
-
-static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
-{
- struct nvmem_layout *layout = nvmem->layout;
- int ret;
-
- if (layout && layout->add_cells) {
- ret = layout->add_cells(&nvmem->dev, nvmem, layout);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-#if IS_ENABLED(CONFIG_OF)
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
-}
-EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
-#endif
-
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout)
{
@@ -833,7 +761,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;

layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev.driver->of_match_table, layout_np);

return match ? match->data : NULL;
}
@@ -944,19 +872,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}

- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -975,7 +890,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
+ rval = nvmem_populate_layout(nvmem);
if (rval)
goto err_remove_cells;

@@ -983,16 +898,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)

rval = device_add(&nvmem->dev);
if (rval)
- goto err_remove_cells;
+ goto err_destroy_layout;
+

blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

+err_destroy_layout:
+ nvmem_destroy_layout(nvmem);
err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -1014,7 +930,7 @@ static void nvmem_device_release(struct kref *kref)
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);

nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
+ nvmem_destroy_layout(nvmem);
device_unregister(&nvmem->dev);
}

@@ -1400,7 +1316,10 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
of_node_put(cell_np);
if (!cell_entry) {
__nvmem_device_put(nvmem);
- return ERR_PTR(-ENOENT);
+ if (nvmem->layout)
+ return ERR_PTR(-EAGAIN);
+ else
+ return ERR_PTR(-ENOENT);
}

cell = nvmem_create_cell(cell_entry, id, cell_index);
@@ -2096,11 +2015,22 @@ EXPORT_SYMBOL_GPL(nvmem_dev_name);

static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ ret = nvmem_layout_bus_register();
+ if (ret)
+ bus_unregister(&nvmem_bus_type);
+
+ return ret;
}

static void __exit nvmem_exit(void)
{
+ nvmem_layout_bus_unregister();
bus_unregister(&nvmem_bus_type);
}

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
index ce353831cd65..c669c96e9052 100644
--- a/drivers/nvmem/internals.h
+++ b/drivers/nvmem/internals.h
@@ -32,4 +32,25 @@ struct nvmem_device {
void *priv;
};

+#if IS_ENABLED(CONFIG_OF)
+int nvmem_layout_bus_register(void);
+void nvmem_layout_bus_unregister(void);
+int nvmem_populate_layout(struct nvmem_device *nvmem);
+void nvmem_destroy_layout(struct nvmem_device *nvmem);
+#else /* CONFIG_OF */
+static inline int nvmem_layout_bus_register(void)
+{
+ return 0;
+}
+
+static inline void nvmem_layout_bus_unregister(void) {}
+
+static inline int nvmem_populate_layout(struct nvmem_device *nvmem)
+{
+ return 0;
+}
+
+static inline void nvmem_destroy_layout(struct nvmem_device *nvmem) { }
+#endif /* CONFIG_OF */
+
#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
diff --git a/drivers/nvmem/layouts.c b/drivers/nvmem/layouts.c
new file mode 100644
index 000000000000..8c73a8a15dd5
--- /dev/null
+++ b/drivers/nvmem/layouts.c
@@ -0,0 +1,228 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMEM layout bus handling
+ *
+ * Copyright (C) 2023 Bootlin
+ * Author: Miquel Raynal <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+
+#include "internals.h"
+
+#if IS_ENABLED(CONFIG_OF)
+#define to_nvmem_layout_driver(drv) \
+ (container_of((drv), struct nvmem_layout_driver, driver))
+#define to_nvmem_layout_device(_dev) \
+ container_of((_dev), struct nvmem_layout, dev)
+
+static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
+{
+ return of_driver_match_device(dev, drv);
+}
+
+static int nvmem_layout_bus_probe(struct device *dev)
+{
+ struct nvmem_layout_driver *drv = to_nvmem_layout_driver(dev->driver);
+ struct nvmem_layout *layout = to_nvmem_layout_device(dev);
+
+ if (!drv->probe || !drv->remove)
+ return -EINVAL;
+
+ return drv->probe(layout);
+}
+
+static void nvmem_layout_bus_remove(struct device *dev)
+{
+ struct nvmem_layout_driver *drv = to_nvmem_layout_driver(dev->driver);
+ struct nvmem_layout *layout = to_nvmem_layout_device(dev);
+
+ return drv->remove(layout);
+}
+
+static struct bus_type nvmem_layout_bus_type = {
+ .name = "nvmem-layout",
+ .match = nvmem_layout_bus_match,
+ .probe = nvmem_layout_bus_probe,
+ .remove = nvmem_layout_bus_remove,
+};
+
+static struct device nvmem_layout_bus = {
+ .init_name = "nvmem-layout",
+};
+
+int nvmem_layout_driver_register(struct nvmem_layout_driver *drv)
+{
+ drv->driver.bus = &nvmem_layout_bus_type;
+
+ return driver_register(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(nvmem_layout_driver_register);
+
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv)
+{
+ driver_unregister(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(nvmem_layout_driver_unregister);
+
+static void nvmem_layout_release_device(struct device *dev)
+{
+ struct nvmem_layout *layout = to_nvmem_layout_device(dev);
+
+ of_node_put(layout->dev.of_node);
+ kfree(layout);
+}
+
+static int nvmem_layout_create_device(struct nvmem_device *nvmem,
+ struct device_node *np)
+{
+ struct nvmem_layout *layout;
+ struct device *dev;
+ int ret;
+
+ layout = kzalloc(sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ /* Create a bidirectional link */
+ layout->nvmem = nvmem;
+ nvmem->layout = layout;
+
+ /* Device model registration */
+ dev = &layout->dev;
+ device_initialize(dev);
+ dev->parent = &nvmem_layout_bus;
+ dev->bus = &nvmem_layout_bus_type;
+ dev->release = nvmem_layout_release_device;
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->dma_mask = &dev->coherent_dma_mask;
+ device_set_node(dev, of_fwnode_handle(of_node_get(np)));
+ of_device_make_bus_id(dev);
+ of_msi_configure(dev, dev->of_node);
+
+ ret = device_add(dev);
+ if (ret) {
+ put_device(dev);
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct of_device_id of_nvmem_layout_skip_table[] = {
+ { .compatible = "fixed-layout", },
+ {}
+};
+
+static int nvmem_layout_bus_populate(struct nvmem_device *nvmem,
+ struct device_node *layout_dn)
+{
+ int ret;
+
+ /* Make sure it has a compatible property */
+ if (!of_get_property(layout_dn, "compatible", NULL)) {
+ pr_debug("%s() - skipping %pOF, no compatible prop\n",
+ __func__, layout_dn);
+ return 0;
+ }
+
+ /* Fixed layouts are parsed manually somewhere else for now */
+ if (of_match_node(of_nvmem_layout_skip_table, layout_dn)) {
+ pr_debug("%s() - skipping %pOF node\n", __func__, layout_dn);
+ return 0;
+ }
+
+ if (of_node_check_flag(layout_dn, OF_POPULATED_BUS)) {
+ pr_debug("%s() - skipping %pOF, already populated\n",
+ __func__, layout_dn);
+
+ return 0;
+ }
+
+ /* NVMEM layout buses expect only a single device representing the layout */
+ ret = nvmem_layout_create_device(nvmem, layout_dn);
+ if (ret)
+ return ret;
+
+ of_node_set_flag(layout_dn, OF_POPULATED_BUS);
+
+ return 0;
+}
+
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+{
+ return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
+}
+EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
+
+/*
+ * Returns the number of devices populated, 0 if the operation was not relevant
+ * for this nvmem device, an error code otherwise.
+ */
+int nvmem_populate_layout(struct nvmem_device *nvmem)
+{
+ struct device_node *nvmem_dn, *layout_dn;
+ int ret;
+
+ layout_dn = of_nvmem_layout_get_container(nvmem);
+ if (!layout_dn)
+ return 0;
+
+ nvmem_dn = of_node_get(nvmem->dev.of_node);
+ if (!nvmem_dn) {
+ of_node_put(layout_dn);
+ return 0;
+ }
+
+ /* Ensure the layout driver is loaded */
+ of_request_module(layout_dn);
+
+ /* Populate the layout device */
+ device_links_supplier_sync_state_pause();
+ ret = nvmem_layout_bus_populate(nvmem, layout_dn);
+ device_links_supplier_sync_state_resume();
+
+ of_node_put(nvmem_dn);
+ of_node_put(layout_dn);
+ return ret;
+}
+
+void nvmem_destroy_layout(struct nvmem_device *nvmem)
+{
+ struct device *dev = &nvmem->layout->dev;
+
+ of_node_clear_flag(dev->of_node, OF_POPULATED_BUS);
+ put_device(dev);
+}
+
+int nvmem_layout_bus_register(void)
+{
+ int ret;
+
+ ret = device_register(&nvmem_layout_bus);
+ if (ret) {
+ put_device(&nvmem_layout_bus);
+ return ret;
+ }
+
+ ret = bus_register(&nvmem_layout_bus_type);
+ if (ret) {
+ device_unregister(&nvmem_layout_bus);
+ return ret;
+ }
+
+ return 0;
+}
+
+void nvmem_layout_bus_unregister(void)
+{
+ bus_unregister(&nvmem_layout_bus_type);
+ device_unregister(&nvmem_layout_bus);
+}
+#endif
diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
index 59fc87ccfcff..8d19346b9206 100644
--- a/drivers/nvmem/layouts/onie-tlv.c
+++ b/drivers/nvmem/layouts/onie-tlv.c
@@ -226,16 +226,31 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int onie_tlv_probe(struct nvmem_layout *layout)
+{
+ layout->add_cells = onie_tlv_parse_table;
+
+ return nvmem_layout_register(layout);
+}
+
+static void onie_tlv_remove(struct nvmem_layout *layout)
+{
+ nvmem_layout_unregister(layout);
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);

-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct nvmem_layout_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ },
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
};
module_nvmem_layout_driver(onie_tlv_layout);

diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
index 05671371f631..ab4ceaf1ea16 100644
--- a/drivers/nvmem/layouts/sl28vpd.c
+++ b/drivers/nvmem/layouts/sl28vpd.c
@@ -135,16 +135,31 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int sl28vpd_probe(struct nvmem_layout *layout)
+{
+ layout->add_cells = sl28vpd_add_cells;
+
+ return nvmem_layout_register(layout);
+}
+
+static void sl28vpd_remove(struct nvmem_layout *layout)
+{
+ nvmem_layout_unregister(layout);
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);

-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct nvmem_layout_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ },
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
};
module_nvmem_layout_driver(sl28vpd_layout);

diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index 2905f9e6fc2a..a0ea8326605a 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -9,6 +9,7 @@
#ifndef _LINUX_NVMEM_PROVIDER_H
#define _LINUX_NVMEM_PROVIDER_H

+#include <linux/device.h>
#include <linux/device/driver.h>
#include <linux/err.h>
#include <linux/errno.h>
@@ -154,15 +155,13 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
+ * @nvmem: The underlying NVMEM device
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
* @fixup_cell_info: Will be called before a cell is added. Can be
* used to modify the nvmem_cell_info.
- * @owner: Pointer to struct module.
- * @node: List node.
*
* A nvmem device can hold a well defined structure which can just be
* evaluated during runtime. For example a TLV list, or a list of "name=val"
@@ -170,17 +169,19 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device dev;
+ struct nvmem_device *nvmem;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
struct nvmem_layout *layout,
struct nvmem_cell_info *cell);
+};

- /* private */
- struct module *owner;
- struct list_head node;
+struct nvmem_layout_driver {
+ struct device_driver driver;
+ int (*probe)(struct nvmem_layout *layout);
+ void (*remove)(struct nvmem_layout *layout);
};

#if IS_ENABLED(CONFIG_NVMEM)
@@ -197,11 +198,15 @@ void nvmem_del_cell_table(struct nvmem_cell_table *table);
int nvmem_add_one_cell(struct nvmem_device *nvmem,
const struct nvmem_cell_info *info);

-int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner);
-#define nvmem_layout_register(layout) \
- __nvmem_layout_register(layout, THIS_MODULE)
+int nvmem_layout_register(struct nvmem_layout *layout);
void nvmem_layout_unregister(struct nvmem_layout *layout);

+int nvmem_layout_driver_register(struct nvmem_layout_driver *drv);
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv);
+#define module_nvmem_layout_driver(__nvmem_layout_driver) \
+ module_driver(__nvmem_layout_driver, nvmem_layout_driver_register, \
+ nvmem_layout_driver_unregister)
+
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

@@ -257,9 +262,4 @@ static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_dev
return NULL;
}
#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
--
2.34.1

2023-10-12 12:37:25

by kernel test robot

Subject: Re: [PATCH v13 2/6] nvmem: Move of_nvmem_layout_get_container() in another header

Hi Miquel,

kernel test robot noticed the following build errors:

[auto build test ERROR on robh/for-next]
[also build test ERROR on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231011-191637
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231011111529.86440-3-miquel.raynal%40bootlin.com
patch subject: [PATCH v13 2/6] nvmem: Move of_nvmem_layout_get_container() in another header
config: um-randconfig-001-20231012 (https://download.01.org/0day-ci/archive/20231012/[email protected]/config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231012/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

/usr/bin/ld: drivers/nvmem/core.o: in function `nvmem_layout_get_match_data':
core.c:(.text+0x46d): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/core.o: in function `nvmem_register':
core.c:(.text+0x1805): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: core.c:(.text+0x1abd): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/layouts/sl28vpd.o: in function `sl28vpd_add_cells':
>> sl28vpd.c:(.text+0x1ba): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/layouts/onie-tlv.o: in function `onie_tlv_parse_table':
>> onie-tlv.c:(.text+0x182): undefined reference to `of_nvmem_layout_get_container'
collect2: error: ld returned 1 exit status

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-12 16:11:35

by kernel test robot

Subject: Re: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

kernel test robot noticed the following build errors:

[auto build test ERROR on robh/for-next]
[also build test ERROR on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231011-191637
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231011111529.86440-5-miquel.raynal%40bootlin.com
patch subject: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices
config: um-randconfig-001-20231012 (https://download.01.org/0day-ci/archive/20231013/[email protected]/config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231013/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

/usr/bin/ld: drivers/nvmem/core.o: in function `nvmem_layout_get_match_data':
core.c:(.text+0x2ed): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/core.o: in function `nvmem_register':
core.c:(.text+0x193d): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/layouts/sl28vpd.o: in function `sl28vpd_add_cells':
sl28vpd.c:(.text+0x1fa): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/layouts/sl28vpd.o: in function `sl28vpd_layout_init':
>> sl28vpd.c:(.init.text+0x9): undefined reference to `nvmem_layout_driver_register'
/usr/bin/ld: drivers/nvmem/layouts/sl28vpd.o: in function `sl28vpd_layout_exit':
>> sl28vpd.c:(.exit.text+0x9): undefined reference to `nvmem_layout_driver_unregister'
/usr/bin/ld: drivers/nvmem/layouts/onie-tlv.o: in function `onie_tlv_parse_table':
onie-tlv.c:(.text+0x1c2): undefined reference to `of_nvmem_layout_get_container'
/usr/bin/ld: drivers/nvmem/layouts/onie-tlv.o: in function `onie_tlv_layout_init':
>> onie-tlv.c:(.init.text+0x9): undefined reference to `nvmem_layout_driver_register'
/usr/bin/ld: drivers/nvmem/layouts/onie-tlv.o: in function `onie_tlv_layout_exit':
>> onie-tlv.c:(.exit.text+0x9): undefined reference to `nvmem_layout_driver_unregister'
collect2: error: ld returned 1 exit status

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-11-22 22:03:45

by Marco Felsch

Subject: Re: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

thanks a lot for your effort on this. Please see my comments inline.

On 23-10-11, Miquel Raynal wrote:
> Current layout support was initially written without modules support in
> mind. When the requirement for module support rose, the existing base
> was improved to adopt modularization support, but a design flaw was
> introduced in the process. With the existing implementation, when a
> storage device registers into NVMEM, the core tries to hook a layout
> (if any) and populates its cells immediately. This means that if the
> hardware description expects a layout to be hooked up but no driver was
> provided for it, the storage medium will fail to probe and retry later from
> scratch. Even if we consider that the hardware description shall be
> correct, we could still probe the storage device (especially if it
> contains the rootfs).
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the corresponding driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked to
> a layout with the new layout
> This way, there is no strong order to enforce: any NVMEM device creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> In order to achieve that goal we need:
> * To keep track of all nvmem devices
> * To create a new bus for the nvmem-layouts with minimal logic to match
> nvmem-layout devices with nvmem-layout drivers.
> All this infrastructure code is created in the layouts.c file.
>
> Signed-off-by: Miquel Raynal <[email protected]>
> Tested-by: Rafał Miłecki <[email protected]>
> ---
> drivers/nvmem/Makefile | 2 +-
> drivers/nvmem/core.c | 130 ++++--------------
> drivers/nvmem/internals.h | 21 +++
> drivers/nvmem/layouts.c | 228 +++++++++++++++++++++++++++++++
> drivers/nvmem/layouts/onie-tlv.c | 23 +++-
> drivers/nvmem/layouts/sl28vpd.c | 23 +++-
> include/linux/nvmem-provider.h | 34 ++---
> 7 files changed, 335 insertions(+), 126 deletions(-)
> create mode 100644 drivers/nvmem/layouts.c
>
> diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
> index 423baf089515..77be96076ea6 100644
> --- a/drivers/nvmem/Makefile
> +++ b/drivers/nvmem/Makefile
> @@ -4,7 +4,7 @@
> #
>
> obj-$(CONFIG_NVMEM) += nvmem_core.o
> -nvmem_core-y := core.o
> +nvmem_core-y := core.o layouts.o
> obj-y += layouts/
>
> # Devices
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index eefb5d0a0c91..0e364b8e9f99 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -55,9 +55,6 @@ static LIST_HEAD(nvmem_lookup_list);
>
> static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>
> -static DEFINE_SPINLOCK(nvmem_layout_lock);
> -static LIST_HEAD(nvmem_layouts);
> -
> static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
> void *val, size_t bytes)
> {
> @@ -741,91 +738,22 @@ static int nvmem_add_cells_from_fixed_layout(struct nvmem_device *nvmem)
> return err;
> }
>
> -int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner)
> +int nvmem_layout_register(struct nvmem_layout *layout)
> {
> - layout->owner = owner;
> + if (!layout->add_cells)
> + return -EINVAL;
>
> - spin_lock(&nvmem_layout_lock);
> - list_add(&layout->node, &nvmem_layouts);
> - spin_unlock(&nvmem_layout_lock);
> -
> - blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_ADD, layout);
> -
> - return 0;
> + /* Populate the cells */
> + return layout->add_cells(&layout->nvmem->dev, layout->nvmem, layout);
> }
> -EXPORT_SYMBOL_GPL(__nvmem_layout_register);
> +EXPORT_SYMBOL_GPL(nvmem_layout_register);
>
> void nvmem_layout_unregister(struct nvmem_layout *layout)
> {
> - blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_REMOVE, layout);
> -
> - spin_lock(&nvmem_layout_lock);
> - list_del(&layout->node);
> - spin_unlock(&nvmem_layout_lock);
> + /* Keep the API even with an empty stub in case we need it later */
> }
> EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
>
> -static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
> -{
> - struct device_node *layout_np;
> - struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
> -
> - layout_np = of_nvmem_layout_get_container(nvmem);
> - if (!layout_np)
> - return NULL;
> -
> - /*
> - * In case the nvmem device was built-in while the layout was built as a
> - * module, we shall manually request the layout driver loading otherwise
> - * we'll never have any match.
> - */
> - of_request_module(layout_np);
> -
> - spin_lock(&nvmem_layout_lock);
> -
> - list_for_each_entry(l, &nvmem_layouts, node) {
> - if (of_match_node(l->of_match_table, layout_np)) {
> - if (try_module_get(l->owner))
> - layout = l;
> -
> - break;
> - }
> - }
> -
> - spin_unlock(&nvmem_layout_lock);
> - of_node_put(layout_np);
> -
> - return layout;
> -}
> -
> -static void nvmem_layout_put(struct nvmem_layout *layout)
> -{
> - if (layout)
> - module_put(layout->owner);
> -}
> -
> -static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
> -{
> - struct nvmem_layout *layout = nvmem->layout;
> - int ret;
> -
> - if (layout && layout->add_cells) {
> - ret = layout->add_cells(&nvmem->dev, nvmem, layout);
> - if (ret)
> - return ret;
> - }
> -
> - return 0;
> -}
> -
> -#if IS_ENABLED(CONFIG_OF)
> -struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
> -{
> - return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
> -}
> -EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
> -#endif
> -
> const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> struct nvmem_layout *layout)
> {
> @@ -833,7 +761,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> const struct of_device_id *match;
>
> layout_np = of_nvmem_layout_get_container(nvmem);
> - match = of_match_node(layout->of_match_table, layout_np);
> + match = of_match_node(layout->dev.driver->of_match_table, layout_np);
>
> return match ? match->data : NULL;
> }
> @@ -944,19 +872,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> goto err_put_device;
> }
>
> - /*
> - * If the driver supplied a layout by config->layout, the module
> - * pointer will be NULL and nvmem_layout_put() will be a noop.
> - */
> - nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
> - if (IS_ERR(nvmem->layout)) {
> - rval = PTR_ERR(nvmem->layout);
> - nvmem->layout = NULL;
> -
> - if (rval == -EPROBE_DEFER)
> - goto err_teardown_compat;
> - }
> -
> if (config->cells) {
> rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
> if (rval)
> @@ -975,7 +890,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> if (rval)
> goto err_remove_cells;
>
> - rval = nvmem_add_cells_from_layout(nvmem);
> + rval = nvmem_populate_layout(nvmem);
> if (rval)
> goto err_remove_cells;
>
> @@ -983,16 +898,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
>
> rval = device_add(&nvmem->dev);
> if (rval)
> - goto err_remove_cells;
> + goto err_destroy_layout;
> +
>
> blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>
> return nvmem;
>
> +err_destroy_layout:
> + nvmem_destroy_layout(nvmem);
> err_remove_cells:
> nvmem_device_remove_all_cells(nvmem);
> - nvmem_layout_put(nvmem->layout);
> -err_teardown_compat:
> if (config->compat)
> nvmem_sysfs_remove_compat(nvmem, config);
> err_put_device:
> @@ -1014,7 +930,7 @@ static void nvmem_device_release(struct kref *kref)
> device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
>
> nvmem_device_remove_all_cells(nvmem);
> - nvmem_layout_put(nvmem->layout);
> + nvmem_destroy_layout(nvmem);
> device_unregister(&nvmem->dev);
> }
>
> @@ -1400,7 +1316,10 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
> of_node_put(cell_np);
> if (!cell_entry) {
> __nvmem_device_put(nvmem);
> - return ERR_PTR(-ENOENT);
> + if (nvmem->layout)
> + return ERR_PTR(-EAGAIN);
> + else
> + return ERR_PTR(-ENOENT);
> }
>
> cell = nvmem_create_cell(cell_entry, id, cell_index);
> @@ -2096,11 +2015,22 @@ EXPORT_SYMBOL_GPL(nvmem_dev_name);
>
> static int __init nvmem_init(void)
> {
> - return bus_register(&nvmem_bus_type);
> + int ret;
> +
> + ret = bus_register(&nvmem_bus_type);
> + if (ret)
> + return ret;
> +
> + ret = nvmem_layout_bus_register();
> + if (ret)
> + bus_unregister(&nvmem_bus_type);
> +
> + return ret;
> }
>
> static void __exit nvmem_exit(void)
> {
> + nvmem_layout_bus_unregister();
> bus_unregister(&nvmem_bus_type);
> }
>
> diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
> index ce353831cd65..c669c96e9052 100644
> --- a/drivers/nvmem/internals.h
> +++ b/drivers/nvmem/internals.h
> @@ -32,4 +32,25 @@ struct nvmem_device {
> void *priv;
> };
>
> +#if IS_ENABLED(CONFIG_OF)
> +int nvmem_layout_bus_register(void);
> +void nvmem_layout_bus_unregister(void);
> +int nvmem_populate_layout(struct nvmem_device *nvmem);
> +void nvmem_destroy_layout(struct nvmem_device *nvmem);
> +#else /* CONFIG_OF */
> +static inline int nvmem_layout_bus_register(void)
> +{
> + return 0;
> +}
> +
> +static inline void nvmem_layout_bus_unregister(void) {}
> +
> +static inline int nvmem_populate_layout(struct nvmem_device *nvmem)
> +{
> + return 0;
> +}
> +
> +static inline void nvmem_destroy_layout(struct nvmem_device *nvmem) { }
> +#endif /* CONFIG_OF */
> +
> #endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
> diff --git a/drivers/nvmem/layouts.c b/drivers/nvmem/layouts.c
> new file mode 100644
> index 000000000000..8c73a8a15dd5
> --- /dev/null
> +++ b/drivers/nvmem/layouts.c
> @@ -0,0 +1,228 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * NVMEM layout bus handling
> + *
> + * Copyright (C) 2023 Bootlin
> + * Author: Miquel Raynal <[email protected]>
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/nvmem-consumer.h>
> +#include <linux/nvmem-provider.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/of_irq.h>
> +
> +#include "internals.h"
> +
> +#if IS_ENABLED(CONFIG_OF)

Do we really need to cover this? Most of_* functions do have stubs now. On
the other hand, we could force the user to have OF enabled if everything
requires OF; this can be done via a Kconfig select.
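
For illustration only, assuming we kept a dedicated Kconfig symbol for the
layout handling code (NVMEM_LAYOUTS is a hypothetical name here, untested):

    config NVMEM_LAYOUTS
    	bool
    	select OF

Alternatively a "depends on OF" on the individual layout drivers would have
a similar effect without forcing OF in.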

> +#define to_nvmem_layout_driver(drv) \
> + (container_of((drv), struct nvmem_layout_driver, driver))
> +#define to_nvmem_layout_device(_dev) \
> + container_of((_dev), struct nvmem_layout, dev)
> +
> +static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
> +{
> + return of_driver_match_device(dev, drv);
> +}
> +
> +static int nvmem_layout_bus_probe(struct device *dev)
> +{
> + struct nvmem_layout_driver *drv = to_nvmem_layout_driver(dev->driver);
> + struct nvmem_layout *layout = to_nvmem_layout_device(dev);
> +
> + if (!drv->probe || !drv->remove)
> + return -EINVAL;
> +
> + return drv->probe(layout);
> +}
> +
> +static void nvmem_layout_bus_remove(struct device *dev)
> +{
> + struct nvmem_layout_driver *drv = to_nvmem_layout_driver(dev->driver);
> + struct nvmem_layout *layout = to_nvmem_layout_device(dev);
> +
> + return drv->remove(layout);
> +}
> +
> +static struct bus_type nvmem_layout_bus_type = {
> + .name = "nvmem-layout",
> + .match = nvmem_layout_bus_match,
> + .probe = nvmem_layout_bus_probe,
> + .remove = nvmem_layout_bus_remove,
> +};
> +
> +static struct device nvmem_layout_bus = {
> + .init_name = "nvmem-layout",
> +};

Do we need this dummy device here? Please see below..

> +int nvmem_layout_driver_register(struct nvmem_layout_driver *drv)
> +{
> + drv->driver.bus = &nvmem_layout_bus_type;
> +
> + return driver_register(&drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(nvmem_layout_driver_register);
> +
> +void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv)
> +{
> + driver_unregister(&drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(nvmem_layout_driver_unregister);
> +
> +static void nvmem_layout_release_device(struct device *dev)
> +{
> + struct nvmem_layout *layout = to_nvmem_layout_device(dev);
> +
> + of_node_put(layout->dev.of_node);
> + kfree(layout);
> +}
> +
> +static int nvmem_layout_create_device(struct nvmem_device *nvmem,
> + struct device_node *np)
> +{
> + struct nvmem_layout *layout;
> + struct device *dev;
> + int ret;
> +
> + layout = kzalloc(sizeof(*dev), GFP_KERNEL);
^
this seems wrong.
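The allocation uses the size of struct device instead of the containing
layout structure; presumably what was meant (just a guess from the context)
is:

	layout = kzalloc(sizeof(*layout), GFP_KERNEL);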

> + if (!layout)
> + return -ENOMEM;
> +
> + /* Create a bidirectional link */
> + layout->nvmem = nvmem;
> + nvmem->layout = layout;
> +
> + /* Device model registration */
> + dev = &layout->dev;
> + device_initialize(dev);
> + dev->parent = &nvmem_layout_bus;

We do set it as the parent device here, but it's basically a dummy device.
Why don't we set the nvmem device instead? This becomes crucial for PM
if I understand it correctly: the parent device gets enabled automatically
when the child (the nvmem-layout dev) is accessed. With the dummy device as
parent nothing will happen and your nvmem device (EEPROM or so) will
still be unpowered, right?
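
i.e. something along these lines (untested), so that runtime PM on the
layout device propagates to the underlying storage device:

	dev->parent = &nvmem->dev;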

> + dev->bus = &nvmem_layout_bus_type;
> + dev->release = nvmem_layout_release_device;
> + dev->coherent_dma_mask = DMA_BIT_MASK(32);
> + dev->dma_mask = &dev->coherent_dma_mask;
> + device_set_node(dev, of_fwnode_handle(of_node_get(np)));
> + of_device_make_bus_id(dev);
> + of_msi_configure(dev, dev->of_node);
> +
> + ret = device_add(dev);
> + if (ret) {
> + put_device(dev);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static const struct of_device_id of_nvmem_layout_skip_table[] = {
> + { .compatible = "fixed-layout", },
> + {}
> +};
> +
> +static int nvmem_layout_bus_populate(struct nvmem_device *nvmem,
> + struct device_node *layout_dn)
> +{
> + int ret;
> +
> + /* Make sure it has a compatible property */
> + if (!of_get_property(layout_dn, "compatible", NULL)) {
> + pr_debug("%s() - skipping %pOF, no compatible prop\n",
> + __func__, layout_dn);
> + return 0;
> + }
> +
> + /* Fixed layouts are parsed manually somewhere else for now */
> + if (of_match_node(of_nvmem_layout_skip_table, layout_dn)) {
> + pr_debug("%s() - skipping %pOF node\n", __func__, layout_dn);
> + return 0;
> + }
> +
> + if (of_node_check_flag(layout_dn, OF_POPULATED_BUS)) {
> + pr_debug("%s() - skipping %pOF, already populated\n",
> + __func__, layout_dn);
> +
> + return 0;
> + }
> +
> + /* NVMEM layout buses expect only a single device representing the layout */
> + ret = nvmem_layout_create_device(nvmem, layout_dn);
> + if (ret)
> + return ret;
> +
> + of_node_set_flag(layout_dn, OF_POPULATED_BUS);
> +
> + return 0;
> +}
> +
> +struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
> +{
> + return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
> +}
> +EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
> +
> +/*
> + * Returns the number of devices populated, 0 if the operation was not relevant
> + * for this nvmem device, an error code otherwise.
> + */
> +int nvmem_populate_layout(struct nvmem_device *nvmem)
> +{
> + struct device_node *nvmem_dn, *layout_dn;
> + int ret;
> +
> + layout_dn = of_nvmem_layout_get_container(nvmem);
> + if (!layout_dn)
> + return 0;
> +
> + nvmem_dn = of_node_get(nvmem->dev.of_node);
> + if (!nvmem_dn) {
> + of_node_put(layout_dn);
> + return 0;
> + }

Why do we need to request the nvmem_dn node here? It's unused.
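
If it really is unused, the whole function could presumably be reduced to
something like this (untested sketch, same calls as in the patch, just
without the nvmem_dn handling):

	int nvmem_populate_layout(struct nvmem_device *nvmem)
	{
		struct device_node *layout_dn;
		int ret;

		layout_dn = of_nvmem_layout_get_container(nvmem);
		if (!layout_dn)
			return 0;

		/* Ensure the layout driver is loaded */
		of_request_module(layout_dn);

		/* Populate the layout device */
		device_links_supplier_sync_state_pause();
		ret = nvmem_layout_bus_populate(nvmem, layout_dn);
		device_links_supplier_sync_state_resume();

		of_node_put(layout_dn);
		return ret;
	}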

> +
> + /* Ensure the layout driver is loaded */
> + of_request_module(layout_dn);
> +
> + /* Populate the layout device */
> + device_links_supplier_sync_state_pause();
> + ret = nvmem_layout_bus_populate(nvmem, layout_dn);
> + device_links_supplier_sync_state_resume();
> +
> + of_node_put(nvmem_dn);
> + of_node_put(layout_dn);
> + return ret;
> +}
> +
> +void nvmem_destroy_layout(struct nvmem_device *nvmem)
> +{
> + struct device *dev = &nvmem->layout->dev;
> +
> + of_node_clear_flag(dev->of_node, OF_POPULATED_BUS);
> + put_device(dev);
> +}
> +
> +int nvmem_layout_bus_register(void)
> +{
> + int ret;
> +
> + ret = device_register(&nvmem_layout_bus);
> + if (ret) {
> + put_device(&nvmem_layout_bus);
> + return ret;
> + }

This does not seem to be required. Just register the bus and we should be
fine.
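
i.e. something like (untested):

	int nvmem_layout_bus_register(void)
	{
		return bus_register(&nvmem_layout_bus_type);
	}

	void nvmem_layout_bus_unregister(void)
	{
		bus_unregister(&nvmem_layout_bus_type);
	}

and the device_register()/device_unregister() calls on the dummy device
would go away entirely.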

> +
> + ret = bus_register(&nvmem_layout_bus_type);
> + if (ret) {
> + device_unregister(&nvmem_layout_bus);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +void nvmem_layout_bus_unregister(void)
> +{
> + bus_unregister(&nvmem_layout_bus_type);
> + device_unregister(&nvmem_layout_bus);

Can be dropped here as well.

> +}
> +#endif
> diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
> index 59fc87ccfcff..8d19346b9206 100644
> --- a/drivers/nvmem/layouts/onie-tlv.c
> +++ b/drivers/nvmem/layouts/onie-tlv.c
> @@ -226,16 +226,31 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
> return 0;
> }
>
> +static int onie_tlv_probe(struct nvmem_layout *layout)
> +{
> + layout->add_cells = onie_tlv_parse_table;

Nit: the add cells could be done here as well, same for the other
layout. Would save us one indirection.
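
e.g. something like this (untested; it relies on layout->nvmem being set
before probe, which nvmem_layout_create_device() already does):

	static int onie_tlv_probe(struct nvmem_layout *layout)
	{
		return onie_tlv_parse_table(&layout->nvmem->dev,
					    layout->nvmem, layout);
	}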

> +
> + return nvmem_layout_register(layout);
> +}
> +
> +static void onie_tlv_remove(struct nvmem_layout *layout)
> +{
> + nvmem_layout_unregister(layout);
> +}
> +
> static const struct of_device_id onie_tlv_of_match_table[] = {
> { .compatible = "onie,tlv-layout", },
> {},
> };
> MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);
>
> -static struct nvmem_layout onie_tlv_layout = {
> - .name = "ONIE tlv layout",
> - .of_match_table = onie_tlv_of_match_table,
> - .add_cells = onie_tlv_parse_table,
> +static struct nvmem_layout_driver onie_tlv_layout = {
> + .driver = {
> + .name = "onie-tlv-layout",
> + .of_match_table = onie_tlv_of_match_table,
> + },
> + .probe = onie_tlv_probe,
> + .remove = onie_tlv_remove,
> };
> module_nvmem_layout_driver(onie_tlv_layout);
>
> diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
> index 05671371f631..ab4ceaf1ea16 100644
> --- a/drivers/nvmem/layouts/sl28vpd.c
> +++ b/drivers/nvmem/layouts/sl28vpd.c
> @@ -135,16 +135,31 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
> return 0;
> }
>
> +static int sl28vpd_probe(struct nvmem_layout *layout)
> +{
> + layout->add_cells = sl28vpd_add_cells;
> +
> + return nvmem_layout_register(layout);
> +}
> +
> +static void sl28vpd_remove(struct nvmem_layout *layout)
> +{
> + nvmem_layout_unregister(layout);
> +}
> +
> static const struct of_device_id sl28vpd_of_match_table[] = {
> { .compatible = "kontron,sl28-vpd" },
> {},
> };
> MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);
>
> -static struct nvmem_layout sl28vpd_layout = {
> - .name = "sl28-vpd",
> - .of_match_table = sl28vpd_of_match_table,
> - .add_cells = sl28vpd_add_cells,
> +static struct nvmem_layout_driver sl28vpd_layout = {
> + .driver = {
> + .name = "kontron-sl28vpd-layout",
> + .of_match_table = sl28vpd_of_match_table,
> + },
> + .probe = sl28vpd_probe,
> + .remove = sl28vpd_remove,
> };
> module_nvmem_layout_driver(sl28vpd_layout);
>
> diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
> index 2905f9e6fc2a..a0ea8326605a 100644
> --- a/include/linux/nvmem-provider.h
> +++ b/include/linux/nvmem-provider.h
> @@ -9,6 +9,7 @@
> #ifndef _LINUX_NVMEM_PROVIDER_H
> #define _LINUX_NVMEM_PROVIDER_H
>
> +#include <linux/device.h>
> #include <linux/device/driver.h>
> #include <linux/err.h>
> #include <linux/errno.h>
> @@ -154,15 +155,13 @@ struct nvmem_cell_table {
> /**
> * struct nvmem_layout - NVMEM layout definitions
> *
> - * @name: Layout name.
> - * @of_match_table: Open firmware match table.
> + * @dev: Device-model layout device.
> + * @nvmem: The underlying NVMEM device
> * @add_cells: Will be called if a nvmem device is found which
> * has this layout. The function will add layout
> * specific cells with nvmem_add_one_cell().
> * @fixup_cell_info: Will be called before a cell is added. Can be
> * used to modify the nvmem_cell_info.
> - * @owner: Pointer to struct module.
> - * @node: List node.
> *
> * A nvmem device can hold a well defined structure which can just be
> * evaluated during runtime. For example a TLV list, or a list of "name=val"
> @@ -170,17 +169,19 @@ struct nvmem_cell_table {
> * cells.
> */
> struct nvmem_layout {

Since this has become a device now, should we reflect this in the struct
name, e.g. nvmem_layout_dev, nvmem_ldev, nvm_ldev?

Regards,
Marco

> - const char *name;
> - const struct of_device_id *of_match_table;
> + struct device dev;
> + struct nvmem_device *nvmem;
> int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
> struct nvmem_layout *layout);
> void (*fixup_cell_info)(struct nvmem_device *nvmem,
> struct nvmem_layout *layout,
> struct nvmem_cell_info *cell);
> +};
>
> - /* private */
> - struct module *owner;
> - struct list_head node;
> +struct nvmem_layout_driver {
> + struct device_driver driver;
> + int (*probe)(struct nvmem_layout *layout);
> + void (*remove)(struct nvmem_layout *layout);
> };
>
> #if IS_ENABLED(CONFIG_NVMEM)
> @@ -197,11 +198,15 @@ void nvmem_del_cell_table(struct nvmem_cell_table *table);
> int nvmem_add_one_cell(struct nvmem_device *nvmem,
> const struct nvmem_cell_info *info);
>
> -int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner);
> -#define nvmem_layout_register(layout) \
> - __nvmem_layout_register(layout, THIS_MODULE)
> +int nvmem_layout_register(struct nvmem_layout *layout);
> void nvmem_layout_unregister(struct nvmem_layout *layout);
>
> +int nvmem_layout_driver_register(struct nvmem_layout_driver *drv);
> +void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv);
> +#define module_nvmem_layout_driver(__nvmem_layout_driver) \
> + module_driver(__nvmem_layout_driver, nvmem_layout_driver_register, \
> + nvmem_layout_driver_unregister)
> +
> const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> struct nvmem_layout *layout);
>
> @@ -257,9 +262,4 @@ static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_dev
> return NULL;
> }
> #endif /* CONFIG_NVMEM */
> -
> -#define module_nvmem_layout_driver(__layout_driver) \
> - module_driver(__layout_driver, nvmem_layout_register, \
> - nvmem_layout_unregister)
> -
> #endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
> --
> 2.34.1
>

2023-11-22 22:46:22

by Marco Felsch

[permalink] [raw]
Subject: Re: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

sorry for replying to my own mail, I forgot something I noticed later.

On 23-11-22, Marco Felsch wrote:
> Hi Miquel,
>
> thanks a lot for your effort on this. Please see my comments inline.
>
> On 23-10-11, Miquel Raynal wrote:
> > Current layout support was initially written without modules support in
> > mind. When the requirement for module support rose, the existing base
> > was improved to adopt modularization support, but kind of a design flaw
> > was introduced. With the existing implementation, when a storage device
> > registers into NVMEM, the core tries to hook a layout (if any) and
> > populates its cells immediately. This means, if the hardware description
> > expects a layout to be hooked up, but no driver was provided for that,
> > the storage medium will fail to probe and try later from
> > scratch. Even if we consider that the hardware description shall be
> > correct, we could still probe the storage device (especially if it
> > contains the rootfs).
> >
> > One way to overcome this situation is to consider the layouts as
> > devices, and leverage the existing notifier mechanism. When a new NVMEM
> > device is registered, we can:
> > - populate its nvmem-layout child, if any
> > - try to modprobe the relevant driver, if relevant

I'm not sure why we call of_request_module(); the driver framework should
handle that, right?

> > - try to hook the NVMEM device with a layout in the notifier

The last part is no longer true since you don't use the notifier
anymore.

> > And when a new layout is registered:
> > - try to hook all the existing NVMEM devices which are not yet hooked to
> > a layout with the new layout
> > This way, there is no strong order to enforce, any NVMEM device creation
> > or NVMEM layout driver insertion will be observed as a new event which
> > may lead to the creation of additional cells, without disturbing the
> > probes with costly (and sometimes endless) deferrals.
> >
> > In order to achieve that goal we need:
> > * To keep track of all nvmem devices
> > * To create a new bus for the nvmem-layouts with minimal logic to match
> > nvmem-layout devices with nvmem-layout drivers.
> > All this infrastructure code is created in the layouts.c file.
> >
> > Signed-off-by: Miquel Raynal <[email protected]>
> > Tested-by: Rafał Miłecki <[email protected]>

...

> > @@ -944,19 +872,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > goto err_put_device;
> > }
> >
> > - /*
> > - * If the driver supplied a layout by config->layout, the module
> > - * pointer will be NULL and nvmem_layout_put() will be a noop.
> > - */
> > - nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
> > - if (IS_ERR(nvmem->layout)) {
> > - rval = PTR_ERR(nvmem->layout);
> > - nvmem->layout = NULL;
> > -
> > - if (rval == -EPROBE_DEFER)
> > - goto err_teardown_compat;
> > - }

Since this logic will be gone and the layout has become a device, the fixup
hook on the layout is more than confusing. E.g. the imx-ocotp driver
uses the layout to register a fixup for a cell, which is fine, but the
hook should be moved from the layout[-dev] to the config. Please see
below.

> > -
> > if (config->cells) {
> > rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
> > if (rval)
> > @@ -975,7 +890,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > if (rval)
> > goto err_remove_cells;
> >
> > - rval = nvmem_add_cells_from_layout(nvmem);
> > + rval = nvmem_populate_layout(nvmem);
> > if (rval)
> > goto err_remove_cells;

Also, why do we populate the nvmem-layout device before the nvmem
device?

> >
> > @@ -983,16 +898,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> >
> > rval = device_add(&nvmem->dev);
> > if (rval)
> > - goto err_remove_cells;
> > + goto err_destroy_layout;
> > +
> >
> > blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
> >
> > return nvmem;
> >
> > +err_destroy_layout:
> > + nvmem_destroy_layout(nvmem);
> > err_remove_cells:
> > nvmem_device_remove_all_cells(nvmem);
> > - nvmem_layout_put(nvmem->layout);
> > -err_teardown_compat:
> > if (config->compat)
> > nvmem_sysfs_remove_compat(nvmem, config);
> > err_put_device:
> > @@ -1014,7 +930,7 @@ static void nvmem_device_release(struct kref *kref)
> > device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
> >
> > nvmem_device_remove_all_cells(nvmem);
> > - nvmem_layout_put(nvmem->layout);
> > + nvmem_destroy_layout(nvmem);
> > device_unregister(&nvmem->dev);
> > }
> >

...

> > diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
> > index 2905f9e6fc2a..a0ea8326605a 100644
> > --- a/include/linux/nvmem-provider.h
> > +++ b/include/linux/nvmem-provider.h
> > @@ -9,6 +9,7 @@
> > #ifndef _LINUX_NVMEM_PROVIDER_H
> > #define _LINUX_NVMEM_PROVIDER_H
> >
> > +#include <linux/device.h>
> > #include <linux/device/driver.h>
> > #include <linux/err.h>
> > #include <linux/errno.h>
> > @@ -154,15 +155,13 @@ struct nvmem_cell_table {
> > /**
> > * struct nvmem_layout - NVMEM layout definitions
> > *
> > - * @name: Layout name.
> > - * @of_match_table: Open firmware match table.
> > + * @dev: Device-model layout device.
> > + * @nvmem: The underlying NVMEM device
> > * @add_cells: Will be called if a nvmem device is found which
> > * has this layout. The function will add layout
> > * specific cells with nvmem_add_one_cell().
> > * @fixup_cell_info: Will be called before a cell is added. Can be
> > * used to modify the nvmem_cell_info.
> > - * @owner: Pointer to struct module.
> > - * @node: List node.
> > *
> > * A nvmem device can hold a well defined structure which can just be
> > * evaluated during runtime. For example a TLV list, or a list of "name=val"
> > @@ -170,17 +169,19 @@ struct nvmem_cell_table {
> > * cells.
> > */
> > struct nvmem_layout {
>
> Since this has become a device now, should we reflect this in the struct
> name, e.g. nvmem_layout_dev, nvmem_ldev, nvm_ldev?
>
> Regards,
> Marco
>
> > - const char *name;
> > - const struct of_device_id *of_match_table;
> > + struct device dev;
> > + struct nvmem_device *nvmem;
> > int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
> > struct nvmem_layout *layout);
> > void (*fixup_cell_info)(struct nvmem_device *nvmem,
> > struct nvmem_layout *layout,
> > struct nvmem_cell_info *cell);

I'm speaking about this hook. It should be moved into the config, and maybe
also renamed to fixup_dt_cell_info() or so, to not confuse the users. If
we move that hook and remove the add_cells hook, there are only two
members left for the cross-link.
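
Something along these lines in nvmem-provider.h (untested sketch, the name
fixup_dt_cell_info is only a suggestion):

	/* new member of struct nvmem_config */
	void (*fixup_dt_cell_info)(struct nvmem_device *nvmem,
				   struct nvmem_cell_info *cell);

so a provider such as imx-ocotp would set it in its nvmem_config instead of
going through a layout.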

Regards,
Marco

2023-11-24 19:22:05

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Hi Marco,

[email protected] wrote on Wed, 22 Nov 2023 23:02:40 +0100:

> Hi Miquel,
>
> thanks a lot for your effort on this. Please see my comments inline.

Thanks for your interesting feedback! I do agree with most of your
comments and will correct them for the next version.

> > +static int onie_tlv_probe(struct nvmem_layout *layout)
> > +{
> > + layout->add_cells = onie_tlv_parse_table;
>
> Nit: the add_cells call could be done here as well, same for the other
> layout. That would save us one indirection.

I prefer all the handling of the cells to be done in a generic place
like the core. In fact patch 5 adds something to this indirection.

...

> > /**
> > * struct nvmem_layout - NVMEM layout definitions
> > *
> > - * @name: Layout name.
> > - * @of_match_table: Open firmware match table.
> > + * @dev: Device-model layout device.
> > + * @nvmem: The underlying NVMEM device
> > * @add_cells: Will be called if a nvmem device is found which
> > * has this layout. The function will add layout
> > * specific cells with nvmem_add_one_cell().
> > * @fixup_cell_info: Will be called before a cell is added. Can be
> > * used to modify the nvmem_cell_info.
> > - * @owner: Pointer to struct module.
> > - * @node: List node.
> > *
> > * A nvmem device can hold a well defined structure which can just be
> > * evaluated during runtime. For example a TLV list, or a list of "name=val"
> > @@ -170,17 +169,19 @@ struct nvmem_cell_table {
> > * cells.
> > */
> > struct nvmem_layout {
>
> Since this has become a device now, should we reflect this in the struct
> name, e.g. nvmem_layout_dev, nvmem_ldev, nvm_ldev?

I'd say it is a matter of taste; in general I don't much like the _dev
suffix. We handle nvmem layout drivers and nvmem layouts, like we
have joystick drivers and joysticks, so I don't feel the need for a
suffix. I would not oppose someone renaming this structure, though.

> Regards,
> Marco
>

I'm fine with all your other comments and will do my best to address
them.

Thanks,
Miquèl

2023-11-24 19:25:22

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v13 4/6] nvmem: core: Rework layouts to become regular devices

Hi Marco,

[email protected] wrote on Wed, 22 Nov 2023 23:45:53 +0100:

> Hi Miquel,
>
> sorry for replying to my own mail, I forgot something I noticed later.

No problem :)

> On 23-11-22, Marco Felsch wrote:
> > Hi Miquel,
> >
> > thanks a lot for your effort on this. Please see my comments inline.
> >
> > On 23-10-11, Miquel Raynal wrote:
> > > Current layout support was initially written without modules support in
> > > mind. When the requirement for module support rose, the existing base
> > > was improved to adopt modularization support, but kind of a design flaw
> > > was introduced. With the existing implementation, when a storage device
> > > registers into NVMEM, the core tries to hook a layout (if any) and
> > > populates its cells immediately. This means, if the hardware description
> > > expects a layout to be hooked up, but no driver was provided for that,
> > > the storage medium will fail to probe and try later from
> > > scratch. Even if we consider that the hardware description shall be
> > > correct, we could still probe the storage device (especially if it
> > > contains the rootfs).
> > >
> > > One way to overcome this situation is to consider the layouts as
> > > devices, and leverage the existing notifier mechanism. When a new NVMEM
> > > device is registered, we can:
> > > - populate its nvmem-layout child, if any
> > > - try to modprobe the relevant driver, if relevant
>
> I'm not sure why we call of_request_module(); the driver framework should
> handle that, right?

Actually that's right, it is no longer needed; we would expect udev to
do that now. Thanks for the pointer.

> > > - try to hook the NVMEM device with a layout in the notifier
>
> The last part is no longer true since you don't use the notifier
> anymore.

True, I've re-written this part.

> > > And when a new layout is registered:
> > > - try to hook all the existing NVMEM devices which are not yet hooked to
> > > a layout with the new layout
> > > This way, there is no strong order to enforce, any NVMEM device creation
> > > or NVMEM layout driver insertion will be observed as a new event which
> > > may lead to the creation of additional cells, without disturbing the
> > > probes with costly (and sometimes endless) deferrals.
> > >
> > > In order to achieve that goal we need:
> > > * To keep track of all nvmem devices
> > > * To create a new bus for the nvmem-layouts with minimal logic to match
> > > nvmem-layout devices with nvmem-layout drivers.
> > > All this infrastructure code is created in the layouts.c file.
> > >
> > > Signed-off-by: Miquel Raynal <[email protected]>
> > > Tested-by: Rafał Miłecki <[email protected]>
>
> ...
>
> > > @@ -944,19 +872,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > > goto err_put_device;
> > > }
> > >
> > > - /*
> > > - * If the driver supplied a layout by config->layout, the module
> > > - * pointer will be NULL and nvmem_layout_put() will be a noop.
> > > - */
> > > - nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
> > > - if (IS_ERR(nvmem->layout)) {
> > > - rval = PTR_ERR(nvmem->layout);
> > > - nvmem->layout = NULL;
> > > -
> > > - if (rval == -EPROBE_DEFER)
> > > - goto err_teardown_compat;
> > > - }
>
> Since this logic will be gone and the layout has become a device, the fixup
> hook on the layout is more than confusing. E.g. the imx-ocotp driver
> uses the layout to register a fixup for a cell, which is fine, but the
> hook should be moved from the layout[-dev] to the config. Please see
> below.

That is true.

>
> > > -
> > > if (config->cells) {
> > > rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
> > > if (rval)
> > > @@ -975,7 +890,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> > > if (rval)
> > > goto err_remove_cells;
> > >
> > > - rval = nvmem_add_cells_from_layout(nvmem);
> > > + rval = nvmem_populate_layout(nvmem);
> > > if (rval)
> > > goto err_remove_cells;
>
> Also, why do we populate the nvmem-layout device before the nvmem
> device?

I'm not sure I get the question; there is nothing abnormal that stands
out to me.

...

> >
> > > - const char *name;
> > > - const struct of_device_id *of_match_table;
> > > + struct device dev;
> > > + struct nvmem_device *nvmem;
> > > int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
> > > struct nvmem_layout *layout);
> > > void (*fixup_cell_info)(struct nvmem_device *nvmem,
> > > struct nvmem_layout *layout,
> > > struct nvmem_cell_info *cell);
>
> I'm speaking about this hook. It should be moved into the config, and maybe
> also renamed to fixup_dt_cell_info() or so, to not confuse the users. If
> we move that hook and remove the add_cells hook, there are only two
> members left for the cross-link.

It's not a bad idea, I've included this change in my series (for v14,
sic). I like your rename as well. Thanks for the hint.

Thanks,
Miquèl