2023-10-05 16:21:19

by Miquel Raynal

Subject: [PATCH v12 0/7] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices whose content is not at static locations. One common example of
an NVMEM cell is the MAC address that a network interface must use.
Sometimes the cell content is mainly (or only) useful to the kernel,
and sometimes it is not. Users might also want to know the content of
cells such as: the manufacturing place and date, the hardware version,
the unique ID, etc. There are two possibilities in this case: either
the users re-implement their own parser to go through the whole device
and search for the information they want, or the kernel exposes the
content of the cells when deemed relevant. The second approach sounds
way more relevant than the first one, as it avoids useless code
duplication, so here is a series bringing NVMEM cell content to the
user through sysfs.

Here is a real life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code@77
00000000 54 57 |TW|
crc32@88
00000000 bb cd 51 98 |..Q.|
device-version@49
00000000 02 |.|
diag-version@80
00000000 56 31 2e 30 2e 30 |V1.0.0|
label-revision@4c
00000000 44 31 |D1|
mac-address@2c
00000000 18 be 92 13 9a 00 |......|
manufacture-date@34
00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
manufacturer@72
00000000 44 4e 49 |DNI|
num-macs@6e
00000000 00 40 |.@|
onie-version@61
00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
platform-name@50
00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
product-name@d
00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
serial-number@19
00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
vendor@7b
00000000 44 4e 49 |DNI|
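
For reference, a minimal userland sketch (not part of this series) that
reads one of these cells programmatically; the path matches the TN48m
example above and would need adjusting for other devices:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path =
		"/sys/bus/nvmem/devices/1-00563/cells/serial-number@19";
	unsigned char buf[64];
	size_t len;
	FILE *f;

	f = fopen(path, "rb");
	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}

	/* Cells are exposed as raw binary; this one happens to be ASCII */
	len = fread(buf, 1, sizeof(buf), f);
	fclose(f);

	printf("%.*s\n", (int)len, buf);
	return EXIT_SUCCESS;
}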

Current support does not include:
* The knowledge of the type of data (binary vs. ASCII), so by default
all cells are exposed in binary form.
* Write support.

Changes in v12:
* Fixed the issues reported by kernel test robot.
* Reworked the registration of layout devices even more deeply and
dropped all the search and matching code that was previously needed,
as suggested by Srinivas. This way, we no longer use the notifiers.

Changes in v11:
* The nvmem layouts are now regular devices and not platform devices
anymore. They are registered on the new nvmem-layout bus (so a new
/sys/bus/nvmem-layouts entry gets created). All the code for this new
bus is located in drivers/nvmem/layouts.c and is part of the main
core. The core device-driver logic applies without too much additional
code besides the registration of the bus and a bit of glue. I see no
need for more detailed structures for now but this can be improved
later as needed.

Changes in v10:
* All preparation patches have been picked up by Srinivas.
* Rebased on top of v6.6-rc1.
* Fix an error path in the probe due to the recent additions.

Changes in v9:
* Hopefully fixed the creation of sysfs entries when describing the
cells using the legacy layout, as reported by Chen-Yu.
* Dropped the nvmem-specific device list and used the driver core list
instead as advised by Greg.

Changes in v8:
* Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
* Add a patch to return NULL when no layout is found (reported by Dan
Carpenter).
* Fixed the documentation as well as the cover letter regarding the
addition of addresses in the cell names.

Changes in v7:
* Rework the layouts registration mechanism to use the platform devices
logic.
* Fix the two issues reported by Daniel Golle and Chen-Yu Tsai, one of
them consisting in suffixing '@<offset>' to the cell names to create the
sysfs files in order to be sure they are all unique.
* Update the doc.

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
s|cells/ folder|"cells" folder|
Missing period at the end of the final note.
s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission, both are absolutely
identical otherwise.

Changes in v5:
* Rebased on last -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Provide sysfs attributes a private member which is the entry itself to
avoid the need for looking up the nvmem device and then looping over
all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
core.
* Patch 2 is new: Don't error out when an attribute group does not
contain any attributes; it's easier for developers to handle "empty"
directories this way. It avoids strange/bad solutions being
implemented and does not cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs core
checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention the cells might become writable in the future in the
ABI documentation.
* Fix a wrong return value reported by Dan and kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.

Miquel Raynal (7):
of: device: Export of_device_make_bus_id()
nvmem: Clarify the situation when there is no DT node available
nvmem: Move of_nvmem_layout_get_container() in another header
nvmem: Create a header for internal sharing
nvmem: core: Rework layouts to become regular devices
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 288 +++++++++++---------
drivers/nvmem/internals.h | 58 ++++
drivers/nvmem/layouts.c | 201 ++++++++++++++
drivers/nvmem/layouts/onie-tlv.c | 36 ++-
drivers/nvmem/layouts/sl28vpd.c | 36 ++-
drivers/of/device.c | 41 +++
drivers/of/platform.c | 40 ---
include/linux/nvmem-consumer.h | 7 -
include/linux/nvmem-provider.h | 39 ++-
include/linux/of_device.h | 6 +
12 files changed, 581 insertions(+), 194 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
create mode 100644 drivers/nvmem/internals.h
create mode 100644 drivers/nvmem/layouts.c

--
2.34.1


2023-10-05 16:23:52

by Miquel Raynal

Subject: [PATCH v12 6/7] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user, so in the
easiest cases, finding the content of a cell is rather straightforward
as it is just a matter of looking at a known and fixed offset. However,
nvmem layouts have been recently introduced to cope with more advanced
situations, where the offset and size of the cells are not known in
advance or are dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of each
cell regardless of its position/size in the underlying device, but this
information was not accessible to the user.

By exposing the nvmem cells to the user through a dedicated cells/
folder containing one file per cell, we provide straightforward access
to useful user information without the need for rewriting a userland
parser. The content of nvmem cells is usually: product names,
manufacturing dates, MAC addresses, etc.
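
As a purely illustrative aside, one plausible shape for the kernel-side
read hook behind each cell file could look like the sketch below. This
is an assumption, not necessarily how the core patch implements it; it
supposes the cell handle is stashed in the attribute's ->private field
(as hinted by the v4 changelog) and reuses the consumer API:

#include <linux/fs.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>
#include <linux/sysfs.h>

static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
				    struct bin_attribute *attr, char *buf,
				    loff_t pos, size_t count)
{
	struct nvmem_cell *cell = attr->private;	/* assumed wiring */
	void *content;
	ssize_t ret;
	size_t len;

	content = nvmem_cell_read(cell, &len);
	if (IS_ERR(content))
		return PTR_ERR(content);

	ret = memory_read_from_buffer(buf, count, &pos, content, len);
	kfree(content);

	return ret;
}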

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..7af70adf3690
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,21 @@
+What: /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date: May 2023
+KernelVersion: 6.5
+Contact: Miquel Raynal <[email protected]>
+Description:
+ The "cells" folder contains one file per cell exposed by the
+ NVMEM device. The name of the file is: <name>@<where>, with
+ <name> being the cell name and <where> its location in the NVMEM
+ device, in hexadecimal (without the '0x' prefix, to mimic device
+ tree node names). The length of the file is the size of the cell
+ (when known). The content of the file is the binary content of
+ the cell (may sometimes be ASCII, likely without trailing
+ character).
+ Note: This file is only present if CONFIG_NVMEM_SYSFS
+ is enabled.
+
+ Example::
+
+ hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d
+ 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
+ 0000000a
--
2.34.1

2023-10-05 16:24:00

by Miquel Raynal

Subject: [PATCH v12 1/7] of: device: Export of_device_make_bus_id()

This helper is really handy to create unique device names based on their
device tree path, we may need it outside of the OF core (in the NVMEM
subsystem) so let's export it. As this helper has nothing patform
specific, let's move it to of/device.c instead of of/platform.c so we
can add its prototype to of_device.h.
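
As a hedged illustration of the kind of user this export enables (all
names below are made up): a subsystem initializing a bare struct device
from a DT node and letting the helper derive a unique name from the
node's "reg" property and path:

#include <linux/device.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/slab.h>

static void example_release(struct device *dev)
{
	of_node_put(dev->of_node);
	kfree(dev);
}

static int example_add_child(struct device *parent, struct device_node *np)
{
	struct device *dev;
	int ret;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return -ENOMEM;

	device_initialize(dev);
	dev->parent = parent;
	dev->release = example_release;
	device_set_node(dev, of_fwnode_handle(of_node_get(np)));

	/* Derive a unique device name such as "58000000.<node-name>" */
	of_device_make_bus_id(dev);

	ret = device_add(dev);
	if (ret)
		put_device(dev);

	return ret;
}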

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/of/device.c | 41 +++++++++++++++++++++++++++++++++++++++
drivers/of/platform.c | 40 --------------------------------------
include/linux/of_device.h | 6 ++++++
3 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1ca42ad9dd15..6e9572c4af83 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -304,3 +304,44 @@ int of_device_uevent_modalias(const struct device *dev, struct kobj_uevent_env *
return 0;
}
EXPORT_SYMBOL_GPL(of_device_uevent_modalias);
+
+/**
+ * of_device_make_bus_id - Use the device node data to assign a unique name
+ * @dev: pointer to device structure that is linked to a device tree node
+ *
+ * This routine will first try using the translated bus address to
+ * derive a unique name. If it cannot, then it will prepend names from
+ * parent nodes until a unique name can be derived.
+ */
+void of_device_make_bus_id(struct device *dev)
+{
+ struct device_node *node = dev->of_node;
+ const __be32 *reg;
+ u64 addr;
+ u32 mask;
+
+ /* Construct the name, using parent nodes if necessary to ensure uniqueness */
+ while (node->parent) {
+ /*
+ * If the address can be translated, then that is as much
+ * uniqueness as we need. Make it the first component and return
+ */
+ reg = of_get_property(node, "reg", NULL);
+ if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
+ if (!of_property_read_u32(node, "mask", &mask))
+ dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
+ addr, ffs(mask) - 1, node, dev_name(dev));
+
+ else
+ dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
+ addr, node, dev_name(dev));
+ return;
+ }
+
+ /* format arguments only used if dev_name() resolves to NULL */
+ dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
+ kbasename(node->full_name), dev_name(dev));
+ node = node->parent;
+ }
+}
+EXPORT_SYMBOL_GPL(of_device_make_bus_id);
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index f235ab55b91e..be32e28c6f55 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -97,46 +97,6 @@ static const struct of_device_id of_skipped_node_table[] = {
* mechanism for creating devices from device tree nodes.
*/

-/**
- * of_device_make_bus_id - Use the device node data to assign a unique name
- * @dev: pointer to device structure that is linked to a device tree node
- *
- * This routine will first try using the translated bus address to
- * derive a unique name. If it cannot, then it will prepend names from
- * parent nodes until a unique name can be derived.
- */
-static void of_device_make_bus_id(struct device *dev)
-{
- struct device_node *node = dev->of_node;
- const __be32 *reg;
- u64 addr;
- u32 mask;
-
- /* Construct the name, using parent nodes if necessary to ensure uniqueness */
- while (node->parent) {
- /*
- * If the address can be translated, then that is as much
- * uniqueness as we need. Make it the first component and return
- */
- reg = of_get_property(node, "reg", NULL);
- if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
- if (!of_property_read_u32(node, "mask", &mask))
- dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
- addr, ffs(mask) - 1, node, dev_name(dev));
-
- else
- dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
- addr, node, dev_name(dev));
- return;
- }
-
- /* format arguments only used if dev_name() resolves to NULL */
- dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
- kbasename(node->full_name), dev_name(dev));
- node = node->parent;
- }
-}
-
/**
* of_device_alloc - Allocate and initialize an of_device
* @np: device node to assign to device
diff --git a/include/linux/of_device.h b/include/linux/of_device.h
index 2c7a3d4bc775..a72661e47faa 100644
--- a/include/linux/of_device.h
+++ b/include/linux/of_device.h
@@ -40,6 +40,9 @@ static inline int of_dma_configure(struct device *dev,
{
return of_dma_configure_id(dev, np, force_dma, NULL);
}
+
+void of_device_make_bus_id(struct device *dev);
+
#else /* CONFIG_OF */

static inline int of_driver_match_device(struct device *dev,
@@ -82,6 +85,9 @@ static inline int of_dma_configure(struct device *dev,
{
return 0;
}
+
+static inline void of_device_make_bus_id(struct device *dev) {}
+
#endif /* CONFIG_OF */

#endif /* _LINUX_OF_DEVICE_H */
--
2.34.1

2023-10-05 16:24:18

by Miquel Raynal

Subject: [PATCH v12 4/7] nvmem: Create a header for internal sharing

Before adding all the NVMEM layout bus infrastructure to the core, let's
move the main nvmem_device structure into an internal header, only
available to the core. This way all the additional code can be added in
a dedicated file in order to keep the current core file tidy.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 24 +-----------------------
drivers/nvmem/internals.h | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 36 insertions(+), 23 deletions(-)
create mode 100644 drivers/nvmem/internals.h

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index c63057a7a3b8..073fe4a73e37 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -19,29 +19,7 @@
#include <linux/of.h>
#include <linux/slab.h>

-struct nvmem_device {
- struct module *owner;
- struct device dev;
- int stride;
- int word_size;
- int id;
- struct kref refcnt;
- size_t size;
- bool read_only;
- bool root_only;
- int flags;
- enum nvmem_type type;
- struct bin_attribute eeprom;
- struct device *base_dev;
- struct list_head cells;
- const struct nvmem_keepout *keepout;
- unsigned int nkeepout;
- nvmem_reg_read_t reg_read;
- nvmem_reg_write_t reg_write;
- struct gpio_desc *wp_gpio;
- struct nvmem_layout *layout;
- void *priv;
-};
+#include "internals.h"

#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
new file mode 100644
index 000000000000..ce353831cd65
--- /dev/null
+++ b/drivers/nvmem/internals.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_NVMEM_INTERNALS_H
+#define _LINUX_NVMEM_INTERNALS_H
+
+#include <linux/device.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+
+struct nvmem_device {
+ struct module *owner;
+ struct device dev;
+ struct list_head node;
+ int stride;
+ int word_size;
+ int id;
+ struct kref refcnt;
+ size_t size;
+ bool read_only;
+ bool root_only;
+ int flags;
+ enum nvmem_type type;
+ struct bin_attribute eeprom;
+ struct device *base_dev;
+ struct list_head cells;
+ const struct nvmem_keepout *keepout;
+ unsigned int nkeepout;
+ nvmem_reg_read_t reg_read;
+ nvmem_reg_write_t reg_write;
+ struct gpio_desc *wp_gpio;
+ struct nvmem_layout *layout;
+ void *priv;
+};
+
+#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
--
2.34.1

2023-10-05 16:43:53

by Miquel Raynal

Subject: [PATCH v12 3/7] nvmem: Move of_nvmem_layout_get_container() in another header

nvmem-consumer.h is included by consumer devices, extracting data from
NVMEM devices whereas nvmem-provider.h is included by devices providing
NVMEM content.

The only users of of_nvmem_layout_get_container() outside of the core
are layout drivers, so better move its prototype to nvmem-provider.h.

While we do so, we also move the kdoc associated with the function to
the header rather than the .c file.
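
For context, a short sketch (illustrative only) of the provider-side
usage this move targets, i.e. a layout driver grabbing the
"nvmem-layout" container node before parsing it:

#include <linux/nvmem-provider.h>
#include <linux/of.h>

static int example_layout_parse(struct nvmem_device *nvmem)
{
	struct device_node *layout_np;

	layout_np = of_nvmem_layout_get_container(nvmem);
	if (!layout_np)
		return 0;	/* No "nvmem-layout" container described */

	/* ... walk the node and add cells with nvmem_add_one_cell() ... */

	of_node_put(layout_np);	/* The refcount was incremented for us */
	return 0;
}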

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 8 --------
include/linux/nvmem-consumer.h | 7 -------
include/linux/nvmem-provider.h | 14 ++++++++++++++
3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 286efd3f5a31..c63057a7a3b8 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -844,14 +844,6 @@ static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
}

#if IS_ENABLED(CONFIG_OF)
-/**
- * of_nvmem_layout_get_container() - Get OF node to layout container.
- *
- * @nvmem: nvmem device.
- *
- * Return: a node pointer with refcount incremented or NULL if no
- * container exists. Use of_node_put() on it when done.
- */
struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
{
return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index 4523e4e83319..960728b10a11 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -241,7 +241,6 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id);
struct nvmem_device *of_nvmem_device_get(struct device_node *np,
const char *name);
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
#else
static inline struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id)
@@ -254,12 +253,6 @@ static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np,
{
return ERR_PTR(-EOPNOTSUPP);
}
-
-static inline struct device_node *
-of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return NULL;
-}
#endif /* CONFIG_NVMEM && CONFIG_OF */

#endif /* ifndef _LINUX_NVMEM_CONSUMER_H */
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index dae26295e6be..2905f9e6fc2a 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -205,6 +205,16 @@ void nvmem_layout_unregister(struct nvmem_layout *layout);
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

+/**
+ * of_nvmem_layout_get_container() - Get OF node of layout container
+ *
+ * @nvmem: nvmem device
+ *
+ * Return: a node pointer with refcount incremented or NULL if no
+ * container exists. Use of_node_put() on it when done.
+ */
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
+
#else

static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c)
@@ -242,6 +252,10 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
return NULL;
}

+static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+{
+ return NULL;
+}
#endif /* CONFIG_NVMEM */

#define module_nvmem_layout_driver(__layout_driver) \
--
2.34.1

2023-10-05 16:58:05

by Miquel Raynal

Subject: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Current layout support was initially written without module support in
mind. When the requirement for module support rose, the existing base
was improved to adopt modularization support, but a kind of design flaw
was introduced. With the existing implementation, when a storage device
registers into NVMEM, the core tries to hook a layout (if any) and
populate its cells immediately. This means that if the hardware
description expects a layout to be hooked up but no driver has been
provided for it, the storage medium will fail to probe and retry later
from scratch. Technically, the layouts are more like a "plus" and, even
if we consider that the hardware description shall be correct, we could
still probe the storage device (especially if it contains the rootfs).

One way to overcome this situation is to consider the layouts as
devices, and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the relevant driver, if needed
- try to hook the NVMEM device with a layout in the notifier
And when a new layout is registered:
- try to hook all the existing NVMEM devices which are not yet hooked to
a layout with the new layout
This way, there is no strong order to enforce: any NVMEM device creation
or NVMEM layout driver insertion will be observed as a new event which
may lead to the creation of additional cells, without disturbing the
probes with costly (and sometimes endless) deferrals.

In order to achieve that goal we need:
* To keep track of all nvmem devices
* To create a new bus for the nvmem-layouts with minimal logic to match
nvmem-layout devices with nvmem-layout drivers.
All this infrastructure code is created in the layouts.c file.
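
To make the new model concrete, here is a minimal layout driver
skeleton distilled from the onie-tlv/sl28vpd conversions below (the
"example" names are placeholders, not a driver from this series):

#include <linux/device.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>

static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	/* Parse the device content and register cells with nvmem_add_one_cell() */
	return 0;
}

static int example_probe(struct device *dev)
{
	struct nvmem_layout *layout;

	layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
	if (!layout)
		return -ENOMEM;

	layout->add_cells = example_add_cells;
	layout->dev = dev;
	dev_set_drvdata(dev, layout);

	return nvmem_layout_register(layout);
}

static int example_remove(struct device *dev)
{
	nvmem_layout_unregister(dev_get_drvdata(dev));

	return 0;
}

static const struct of_device_id example_of_match_table[] = {
	{ .compatible = "vendor,example-layout", },
	{}
};
MODULE_DEVICE_TABLE(of, example_of_match_table);

static struct nvmem_layout_driver example_layout = {
	.driver = {
		.name = "example-layout",
		.of_match_table = example_of_match_table,
		.probe = example_probe,
		.remove = example_remove,
	},
};
module_nvmem_layout_driver(example_layout);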

Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
---
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 126 +++++--------------
drivers/nvmem/internals.h | 22 ++++
drivers/nvmem/layouts.c | 201 +++++++++++++++++++++++++++++++
drivers/nvmem/layouts/onie-tlv.c | 36 +++++-
drivers/nvmem/layouts/sl28vpd.c | 36 +++++-
include/linux/nvmem-provider.h | 25 ++--
7 files changed, 331 insertions(+), 117 deletions(-)
create mode 100644 drivers/nvmem/layouts.c

diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
index 423baf089515..77be96076ea6 100644
--- a/drivers/nvmem/Makefile
+++ b/drivers/nvmem/Makefile
@@ -4,7 +4,7 @@
#

obj-$(CONFIG_NVMEM) += nvmem_core.o
-nvmem_core-y := core.o
+nvmem_core-y := core.o layouts.o
obj-y += layouts/

# Devices
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 073fe4a73e37..6c6b0bac24f5 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -55,9 +55,6 @@ static LIST_HEAD(nvmem_lookup_list);

static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);

-static DEFINE_SPINLOCK(nvmem_layout_lock);
-static LIST_HEAD(nvmem_layouts);
-
static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
void *val, size_t bytes)
{
@@ -744,91 +741,29 @@ static int nvmem_add_cells_from_fixed_layout(struct nvmem_device *nvmem)
return err;
}

-int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner)
+int nvmem_layout_register(struct nvmem_layout *layout)
{
- layout->owner = owner;
+ struct nvmem_device *nvmem = dev_get_platdata(layout->dev);

- spin_lock(&nvmem_layout_lock);
- list_add(&layout->node, &nvmem_layouts);
- spin_unlock(&nvmem_layout_lock);
+ if (!layout->add_cells)
+ return -EINVAL;

- blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_ADD, layout);
+ /* Link internally the nvmem device to its layout */
+ nvmem->layout = layout;

- return 0;
+ /* Populate the cells */
+ return nvmem->layout->add_cells(&nvmem->dev, nvmem, nvmem->layout);
}
-EXPORT_SYMBOL_GPL(__nvmem_layout_register);
+EXPORT_SYMBOL_GPL(nvmem_layout_register);

void nvmem_layout_unregister(struct nvmem_layout *layout)
{
- blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_REMOVE, layout);
+ struct nvmem_device *nvmem = dev_get_platdata(layout->dev);

- spin_lock(&nvmem_layout_lock);
- list_del(&layout->node);
- spin_unlock(&nvmem_layout_lock);
+ nvmem->layout = NULL;
}
EXPORT_SYMBOL_GPL(nvmem_layout_unregister);

-static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
-{
- struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
-
- layout_np = of_nvmem_layout_get_container(nvmem);
- if (!layout_np)
- return NULL;
-
- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
- spin_lock(&nvmem_layout_lock);
-
- list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
- if (try_module_get(l->owner))
- layout = l;
-
- break;
- }
- }
-
- spin_unlock(&nvmem_layout_lock);
- of_node_put(layout_np);
-
- return layout;
-}
-
-static void nvmem_layout_put(struct nvmem_layout *layout)
-{
- if (layout)
- module_put(layout->owner);
-}
-
-static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
-{
- struct nvmem_layout *layout = nvmem->layout;
- int ret;
-
- if (layout && layout->add_cells) {
- ret = layout->add_cells(&nvmem->dev, nvmem, layout);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-#if IS_ENABLED(CONFIG_OF)
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
-}
-EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
-#endif
-
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout)
{
@@ -836,7 +771,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;

layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev->driver->of_match_table, layout_np);

return match ? match->data : NULL;
}
@@ -947,19 +882,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}

- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -978,7 +900,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
+ rval = nvmem_populate_layout(nvmem);
if (rval)
goto err_remove_cells;

@@ -986,16 +908,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)

rval = device_add(&nvmem->dev);
if (rval)
- goto err_remove_cells;
+ goto err_destroy_layout;
+

blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

+err_destroy_layout:
+ nvmem_destroy_layout(nvmem);
err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -1017,7 +940,7 @@ static void nvmem_device_release(struct kref *kref)
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);

nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
+ nvmem_destroy_layout(nvmem);
device_unregister(&nvmem->dev);
}

@@ -2099,11 +2022,22 @@ EXPORT_SYMBOL_GPL(nvmem_dev_name);

static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ ret = nvmem_layout_bus_register();
+ if (ret)
+ bus_unregister(&nvmem_bus_type);
+
+ return ret;
}

static void __exit nvmem_exit(void)
{
+ nvmem_layout_bus_unregister();
bus_unregister(&nvmem_bus_type);
}

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
index ce353831cd65..10a317d46fb6 100644
--- a/drivers/nvmem/internals.h
+++ b/drivers/nvmem/internals.h
@@ -28,8 +28,30 @@ struct nvmem_device {
nvmem_reg_read_t reg_read;
nvmem_reg_write_t reg_write;
struct gpio_desc *wp_gpio;
+ struct device *layout_dev;
struct nvmem_layout *layout;
void *priv;
};

+#if IS_ENABLED(CONFIG_OF)
+int nvmem_layout_bus_register(void);
+void nvmem_layout_bus_unregister(void);
+int nvmem_populate_layout(struct nvmem_device *nvmem);
+void nvmem_destroy_layout(struct nvmem_device *nvmem);
+#else /* CONFIG_OF */
+static inline int nvmem_layout_bus_register(void)
+{
+ return 0;
+}
+
+static inline void nvmem_layout_bus_unregister(void) {}
+
+static inline int nvmem_populate_layout(struct nvmem_device *nvmem)
+{
+ return 0;
+}
+
+static inline int nvmem_destroy_layout(struct nvmem_device *nvmem) { }
+#endif /* CONFIG_OF */
+
#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
diff --git a/drivers/nvmem/layouts.c b/drivers/nvmem/layouts.c
new file mode 100644
index 000000000000..5f2ec4213469
--- /dev/null
+++ b/drivers/nvmem/layouts.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMEM layout bus handling
+ *
+ * Copyright (C) 2023 Bootlin
+ * Author: Miquel Raynal <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+
+#include "internals.h"
+
+#if IS_ENABLED(CONFIG_OF)
+static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
+{
+ return of_driver_match_device(dev, drv);
+}
+
+static struct bus_type nvmem_layout_bus_type = {
+ .name = "nvmem-layouts",
+ .match = nvmem_layout_bus_match,
+};
+
+static struct device nvmem_layout_bus = {
+ .init_name = "nvmem-layouts",
+};
+
+int nvmem_layout_driver_register(struct nvmem_layout_driver *drv)
+{
+ drv->driver.bus = &nvmem_layout_bus_type;
+
+ return driver_register(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(nvmem_layout_driver_register);
+
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv)
+{
+ driver_unregister(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(nvmem_layout_driver_unregister);
+
+static void nvmem_layout_device_release(struct device *dev)
+{
+ of_node_put(dev->of_node);
+ kfree(dev);
+}
+
+static int nvmem_layout_create_device(struct nvmem_device *nvmem,
+ struct device_node *np)
+{
+ struct device *dev;
+ int ret;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ device_initialize(dev);
+ dev->parent = &nvmem_layout_bus;
+ dev->bus = &nvmem_layout_bus_type;
+ dev->release = nvmem_layout_device_release;
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->dma_mask = &dev->coherent_dma_mask;
+ dev->platform_data = nvmem;
+ device_set_node(dev, of_fwnode_handle(of_node_get(np)));
+ of_device_make_bus_id(dev);
+ of_msi_configure(dev, dev->of_node);
+
+ ret = device_add(dev);
+ if (ret) {
+ put_device(dev);
+ return ret;
+ }
+
+ nvmem->layout_dev = dev;
+
+ return 0;
+}
+
+static const struct of_device_id of_nvmem_layout_skip_table[] = {
+ { .compatible = "fixed-layout", },
+ {}
+};
+
+static int nvmem_layout_bus_populate(struct nvmem_device *nvmem,
+ struct device_node *layout_dn)
+{
+ int ret;
+
+ /* Make sure it has a compatible property */
+ if (!of_get_property(layout_dn, "compatible", NULL)) {
+ pr_debug("%s() - skipping %pOF, no compatible prop\n",
+ __func__, layout_dn);
+ return 0;
+ }
+
+ /* Fixed layouts are parsed manually somewhere else for now */
+ if (of_match_node(of_nvmem_layout_skip_table, layout_dn)) {
+ pr_debug("%s() - skipping %pOF node\n", __func__, layout_dn);
+ return 0;
+ }
+
+ if (of_node_check_flag(layout_dn, OF_POPULATED_BUS)) {
+ pr_debug("%s() - skipping %pOF, already populated\n",
+ __func__, layout_dn);
+ return 0;
+ }
+
+ /* NVMEM layout buses expect only a single device representing the layout */
+ ret = nvmem_layout_create_device(nvmem, layout_dn);
+ if (ret)
+ return ret;
+
+ of_node_set_flag(layout_dn, OF_POPULATED_BUS);
+
+ return 0;
+}
+
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+{
+ return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
+}
+EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
+
+/*
+ * Returns the number of devices populated, 0 if the operation was not relevant
+ * for this nvmem device, an error code otherwise.
+ */
+int nvmem_populate_layout(struct nvmem_device *nvmem)
+{
+ struct device_node *nvmem_dn, *layout_dn;
+ int ret;
+
+ layout_dn = of_nvmem_layout_get_container(nvmem);
+ if (!layout_dn)
+ return 0;
+
+ nvmem_dn = of_node_get(nvmem->dev.of_node);
+ if (!nvmem_dn) {
+ of_node_put(layout_dn);
+ return 0;
+ }
+
+ /* Ensure the layout driver is loaded */
+ of_request_module(layout_dn);
+
+ /* Populate the layout device */
+ device_links_supplier_sync_state_pause();
+ ret = nvmem_layout_bus_populate(nvmem, layout_dn);
+ device_links_supplier_sync_state_resume();
+
+ of_node_put(nvmem_dn);
+ of_node_put(layout_dn);
+ return ret;
+}
+
+void nvmem_destroy_layout(struct nvmem_device *nvmem)
+{
+ struct device_node *layout_dn;
+
+ layout_dn = of_nvmem_layout_get_container(nvmem);
+ if (!layout_dn)
+ return;
+
+ of_node_clear_flag(layout_dn, OF_POPULATED_BUS);
+ put_device(nvmem->layout_dev);
+
+ of_node_put(layout_dn);
+}
+
+int nvmem_layout_bus_register(void)
+{
+ int ret;
+
+ ret = device_register(&nvmem_layout_bus);
+ if (ret) {
+ put_device(&nvmem_layout_bus);
+ return ret;
+ }
+
+ ret = bus_register(&nvmem_layout_bus_type);
+ if (ret) {
+ device_unregister(&nvmem_layout_bus);
+ return ret;
+ }
+
+ return 0;
+}
+
+void nvmem_layout_bus_unregister(void)
+{
+ bus_unregister(&nvmem_layout_bus_type);
+ device_unregister(&nvmem_layout_bus);
+}
+#endif
diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
index 59fc87ccfcff..191b2540d347 100644
--- a/drivers/nvmem/layouts/onie-tlv.c
+++ b/drivers/nvmem/layouts/onie-tlv.c
@@ -226,16 +226,44 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int onie_tlv_probe(struct device *dev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = onie_tlv_parse_table;
+ layout->dev = dev;
+
+ dev_set_drvdata(dev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int onie_tlv_remove(struct device *dev)
+{
+ struct nvmem_layout *layout = dev_get_drvdata(dev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);

-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct nvmem_layout_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
+ },
};
module_nvmem_layout_driver(onie_tlv_layout);

diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
index 05671371f631..330badebfcf6 100644
--- a/drivers/nvmem/layouts/sl28vpd.c
+++ b/drivers/nvmem/layouts/sl28vpd.c
@@ -135,16 +135,44 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int sl28vpd_probe(struct device *dev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = sl28vpd_add_cells;
+ layout->dev = dev;
+
+ dev_set_drvdata(dev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int sl28vpd_remove(struct device *dev)
+{
+ struct nvmem_layout *layout = dev_get_drvdata(dev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);

-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct nvmem_layout_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
+ },
};
module_nvmem_layout_driver(sl28vpd_layout);

diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index 2905f9e6fc2a..10537abea008 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -154,8 +154,7 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
@@ -170,8 +169,7 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device *dev;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
@@ -183,6 +181,10 @@ struct nvmem_layout {
struct list_head node;
};

+struct nvmem_layout_driver {
+ struct device_driver driver;
+};
+
#if IS_ENABLED(CONFIG_NVMEM)

struct nvmem_device *nvmem_register(const struct nvmem_config *cfg);
@@ -197,11 +199,15 @@ void nvmem_del_cell_table(struct nvmem_cell_table *table);
int nvmem_add_one_cell(struct nvmem_device *nvmem,
const struct nvmem_cell_info *info);

-int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner);
-#define nvmem_layout_register(layout) \
- __nvmem_layout_register(layout, THIS_MODULE)
+int nvmem_layout_register(struct nvmem_layout *layout);
void nvmem_layout_unregister(struct nvmem_layout *layout);

+int nvmem_layout_driver_register(struct nvmem_layout_driver *drv);
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv);
+#define module_nvmem_layout_driver(__nvmem_layout_driver) \
+ module_driver(__nvmem_layout_driver, nvmem_layout_driver_register, \
+ nvmem_layout_driver_unregister)
+
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

@@ -257,9 +263,4 @@ static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_dev
return NULL;
}
#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
--
2.34.1

2023-10-05 16:59:45

by Miquel Raynal

Subject: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available

At a first look it might seem that the presence of the of_node pointer
in the nvmem device does not matter much, but in practice, after looking
deep into the DT core, nvmem_add_cells_from_dt() will simply and always
return NULL if this field is not provided. As most mtd devices don't
populate this field (this could evolve later), it means none of their
children cells will be populated unless no_of_node is explicitly set to
false. In order to clarify the logic, let's add a clear check at the
beginning of this helper.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index eaf6a3fe8ca6..286efd3f5a31 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -743,6 +743,9 @@ static int nvmem_add_cells_from_dt(struct nvmem_device *nvmem, struct device_nod

static int nvmem_add_cells_from_legacy_of(struct nvmem_device *nvmem)
{
+ if (!nvmem->dev.of_node)
+ return 0;
+
return nvmem_add_cells_from_dt(nvmem, nvmem->dev.of_node);
}

--
2.34.1

2023-10-06 11:49:25

by Rafał Miłecki

Subject: Re: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available

On 2023-10-05 17:59, Miquel Raynal wrote:
> At a first look it might seem that the presence of the of_node pointer
> in the nvmem device does not matter much, but in practice, after
> looking
> deep into the DT core, nvmem_add_cells_from_dt() will simply and always
> return NULL if this field is not provided. As most mtd devices don't
> populate this field (this could evolve later), it means none of their
> children cells will be populated unless no_of_node is explicitly set to
> false. In order to clarify the logic, let's add a clear check at the
> beginning of this helper.

I'm somehow confused by above explanation and code too. I read it
carefully 5 times but I can't see what exactly this change helps with.

At first look at nvmem_add_cells_from_legacy_of() I can see it uses
"of_node" so I don't really agree with "it might seem that the presence
of the of_node pointer in the nvmem device does not matter much".

You really don't need to look deep into DT core (actually you don't have
to look into it at all) to understand that nvmem_add_cells_from_dt()
will return 0 (nitpicking: not NULL) for a NULL pointer. It's all made
of for_each_child_of_node(). Obviously it does nothing if there is
nothing to loop over.

Given that for_each_child_of_node() is NULL-safe I think code from this
patch is redundant.
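
For illustration, a minimal sketch of that NULL-safety: the loop is
backed by of_get_next_child(), which returns NULL for a NULL parent, so
the body never runs:

#include <linux/of.h>

static int count_dt_children(struct device_node *parent)
{
	struct device_node *child;
	int n = 0;

	/* Never enters the loop body when parent is NULL */
	for_each_child_of_node(parent, child)
		n++;

	return n;
}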

Later you mention "no_of_node", which I agree is a very non-intuitive
config option. As pointed out in another thread, I already sent:
[PATCH] Revert "nvmem: add new config option"
https://lore.kernel.org/lkml/[email protected]/t/

Maybe with above patch finally things will get more clear and we don't
need this PATCH after all?


> Signed-off-by: Miquel Raynal <[email protected]>
> ---
> drivers/nvmem/core.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index eaf6a3fe8ca6..286efd3f5a31 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -743,6 +743,9 @@ static int nvmem_add_cells_from_dt(struct
> nvmem_device *nvmem, struct device_nod
>
> static int nvmem_add_cells_from_legacy_of(struct nvmem_device *nvmem)
> {
> + if (!nvmem->dev.of_node)
> + return 0;
> +
> return nvmem_add_cells_from_dt(nvmem, nvmem->dev.of_node);
> }

--
Rafał Miłecki

2023-10-06 12:00:03

by Rafał Miłecki

Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

On 2023-10-05 17:59, Miquel Raynal wrote:
> +static struct bus_type nvmem_layout_bus_type = {
> + .name = "nvmem-layouts",
> + .match = nvmem_layout_bus_match,
> +};
> +
> +static struct device nvmem_layout_bus = {
> + .init_name = "nvmem-layouts",
> +};

Nitpicking: would it be more consistent and still make sense to use
singular form "nvmem-layout"?

By looking at my /sys/bus/ I can see there:
1. cpu (not cpus)
2. gpio (not gpios)
3. node (not nodes)
4. nvmem (not nvmems)
etc.

--
Rafał Miłecki

2023-10-06 16:32:42

by Miquel Raynal

Subject: Re: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available

Hi Rafał,

[email protected] wrote on Fri, 06 Oct 2023 13:41:52 +0200:

> On 2023-10-05 17:59, Miquel Raynal wrote:
> > At a first look it might seem that the presence of the of_node pointer
> > in the nvmem device does not matter much, but in practice, after
> > looking
> > deep into the DT core, nvmem_add_cells_from_dt() will simply and always
> > return NULL if this field is not provided. As most mtd devices don't
> > populate this field (this could evolve later), it means none of their
> > children cells will be populated unless no_of_node is explicitly set to
> > false. In order to clarify the logic, let's add a clear check at the
> > beginning of this helper.
>
> I'm somehow confused by above explanation and code too. I read it
> carefully 5 times but I can't see what exactly this change helps with.
>
> At first look at nvmem_add_cells_from_legacy_of() I can see it uses
> "of_node" so I don't really agree with "it might seem that the presence
> of the of_node pointer in the nvmem device does not matter much".
>
> You really don't need to look deep into DT core (actually you don't have
> to look into it at all) to understand that nvmem_add_cells_from_dt()
> will return 0 (nitpicking: not NULL) for a NULL pointer. It's all made
> of for_each_child_of_node(). Obviously it does nothing if there is
> nothing to loop over.

That was not obvious to me as I thought it would start from /, which I
think some other functions do when you don't provide a start node.

> Given that for_each_child_of_node() is NULL-safe I think code from this
> patch is redundant.

I didn't say it was not safe, just not explicit.

> Later you mention "no_of_node" which I agree to be a very non-intuitive
> config option. As pointed in another thread I already sent:
> [PATCH] Revert "nvmem: add new config option"
> https://lore.kernel.org/lkml/[email protected]/t/

I actually wanted to find that patch again and could not get my hands on
it, but it is probably a much better fix than my other mtd patch, I
agree with you.

> Maybe with above patch finally things will get more clear and we don't
> need this PATCH after all?

Yes. Srinivas, what are your plans for the above patch?

Thanks,
Miquèl

2023-10-06 16:34:11

by Miquel Raynal

Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Rafał,

[email protected] wrote on Fri, 06 Oct 2023 13:49:49 +0200:

> On 2023-10-05 17:59, Miquel Raynal wrote:
> > +static struct bus_type nvmem_layout_bus_type = {
> > + .name = "nvmem-layouts",
> > + .match = nvmem_layout_bus_match,
> > +};
> > +
> > +static struct device nvmem_layout_bus = {
> > + .init_name = "nvmem-layouts",
> > +};
>
> Nitpicking: would it be more consistent and still make sense to use
> singular form "nvmem-layout"?
>
> By looking at my /sys/bus/ I can see there:
> 1. cpu (not cpus)
> 2. gpio (not gpios)
> 3. node (not nodes)
> 4. nvmem (not nvmems)
> etc.
>

Probably, yes. I will wait for more feedback on this series but I'm
fine with the renaming you proposed, makes sense.

Thanks,
Miquèl

2023-10-06 17:03:07

by Rob Herring (Arm)

Subject: Re: [PATCH v12 1/7] of: device: Export of_device_make_bus_id()


On Thu, 05 Oct 2023 17:59:01 +0200, Miquel Raynal wrote:
> This helper is really handy to create unique device names based on their
> device tree path, we may need it outside of the OF core (in the NVMEM
> subsystem) so let's export it. As this helper has nothing platform
> specific, let's move it to of/device.c instead of of/platform.c so we
> can add its prototype to of_device.h.
>
> Signed-off-by: Miquel Raynal <[email protected]>
> ---
> drivers/of/device.c | 41 +++++++++++++++++++++++++++++++++++++++
> drivers/of/platform.c | 40 --------------------------------------
> include/linux/of_device.h | 6 ++++++
> 3 files changed, 47 insertions(+), 40 deletions(-)
>

Acked-by: Rob Herring <[email protected]>

2023-10-07 16:32:02

by Greg Kroah-Hartman

Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

On Thu, Oct 05, 2023 at 05:59:05PM +0200, Miquel Raynal wrote:
> --- a/drivers/nvmem/internals.h
> +++ b/drivers/nvmem/internals.h
> @@ -28,8 +28,30 @@ struct nvmem_device {
> nvmem_reg_read_t reg_read;
> nvmem_reg_write_t reg_write;
> struct gpio_desc *wp_gpio;
> + struct device *layout_dev;
> struct nvmem_layout *layout;
> void *priv;
> };

Wait, is this now 2 struct device in the same structure? Which one is
the "real" owner of this structure? Why is a pointer to layout_dev
needed here as a "struct device" and not a real "struct
nvmem_layout_device" or whatever it's called?

> struct nvmem_layout {
> - const char *name;
> - const struct of_device_id *of_match_table;
> + struct device *dev;

Shouldn't this be a "real" struct device and not just a pointer? If
not, what does this point to? Who owns the reference to it?

thanks,

greg k-h

2023-10-07 16:58:38

by Rafał Miłecki

Subject: Re: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available

One comment below

On 2023-10-06 18:32, Miquel Raynal wrote:
> [email protected] wrote on Fri, 06 Oct 2023 13:41:52 +0200:
>
>> On 2023-10-05 17:59, Miquel Raynal wrote:
>> > At a first look it might seem that the presence of the of_node pointer
>> > in the nvmem device does not matter much, but in practice, after
>> > looking
>> > deep into the DT core, nvmem_add_cells_from_dt() will simply and always
>> > return NULL if this field is not provided. As most mtd devices don't
>> > populate this field (this could evolve later), it means none of their
>> > children cells will be populated unless no_of_node is explicitly set to
>> > false. In order to clarify the logic, let's add a clear check at the
>> > beginning of this helper.
>>
>> I'm somehow confused by above explanation and code too. I read it
>> carefully 5 times but I can't see what exactly this change helps with.
>>
>> At first look at nvmem_add_cells_from_legacy_of() I can see it uses
>> "of_node" so I don't really agree with "it might seem that the
>> presence
>> of the of_node pointer in the nvmem device does not matter much".
>>
>> You really don't need to look deep into DT core (actually you don't
>> have
>> to look into it at all) to understand that nvmem_add_cells_from_dt()
>> will return 0 (nitpicking: not NULL) for a NULL pointer. It's all made
>> of for_each_child_of_node(). Obviously it does nothing if there is
>> nothing to loop over.
>
> That was not obvious to me as I thought it would start from /, which I
> think some other function do when you don't provide a start node.

What about documenting that function instead of adding redundant code?


>> Given that for_each_child_of_node() is NULL-safe I think code from
>> this
>> patch is redundant.
>
> I didn't say it was not safe, just not explicit.

--
Rafał Miłecki

2023-10-08 13:39:52

by Miquel Raynal

Subject: Re: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available

Hi Rafał,

[email protected] wrote on Sat, 07 Oct 2023 18:09:06 +0200:

> One comment below
>
> On 2023-10-06 18:32, Miquel Raynal wrote:
> > [email protected] wrote on Fri, 06 Oct 2023 13:41:52 +0200:
> >
> >> On 2023-10-05 17:59, Miquel Raynal wrote:
> >> > At a first look it might seem that the presence of the of_node pointer
> >> > in the nvmem device does not matter much, but in practice, after
> >> > looking
> >> > deep into the DT core, nvmem_add_cells_from_dt() will simply and always
> >> > return NULL if this field is not provided. As most mtd devices don't
> >> > populate this field (this could evolve later), it means none of their
> >> > children cells will be populated unless no_of_node is explicitly set to
> >> > false. In order to clarify the logic, let's add a clear check at the
> >> > beginning of this helper.
> >>
> >> I'm somehow confused by above explanation and code too. I read it
> >> carefully 5 times but I can't see what exactly this change helps with.
> >>
> >> At first look at nvmem_add_cells_from_legacy_of() I can see it uses
> >> "of_node" so I don't really agree with "it might seem that the
> >> presence
> >> of the of_node pointer in the nvmem device does not matter much".
> >>
> >> You really don't need to look deep into DT core (actually you don't
> >> have
> >> to look into it at all) to understand that nvmem_add_cells_from_dt()
> >> will return 0 (nitpicking: not NULL) for a NULL pointer. It's all made
> >> of for_each_child_of_node(). Obviously it does nothing if there is
> >> nothing to loop over.
> >
> > That was not obvious to me as I thought it would start from /, which I
> > think some other functions do when you don't provide a start node.
>
> What about documenting that function instead of adding redundant code?

Yeah, that would work as well. But I will just get rid of this: with
your other patch, which ensures that of_node will be there for mtd
devices, it's no longer relevant.

Thanks,
Miquèl

2023-10-08 13:43:13

by kernel test robot

Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

kernel test robot noticed the following build errors:

[auto build test ERROR on robh/for-next]
[also build test ERROR on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc4 next-20231006]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231006-000111
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231005155907.2701706-6-miquel.raynal%40bootlin.com
patch subject: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices
config: sh-se7705_defconfig (https://download.01.org/0day-ci/archive/20231008/[email protected]/config)
compiler: sh4-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231008/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

In file included from drivers/nvmem/core.c:22:
drivers/nvmem/internals.h: In function 'nvmem_destroy_layout':
>> drivers/nvmem/internals.h:54:47: error: no return statement in function returning non-void [-Werror=return-type]
54 | static inline int nvmem_destroy_layout(struct nvmem_device *nvmem) { }
| ^~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +54 drivers/nvmem/internals.h

53
> 54 static inline int nvmem_destroy_layout(struct nvmem_device *nvmem) { }
55 #endif /* CONFIG_OF */
56

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-09 09:45:30

by Srinivas Kandagatla

Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices



On 05/10/2023 16:59, Miquel Raynal wrote:
> Current layout support was initially written without modules support in
> mind. When the requirement for module support rose, the existing base
> was improved to adopt modularization support, but kind of a design flaw
> was introduced. With the existing implementation, when a storage device
> registers into NVMEM, the core tries to hook a layout (if any) and
> populates its cells immediately. This means, if the hardware description
> expects a layout to be hooked up, but no driver was provided for that,
> the storage medium will fail to probe and try later from
> scratch. Technically, the layouts are more like a "plus" and, even we

This is not true. As layouts are kind of resources for nvmem providers,
ideally the provider driver should defer if there is no matching layout
available.

Expressing this as a weak dependency is going to be an issue:

1. With creating the sysfs entries and user notifications
2. nvmem consumers will be in a confused state with provider registered
but without cells added yet.

--srini
> consider that the hardware description shall be correct, we could still
> probe the storage device (especially if it contains the rootfs).
>
> One way to overcome this situation is to consider the layouts as
> devices, and leverage the existing notifier mechanism. When a new NVMEM
> device is registered, we can:
> - populate its nvmem-layout child, if any
> - try to modprobe the relevant driver, if relevant
> - try to hook the NVMEM device with a layout in the notifier
> And when a new layout is registered:
> - try to hook all the existing NVMEM devices which are not yet hooked to
> a layout with the new layout
> This way, there is no strong order to enforce, any NVMEM device creation
> or NVMEM layout driver insertion will be observed as a new event which
> may lead to the creation of additional cells, without disturbing the
> probes with costly (and sometimes endless) deferrals.
>
> In order to achieve that goal we need:
> * To keep track of all nvmem devices
> * To create a new bus for the nvmem-layouts with minimal logic to match
> nvmem-layout devices with nvmem-layout drivers.
> All this infrastructure code is created in the layouts.c file.
>
> Signed-off-by: Miquel Raynal <[email protected]>
> Tested-by: Rafał Miłecki <[email protected]>
> ---
> drivers/nvmem/Makefile | 2 +-
> drivers/nvmem/core.c | 126 +++++--------------
> drivers/nvmem/internals.h | 22 ++++
> drivers/nvmem/layouts.c | 201 +++++++++++++++++++++++++++++++
> drivers/nvmem/layouts/onie-tlv.c | 36 +++++-
> drivers/nvmem/layouts/sl28vpd.c | 36 +++++-
> include/linux/nvmem-provider.h | 25 ++--
> 7 files changed, 331 insertions(+), 117 deletions(-)
> create mode 100644 drivers/nvmem/layouts.c
>
> diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
> index 423baf089515..77be96076ea6 100644
> --- a/drivers/nvmem/Makefile
> +++ b/drivers/nvmem/Makefile
> @@ -4,7 +4,7 @@
> #
>
> obj-$(CONFIG_NVMEM) += nvmem_core.o
> -nvmem_core-y := core.o
> +nvmem_core-y := core.o layouts.o
> obj-y += layouts/
>
> # Devices
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
> index 073fe4a73e37..6c6b0bac24f5 100644
> --- a/drivers/nvmem/core.c
> +++ b/drivers/nvmem/core.c
> @@ -55,9 +55,6 @@ static LIST_HEAD(nvmem_lookup_list);
>
> static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);
>
> -static DEFINE_SPINLOCK(nvmem_layout_lock);
> -static LIST_HEAD(nvmem_layouts);
> -
> static int __nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
> void *val, size_t bytes)
> {
> @@ -744,91 +741,29 @@ static int nvmem_add_cells_from_fixed_layout(struct nvmem_device *nvmem)
> return err;
> }
>
> -int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner)
> +int nvmem_layout_register(struct nvmem_layout *layout)
> {
> - layout->owner = owner;
> + struct nvmem_device *nvmem = dev_get_platdata(layout->dev);
>
> - spin_lock(&nvmem_layout_lock);
> - list_add(&layout->node, &nvmem_layouts);
> - spin_unlock(&nvmem_layout_lock);
> + if (!layout->add_cells)
> + return -EINVAL;
>
> - blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_ADD, layout);
> + /* Link internally the nvmem device to its layout */
> + nvmem->layout = layout;
>
> - return 0;
> + /* Populate the cells */
> + return nvmem->layout->add_cells(&nvmem->dev, nvmem, nvmem->layout);
> }
> -EXPORT_SYMBOL_GPL(__nvmem_layout_register);
> +EXPORT_SYMBOL_GPL(nvmem_layout_register);
>
> void nvmem_layout_unregister(struct nvmem_layout *layout)
> {
> - blocking_notifier_call_chain(&nvmem_notifier, NVMEM_LAYOUT_REMOVE, layout);
> + struct nvmem_device *nvmem = dev_get_platdata(layout->dev);
>
> - spin_lock(&nvmem_layout_lock);
> - list_del(&layout->node);
> - spin_unlock(&nvmem_layout_lock);
> + nvmem->layout = NULL;
> }
> EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
>
> -static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
> -{
> - struct device_node *layout_np;
> - struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
> -
> - layout_np = of_nvmem_layout_get_container(nvmem);
> - if (!layout_np)
> - return NULL;
> -
> - /*
> - * In case the nvmem device was built-in while the layout was built as a
> - * module, we shall manually request the layout driver loading otherwise
> - * we'll never have any match.
> - */
> - of_request_module(layout_np);
> -
> - spin_lock(&nvmem_layout_lock);
> -
> - list_for_each_entry(l, &nvmem_layouts, node) {
> - if (of_match_node(l->of_match_table, layout_np)) {
> - if (try_module_get(l->owner))
> - layout = l;
> -
> - break;
> - }
> - }
> -
> - spin_unlock(&nvmem_layout_lock);
> - of_node_put(layout_np);
> -
> - return layout;
> -}
> -
> -static void nvmem_layout_put(struct nvmem_layout *layout)
> -{
> - if (layout)
> - module_put(layout->owner);
> -}
> -
> -static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
> -{
> - struct nvmem_layout *layout = nvmem->layout;
> - int ret;
> -
> - if (layout && layout->add_cells) {
> - ret = layout->add_cells(&nvmem->dev, nvmem, layout);
> - if (ret)
> - return ret;
> - }
> -
> - return 0;
> -}
> -
> -#if IS_ENABLED(CONFIG_OF)
> -struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
> -{
> - return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
> -}
> -EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
> -#endif
> -
> const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> struct nvmem_layout *layout)
> {
> @@ -836,7 +771,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> const struct of_device_id *match;
>
> layout_np = of_nvmem_layout_get_container(nvmem);
> - match = of_match_node(layout->of_match_table, layout_np);
> + match = of_match_node(layout->dev->driver->of_match_table, layout_np);
>
> return match ? match->data : NULL;
> }
> @@ -947,19 +882,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> goto err_put_device;
> }
>
> - /*
> - * If the driver supplied a layout by config->layout, the module
> - * pointer will be NULL and nvmem_layout_put() will be a noop.
> - */
> - nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
> - if (IS_ERR(nvmem->layout)) {
> - rval = PTR_ERR(nvmem->layout);
> - nvmem->layout = NULL;
> -
> - if (rval == -EPROBE_DEFER)
> - goto err_teardown_compat;
> - }
> -
> if (config->cells) {
> rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
> if (rval)
> @@ -978,7 +900,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
> if (rval)
> goto err_remove_cells;
>
> - rval = nvmem_add_cells_from_layout(nvmem);
> + rval = nvmem_populate_layout(nvmem);
> if (rval)
> goto err_remove_cells;
>
> @@ -986,16 +908,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
>
> rval = device_add(&nvmem->dev);
> if (rval)
> - goto err_remove_cells;
> + goto err_destroy_layout;
> +
>
> blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
>
> return nvmem;
>
> +err_destroy_layout:
> + nvmem_destroy_layout(nvmem);
> err_remove_cells:
> nvmem_device_remove_all_cells(nvmem);
> - nvmem_layout_put(nvmem->layout);
> -err_teardown_compat:
> if (config->compat)
> nvmem_sysfs_remove_compat(nvmem, config);
> err_put_device:
> @@ -1017,7 +940,7 @@ static void nvmem_device_release(struct kref *kref)
> device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
>
> nvmem_device_remove_all_cells(nvmem);
> - nvmem_layout_put(nvmem->layout);
> + nvmem_destroy_layout(nvmem);
> device_unregister(&nvmem->dev);
> }
>
> @@ -2099,11 +2022,22 @@ EXPORT_SYMBOL_GPL(nvmem_dev_name);
>
> static int __init nvmem_init(void)
> {
> - return bus_register(&nvmem_bus_type);
> + int ret;
> +
> + ret = bus_register(&nvmem_bus_type);
> + if (ret)
> + return ret;
> +
> + ret = nvmem_layout_bus_register();
> + if (ret)
> + bus_unregister(&nvmem_bus_type);
> +
> + return ret;
> }
>
> static void __exit nvmem_exit(void)
> {
> + nvmem_layout_bus_unregister();
> bus_unregister(&nvmem_bus_type);
> }
>
> diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
> index ce353831cd65..10a317d46fb6 100644
> --- a/drivers/nvmem/internals.h
> +++ b/drivers/nvmem/internals.h
> @@ -28,8 +28,30 @@ struct nvmem_device {
> nvmem_reg_read_t reg_read;
> nvmem_reg_write_t reg_write;
> struct gpio_desc *wp_gpio;
> + struct device *layout_dev;
> struct nvmem_layout *layout;
> void *priv;
> };
>
> +#if IS_ENABLED(CONFIG_OF)
> +int nvmem_layout_bus_register(void);
> +void nvmem_layout_bus_unregister(void);
> +int nvmem_populate_layout(struct nvmem_device *nvmem);
> +void nvmem_destroy_layout(struct nvmem_device *nvmem);
> +#else /* CONFIG_OF */
> +static inline int nvmem_layout_bus_register(void)
> +{
> + return 0;
> +}
> +
> +static inline void nvmem_layout_bus_unregister(void) {}
> +
> +static inline int nvmem_populate_layout(struct nvmem_device *nvmem)
> +{
> + return 0;
> +}
> +
> +static inline int nvmem_destroy_layout(struct nvmem_device *nvmem) { }
> +#endif /* CONFIG_OF */
> +
> #endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
> diff --git a/drivers/nvmem/layouts.c b/drivers/nvmem/layouts.c
> new file mode 100644
> index 000000000000..5f2ec4213469
> --- /dev/null
> +++ b/drivers/nvmem/layouts.c
> @@ -0,0 +1,201 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * NVMEM layout bus handling
> + *
> + * Copyright (C) 2023 Bootlin
> + * Author: Miquel Raynal <[email protected]>
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/nvmem-consumer.h>
> +#include <linux/nvmem-provider.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/of_irq.h>
> +
> +#include "internals.h"
> +
> +#if IS_ENABLED(CONFIG_OF)
> +static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
> +{
> + return of_driver_match_device(dev, drv);
> +}
> +
> +static struct bus_type nvmem_layout_bus_type = {
> + .name = "nvmem-layouts",
> + .match = nvmem_layout_bus_match,
> +};
> +
> +static struct device nvmem_layout_bus = {
> + .init_name = "nvmem-layouts",
> +};
> +
> +int nvmem_layout_driver_register(struct nvmem_layout_driver *drv)
> +{
> + drv->driver.bus = &nvmem_layout_bus_type;
> +
> + return driver_register(&drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(nvmem_layout_driver_register);
> +
> +void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv)
> +{
> + driver_unregister(&drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(nvmem_layout_driver_unregister);
> +
> +static void nvmem_layout_device_release(struct device *dev)
> +{
> + of_node_put(dev->of_node);
> + kfree(dev);
> +}
> +
> +static int nvmem_layout_create_device(struct nvmem_device *nvmem,
> + struct device_node *np)
> +{
> + struct device *dev;
> + int ret;
> +
> + dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> + if (!dev)
> + return -ENOMEM;
> +
> + device_initialize(dev);
> + dev->parent = &nvmem_layout_bus;
> + dev->bus = &nvmem_layout_bus_type;
> + dev->release = nvmem_layout_device_release;
> + dev->coherent_dma_mask = DMA_BIT_MASK(32);
> + dev->dma_mask = &dev->coherent_dma_mask;
> + dev->platform_data = nvmem;
> + device_set_node(dev, of_fwnode_handle(of_node_get(np)));
> + of_device_make_bus_id(dev);
> + of_msi_configure(dev, dev->of_node);
> +
> + ret = device_add(dev);
> + if (ret) {
> + put_device(dev);
> + return ret;
> + }
> +
> + nvmem->layout_dev = dev;
> +
> + return 0;
> +}
> +
> +static const struct of_device_id of_nvmem_layout_skip_table[] = {
> + { .compatible = "fixed-layout", },
> + {}
> +};
> +
> +static int nvmem_layout_bus_populate(struct nvmem_device *nvmem,
> + struct device_node *layout_dn)
> +{
> + int ret;
> +
> + /* Make sure it has a compatible property */
> + if (!of_get_property(layout_dn, "compatible", NULL)) {
> + pr_debug("%s() - skipping %pOF, no compatible prop\n",
> + __func__, layout_dn);
> + return 0;
> + }
> +
> + /* Fixed layouts are parsed manually somewhere else for now */
> + if (of_match_node(of_nvmem_layout_skip_table, layout_dn)) {
> + pr_debug("%s() - skipping %pOF node\n", __func__, layout_dn);
> + return 0;
> + }
> +
> + if (of_node_check_flag(layout_dn, OF_POPULATED_BUS)) {
> + pr_debug("%s() - skipping %pOF, already populated\n",
> + __func__, layout_dn);
> + return 0;
> + }
> +
> + /* NVMEM layout buses expect only a single device representing the layout */
> + ret = nvmem_layout_create_device(nvmem, layout_dn);
> + if (ret)
> + return ret;
> +
> + of_node_set_flag(layout_dn, OF_POPULATED_BUS);
> +
> + return 0;
> +}
> +
> +struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
> +{
> + return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
> +}
> +EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
> +
> +/*
> + * Returns the number of devices populated, 0 if the operation was not relevant
> + * for this nvmem device, an error code otherwise.
> + */
> +int nvmem_populate_layout(struct nvmem_device *nvmem)
> +{
> + struct device_node *nvmem_dn, *layout_dn;
> + int ret;
> +
> + layout_dn = of_nvmem_layout_get_container(nvmem);
> + if (!layout_dn)
> + return 0;
> +
> + nvmem_dn = of_node_get(nvmem->dev.of_node);
> + if (!nvmem_dn) {
> + of_node_put(layout_dn);
> + return 0;
> + }
> +
> + /* Ensure the layout driver is loaded */
> + of_request_module(layout_dn);
> +
> + /* Populate the layout device */
> + device_links_supplier_sync_state_pause();
> + ret = nvmem_layout_bus_populate(nvmem, layout_dn);
> + device_links_supplier_sync_state_resume();
> +
> + of_node_put(nvmem_dn);
> + of_node_put(layout_dn);
> + return ret;
> +}
> +
> +void nvmem_destroy_layout(struct nvmem_device *nvmem)
> +{
> + struct device_node *layout_dn;
> +
> + layout_dn = of_nvmem_layout_get_container(nvmem);
> + if (!layout_dn)
> + return;
> +
> + of_node_clear_flag(layout_dn, OF_POPULATED_BUS);
> + put_device(nvmem->layout_dev);
> +
> + of_node_put(layout_dn);
> +}
> +
> +int nvmem_layout_bus_register(void)
> +{
> + int ret;
> +
> + ret = device_register(&nvmem_layout_bus);
> + if (ret) {
> + put_device(&nvmem_layout_bus);
> + return ret;
> + }
> +
> + ret = bus_register(&nvmem_layout_bus_type);
> + if (ret) {
> + device_unregister(&nvmem_layout_bus);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +void nvmem_layout_bus_unregister(void)
> +{
> + bus_unregister(&nvmem_layout_bus_type);
> + device_unregister(&nvmem_layout_bus);
> +}
> +#endif
> diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
> index 59fc87ccfcff..191b2540d347 100644
> --- a/drivers/nvmem/layouts/onie-tlv.c
> +++ b/drivers/nvmem/layouts/onie-tlv.c
> @@ -226,16 +226,44 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
> return 0;
> }
>
> +static int onie_tlv_probe(struct device *dev)
> +{
> + struct nvmem_layout *layout;
> +
> + layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
> + if (!layout)
> + return -ENOMEM;
> +
> + layout->add_cells = onie_tlv_parse_table;
> + layout->dev = dev;
> +
> + dev_set_drvdata(dev, layout);
> +
> + return nvmem_layout_register(layout);
> +}
> +
> +static int onie_tlv_remove(struct device *dev)
> +{
> + struct nvmem_layout *layout = dev_get_drvdata(dev);
> +
> + nvmem_layout_unregister(layout);
> +
> + return 0;
> +}
> +
> static const struct of_device_id onie_tlv_of_match_table[] = {
> { .compatible = "onie,tlv-layout", },
> {},
> };
> MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);
>
> -static struct nvmem_layout onie_tlv_layout = {
> - .name = "ONIE tlv layout",
> - .of_match_table = onie_tlv_of_match_table,
> - .add_cells = onie_tlv_parse_table,
> +static struct nvmem_layout_driver onie_tlv_layout = {
> + .driver = {
> + .name = "onie-tlv-layout",
> + .of_match_table = onie_tlv_of_match_table,
> + .probe = onie_tlv_probe,
> + .remove = onie_tlv_remove,
> + },
> };
> module_nvmem_layout_driver(onie_tlv_layout);
>
> diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
> index 05671371f631..330badebfcf6 100644
> --- a/drivers/nvmem/layouts/sl28vpd.c
> +++ b/drivers/nvmem/layouts/sl28vpd.c
> @@ -135,16 +135,44 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
> return 0;
> }
>
> +static int sl28vpd_probe(struct device *dev)
> +{
> + struct nvmem_layout *layout;
> +
> + layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
> + if (!layout)
> + return -ENOMEM;
> +
> + layout->add_cells = sl28vpd_add_cells;
> + layout->dev = dev;
> +
> + dev_set_drvdata(dev, layout);
> +
> + return nvmem_layout_register(layout);
> +}
> +
> +static int sl28vpd_remove(struct device *dev)
> +{
> + struct nvmem_layout *layout = dev_get_drvdata(dev);
> +
> + nvmem_layout_unregister(layout);
> +
> + return 0;
> +}
> +
> static const struct of_device_id sl28vpd_of_match_table[] = {
> { .compatible = "kontron,sl28-vpd" },
> {},
> };
> MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);
>
> -static struct nvmem_layout sl28vpd_layout = {
> - .name = "sl28-vpd",
> - .of_match_table = sl28vpd_of_match_table,
> - .add_cells = sl28vpd_add_cells,
> +static struct nvmem_layout_driver sl28vpd_layout = {
> + .driver = {
> + .name = "kontron-sl28vpd-layout",
> + .of_match_table = sl28vpd_of_match_table,
> + .probe = sl28vpd_probe,
> + .remove = sl28vpd_remove,
> + },
> };
> module_nvmem_layout_driver(sl28vpd_layout);
>
> diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
> index 2905f9e6fc2a..10537abea008 100644
> --- a/include/linux/nvmem-provider.h
> +++ b/include/linux/nvmem-provider.h
> @@ -154,8 +154,7 @@ struct nvmem_cell_table {
> /**
> * struct nvmem_layout - NVMEM layout definitions
> *
> - * @name: Layout name.
> - * @of_match_table: Open firmware match table.
> + * @dev: Device-model layout device.
> * @add_cells: Will be called if a nvmem device is found which
> * has this layout. The function will add layout
> * specific cells with nvmem_add_one_cell().
> @@ -170,8 +169,7 @@ struct nvmem_cell_table {
> * cells.
> */
> struct nvmem_layout {
> - const char *name;
> - const struct of_device_id *of_match_table;
> + struct device *dev;
> int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
> struct nvmem_layout *layout);
> void (*fixup_cell_info)(struct nvmem_device *nvmem,
> @@ -183,6 +181,10 @@ struct nvmem_layout {
> struct list_head node;
> };
>
> +struct nvmem_layout_driver {
> + struct device_driver driver;
> +};
> +
> #if IS_ENABLED(CONFIG_NVMEM)
>
> struct nvmem_device *nvmem_register(const struct nvmem_config *cfg);
> @@ -197,11 +199,15 @@ void nvmem_del_cell_table(struct nvmem_cell_table *table);
> int nvmem_add_one_cell(struct nvmem_device *nvmem,
> const struct nvmem_cell_info *info);
>
> -int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner);
> -#define nvmem_layout_register(layout) \
> - __nvmem_layout_register(layout, THIS_MODULE)
> +int nvmem_layout_register(struct nvmem_layout *layout);
> void nvmem_layout_unregister(struct nvmem_layout *layout);
>
> +int nvmem_layout_driver_register(struct nvmem_layout_driver *drv);
> +void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv);
> +#define module_nvmem_layout_driver(__nvmem_layout_driver) \
> + module_driver(__nvmem_layout_driver, nvmem_layout_driver_register, \
> + nvmem_layout_driver_unregister)
> +
> const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
> struct nvmem_layout *layout);
>
> @@ -257,9 +263,4 @@ static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_dev
> return NULL;
> }
> #endif /* CONFIG_NVMEM */
> -
> -#define module_nvmem_layout_driver(__layout_driver) \
> - module_driver(__layout_driver, nvmem_layout_register, \
> - nvmem_layout_unregister)
> -
> #endif /* ifndef _LINUX_NVMEM_PROVIDER_H */

2023-10-09 09:45:59

by Srinivas Kandagatla

[permalink] [raw]
Subject: Re: [PATCH v12 2/7] nvmem: Clarify the situation when there is no DT node available



On 06/10/2023 17:32, Miquel Raynal wrote:
> Hi Rafał,
>
> [email protected] wrote on Fri, 06 Oct 2023 13:41:52 +0200:
>
>> On 2023-10-05 17:59, Miquel Raynal wrote:
>>> At a first look it might seem that the presence of the of_node pointer
>>> in the nvmem device does not matter much, but in practice, after looking
>>> deep into the DT core, nvmem_add_cells_from_dt() will simply and always
>>> return NULL if this field is not provided. As most mtd devices don't
>>> populate this field (this could evolve later), it means none of their
>>> children cells will be populated unless no_of_node is explicitly set to
>>> false. In order to clarify the logic, let's add a clear check at the
>>> beginning of this helper.
>>
>> I'm somehow confused by above explanation and code too. I read it
>> carefully 5 times but I can't see what exactly this change helps with.
>>
>> At first look at nvmem_add_cells_from_legacy_of() I can see it uses
>> "of_node" so I don't really agree with "it might seem that the presence
>> of the of_node pointer in the nvmem device does not matter much".
>>
>> You really don't need to look deep into DT core (actually you don't have
>> to look into it at all) to understand that nvmem_add_cells_from_dt()
>> will return 0 (nitpicking: not NULL) for a NULL pointer. It's all made
>> of for_each_child_of_node(). Obviously it does nothing if there is
>> nothing to loop over.
>
> That was not obvious to me as I thought it would start from /, which I
> think some other functions do when you don't provide a start node.
>
>> Given that for_each_child_of_node() is NULL-safe, I think the code from
>> this patch is redundant.
>
> I didn't say it was not safe, just not explicit.
>
>> Later you mention "no_of_node", which I agree is a very non-intuitive
>> config option. As pointed out in another thread, I already sent:
>> [PATCH] Revert "nvmem: add new config option"
>> https://lore.kernel.org/lkml/[email protected]/t/
>
> I actually wanted to find that patch again and could not get my hands
> on it, but it is probably a much better fix than my other mtd patch; I
> agree with you.
>
>> Maybe with the above patch things will finally get clearer and we won't
>> need this PATCH after all?
>
> Yes. Srinivas, what are your plans for the above patch?

for_each_child_of_node() is NULL-safe, so this patch is really not
adding much value TBH.
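
For illustration, a minimal sketch of what the NULL-safety means in
practice; the iterator simply yields nothing when the parent is NULL:

	struct device_node *child;

	/* of_node may be NULL for many MTD-backed nvmem devices: the
	 * loop body is then simply never entered, no crash, no cells. */
	for_each_child_of_node(nvmem->dev.of_node, child)
		pr_debug("would parse cell node %pOF\n", child);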

--srini
>
> Thanks,
> Miquèl

2023-10-11 07:39:14

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Srinivas,

[email protected] wrote on Mon, 9 Oct 2023 10:44:45 +0100:

> On 05/10/2023 16:59, Miquel Raynal wrote:
> > Current layout support was initially written without modules support in
> > mind. When the requirement for module support rose, the existing base
> > was improved to adopt modularization support, but kind of a design flaw
> > was introduced. With the existing implementation, when a storage device
> > registers into NVMEM, the core tries to hook a layout (if any) and
> > populates its cells immediately. This means, if the hardware description
> > expects a layout to be hooked up, but no driver was provided for that,
> > the storage medium will fail to probe and be retried later from
> > scratch. Technically, the layouts are more like a "plus" and, even if we
>
> This is not true. As layouts are a kind of resource for NVMEM providers, ideally the provider driver should defer if there is no matching layout available.

That is not possible: as layouts are now devices, the device will be
populated, but you cannot know when it will actually be probed.

> Expressing this as a weak dependency is going to be an issue:
>
> 1. With creating the sysfs entries and user notifications.

For me, this is not an issue. Greg?

> 2. NVMEM consumers will be in a confused state, with the provider registered but without its cells added yet.

Wow, I feel like we are moving backwards.

Consumers don't know about the nvmem devices; they just care about a
cell. If the cell isn't there, the consumer decides what it wants
to do about that.

We initially discussed that we would not EPROBE_DEFER if the layouts
were not yet available, because the NVMEM device may be created from
the main storage device, and while you don't have your rootfs you don't
have access to your modules. And anyway, it's probably a bad idea to
allow endless probe deferrals on your main storage device.

If the cells are not available at that time, it's not a huge deal: the
consumers will have to wait a bit more (or take any other action; this
is device dependent).
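
For illustration, a hypothetical consumer-side pattern (the cell name
and the fallback are made up, but the consumer calls are the standard
nvmem API):

	u8 mac[ETH_ALEN];
	struct nvmem_cell *cell;
	size_t len;
	void *buf;

	cell = nvmem_cell_get(dev, "mac-address");
	if (IS_ERR(cell)) {
		if (PTR_ERR(cell) == -EPROBE_DEFER)
			return -EPROBE_DEFER;	/* provider not registered yet */
		eth_random_addr(mac);		/* cell absent: local fallback */
	} else {
		buf = nvmem_cell_read(cell, &len);
		nvmem_cell_put(cell);
		if (IS_ERR(buf))
			return PTR_ERR(buf);
		if (len == ETH_ALEN)
			memcpy(mac, buf, ETH_ALEN);
		kfree(buf);
	}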

> --srini
> > consider that the hardware description shall be correct, we could still
> > probe the storage device (especially if it contains the rootfs).
> >
> > One way to overcome this situation is to consider the layouts as
> > devices, and leverage the existing notifier mechanism. When a new NVMEM
> > device is registered, we can:
> > - populate its nvmem-layout child, if any
> > - try to modprobe the relevant driver, if relevant
> > - try to hook the NVMEM device with a layout in the notifier
> > And when a new layout is registered:
> > - try to hook all the existing NVMEM devices which are not yet hooked to
> > a layout with the new layout
> > This way, there is no strong order to enforce, any NVMEM device creation
> > or NVMEM layout driver insertion will be observed as a new event which
> > may lead to the creation of additional cells, without disturbing the
> > probes with costly (and sometimes endless) deferrals.
> >
> > In order to achieve that goal we need:
> > * To keep track of all nvmem devices
> > * To create a new bus for the nvmem-layouts with minimal logic to match
> > nvmem-layout devices with nvmem-layout drivers.
> > All this infrastructure code is created in the layouts.c file.
> >
> > Signed-off-by: Miquel Raynal <[email protected]>
> > Tested-by: Rafał Miłecki <[email protected]>
> > ---

Thanks,
Miquèl

2023-10-11 10:02:57

by Srinivas Kandagatla

[permalink] [raw]
Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

On 11/10/2023 08:38, Miquel Raynal wrote:
> Hi Srinivas,
>
> [email protected] wrote on Mon, 9 Oct 2023 10:44:45 +0100:
>
>> On 05/10/2023 16:59, Miquel Raynal wrote:
>>> Current layout support was initially written without modules support in
>>> mind. When the requirement for module support rose, the existing base
>>> was improved to adopt modularization support, but kind of a design flaw
>>> was introduced. With the existing implementation, when a storage device
>>> registers into NVMEM, the core tries to hook a layout (if any) and
>>> populates its cells immediately. This means, if the hardware description
>>> expects a layout to be hooked up, but no driver was provided for that,
>>> the storage medium will fail to probe and be retried later from
>>> scratch. Technically, the layouts are more like a "plus" and, even if we
>>
>> This is not true. As layouts are a kind of resource for NVMEM providers, ideally the provider driver should defer if there is no matching layout available.
>
> That is not possible: as layouts are now devices, the device will be
> populated, but you cannot know when it will actually be probed.
>
>> Expressing this as a weak dependency is going to be an issue:
>>
>> 1. With creating the sysfs entries and user notifications.
>
> For me, this is not an issue. Greg?
>
>> 2. NVMEM consumers will be in a confused state, with the provider registered but without its cells added yet.
>
> Wow, I feel like we are moving backwards.
>
> Consumers don't know about the nvmem devices, they just care about a
> cell. If the cell isn't there, the consumer decides what it wants
> to do with that.
>
> We initially discussed that we would not EPROBE_DEFER if the layouts
> were not yet available because the NVMEM device may be created from a
> device that is the main storage and while you don't have your rootfs,

Does it not sound like we are not expressing the dependencies between
the nvmem provider and the layout drivers correctly?


> you don't have access to your modules. And anyway it's probably a bad
> idea to allow endless probe deferrals on your main storage device.
>
> If the cells are not available at that time, it's not a huge deal? The
> consumers will have to wait a bit more (or take any other action, this
> is device dependent).

In this case the nvmem consumers will get an -ENOENT error, which is
very confusing TBH.


thanks,
Srini

>
>> --srini
>>> consider that the hardware description shall be correct, we could still
>>> probe the storage device (especially if it contains the rootfs).
>>>
>>> One way to overcome this situation is to consider the layouts as
>>> devices, and leverage the existing notifier mechanism. When a new NVMEM
>>> device is registered, we can:
>>> - populate its nvmem-layout child, if any
>>> - try to modprobe the relevant driver, if relevant
>>> - try to hook the NVMEM device with a layout in the notifier
>>> And when a new layout is registered:
>>> - try to hook all the existing NVMEM devices which are not yet hooked to
>>> a layout with the new layout
>>> This way, there is no strong order to enforce, any NVMEM device creation
>>> or NVMEM layout driver insertion will be observed as a new event which
>>> may lead to the creation of additional cells, without disturbing the
>>> probes with costly (and sometimes endless) deferrals.
>>>
>>> In order to achieve that goal we need:
>>> * To keep track of all nvmem devices
>>> * To create a new bus for the nvmem-layouts with minimal logic to match
>>> nvmem-layout devices with nvmem-layout drivers.
>>> All this infrastructure code is created in the layouts.c file.
>>>
>>> Signed-off-by: Miquel Raynal <[email protected]>
>>> Tested-by: Rafał Miłecki <[email protected]>
>>> ---
>
> Thanks,
> Miquèl

2023-10-11 10:33:57

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Greg,

[email protected] wrote on Sat, 7 Oct 2023 18:31:00 +0200:

> On Thu, Oct 05, 2023 at 05:59:05PM +0200, Miquel Raynal wrote:
> > --- a/drivers/nvmem/internals.h
> > +++ b/drivers/nvmem/internals.h
> > @@ -28,8 +28,30 @@ struct nvmem_device {
> > nvmem_reg_read_t reg_read;
> > nvmem_reg_write_t reg_write;
> > struct gpio_desc *wp_gpio;
> > + struct device *layout_dev;
> > struct nvmem_layout *layout;
> > void *priv;
> > };
>
> Wait, is this now 2 struct device in the same structure? Which one is
> the "real" owner of this structure? Why is a pointer to layout_dev
> needed here as a "struct device" and not a real "struct
> nvmem_layout_device" or whatever it's called?
>
> > struct nvmem_layout {
> > - const char *name;
> > - const struct of_device_id *of_match_table;
> > + struct device *dev;
>
> Shouldn't this be a "real" struct device and not just a pointer? If
> not, what does this point to? Who owns the reference to it?

Good point. I initially tried to create the simplest possible bus, but
you're right, it will be nicer if the layout device structure carries
the 'struct device'. I've added a bit of infrastructure and it looks
better; thanks for the suggestion.
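
For reference, a rough sketch of that direction (purely illustrative;
the actual structure is whatever the next version ends up with):

	struct nvmem_layout {
		struct device dev;	/* embedded, owns the lifetime */
		int (*add_cells)(struct nvmem_device *nvmem,
				 struct nvmem_layout *layout);
	};

	#define to_nvmem_layout_device(_dev) \
		container_of(_dev, struct nvmem_layout, dev)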

Thanks,
Miquèl

2023-10-11 10:59:20

by Miquel Raynal

[permalink] [raw]
Subject: Re: [PATCH v12 5/7] nvmem: core: Rework layouts to become regular devices

Hi Srinivas,

> > you don't have access to your modules. And anyway it's probably a bad
> > idea to allow endless probe deferrals on your main storage device.
> >
> > If the cells are not available at that time, it's not a huge deal? The
> > consumers will have to wait a bit more (or take any other action, this
> > is device dependent).
>
> In this case the nvmem consumers will get an -ENOENT error, which is very confusing TBH.

Maybe we can solve that situation like that (based on my current
series):

--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -1448,7 +1448,10 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id)
 	of_node_put(cell_np);
 	if (!cell_entry) {
 		__nvmem_device_put(nvmem);
-		return ERR_PTR(-ENOENT);
+		if (nvmem->layout)
+			return ERR_PTR(-EAGAIN);
+		else
+			return ERR_PTR(-ENOENT);
 	}
 
 	cell = nvmem_create_cell(cell_entry, id, cell_index);


So this way, when a (DT) consumer requests a cell:
- the cell is ready and the consumer gets it;
- the cell is not ready and...
  - the cell comes from a layout -> we return -EAGAIN, which means the
    cell is not yet ready and the request must be retried later (the
    caller may return -EPROBE_DEFER in this case);
  - the cell is simply missing/not existing/not available -> this is a
    real error.
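
For illustration, the caller side could then look like this
(hypothetical consumer code; the cell name is made up):

	cell = of_nvmem_cell_get(np, "calibration");
	if (IS_ERR(cell)) {
		if (PTR_ERR(cell) == -EAGAIN)
			return -EPROBE_DEFER;	/* layout there, cells not ready */
		return PTR_ERR(cell);		/* -ENOENT: cell really absent */
	}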

What do you think?

Thanks,
Miquèl