2023-10-04 22:22:50

by Miquel Raynal

Subject: [PATCH v11 0/7] NVMEM cells in sysfs

Hello,

As part of a previous effort, support for dynamic NVMEM layouts was
brought into mainline, helping a lot in getting information from NVMEM
devices at non-static locations. One common example of an NVMEM cell is
the MAC address that a network interface must use. Sometimes the cell
content is mainly (or only) useful to the kernel, and sometimes it is
not. Users might also want to know the content of cells such as the
manufacturing place and date, the hardware version, the unique ID, etc.
There are two possibilities in this case: either users re-implement
their own parser to go through the whole device and search for the
information they want, or the kernel exposes the content of the cells
when deemed relevant. The second approach sounds far more sensible than
the first one, as it avoids useless code duplication, so here is a
series bringing NVMEM cell content to the user through sysfs.

Here is a real life example with a Marvell Armada 7040 TN48m switch:

$ nvmem=/sys/bus/nvmem/devices/1-00563/
$ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
country-code@77
00000000 54 57 |TW|
crc32@88
00000000 bb cd 51 98 |..Q.|
device-version@49
00000000 02 |.|
diag-version@80
00000000 56 31 2e 30 2e 30 |V1.0.0|
label-revision@4c
00000000 44 31 |D1|
mac-address@2c
00000000 18 be 92 13 9a 00 |......|
manufacture-date@34
00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
manufacturer@72
00000000 44 4e 49 |DNI|
num-macs@6e
00000000 00 40 |.@|
onie-version@61
00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
platform-name@50
00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
product-name@d
00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
serial-number@19
00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
vendor@7b
00000000 44 4e 49 |DNI|

Current support does not include:
* The knowledge of the type of data (binary vs. ASCII), so by default
all cells are exposed in binary form.
* Write support.

Changes in v11:
* The nvmem layouts are now regular devices and not platform devices
anymore. They are registered on the nvmem-layout bus (so there is a
new /sys/bus/nvmem-layouts entry that gets created). All the code for
this new bus is located under drivers/nvmem/layouts.c and is part of
the main core. The core device-driver logic applies without too much
additional code besides the registration of the bus and a bit of
glue. I see no need for more detailed structures for now but this can
be improved later as needed.

Changes in v10:
* All preparation patches have been picked-up by Srinivas.
* Rebased on top of v6.6-rc1.
* Fix an error path in the probe due to the recent additions.

Changes in v9:
* Hopefully fixed the creation of sysfs entries when describing the
cells using the legacy layout, as reported by Chen-Yu.
* Dropped the nvmem-specific device list and used the driver core list
instead as advised by Greg.

Changes in v8:
* Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
* Add a patch to return NULL when no layout is found (reported by Dan
Carpenter).
* Fixed the documentation as well as the cover letter regarding the
addition of addresses in the cell names.

Changes in v7:
* Rework the layouts registration mechanism to use the platform devices
logic.
* Fix the two issues reported by Daniel Golle and Chen-Yu Tsai, one of
them consisting in suffixing '@<offset>' to the cell name when creating
the sysfs files, in order to be sure they are all unique.
* Update the doc.

Changes in v6:
* ABI documentation style fixes reported by Randy Dunlap:
s|cells/ folder|"cells" folder|
Missing period at the end of the final note.
s|Ex::|Example::|
* Remove spurious patch from the previous resubmission.

Resending v5:
* I forgot the mailing list in my former submission, both are absolutely
identical otherwise.

Changes in v5:
* Rebased on last -rc1, fixing a conflict and skipping the first two
patches already taken by Greg.
* Collected tags from Greg.
* Split the nvmem patch into two, one which just moves the cells
creation and the other which adds the cells.

Changes in v4:
* Use a core helper to count the number of cells in a list.
* Provide sysfs attributes a private member which is the entry itself to
avoid the need for looking up the nvmem device and then looping over
all the cells to find the right one.

Changes in v3:
* Patch 1 is new: fix a style issue which bothered me when reading the
core.
* Patch 2 is new: Don't error out when an attribute group does not
contain any attributes; it's easier for developers to handle "empty"
directories this way. It avoids strange/bad solutions being
implemented and does not cost much.
* Drop the is_visible hook as it is no longer needed.
* Stop allocating an empty attribute array to comply with the sysfs core
checks (this check has been altered in the first commits).
* Fix a missing tab in the ABI doc.

Changes in v2:
* Do not mention the cells might become writable in the future in the
ABI documentation.
* Fix a wrong return value reported by Dan and kernel test robot.
* Implement .is_bin_visible().
* Avoid overwriting the list of attribute groups, but keep the cells
attribute group writable as we need to populate it at run time.
* Improve the commit messages.
* Give a real life example in the cover letter.

Miquel Raynal (7):
of: device: Export of_device_make_bus_id()
nvmem: Clarify the situation when there is no DT node available
nvmem: Move of_nvmem_layout_get_container() in another header
nvmem: Create a header for internal sharing
nvmem: core: Rework layouts to become regular devices
ABI: sysfs-nvmem-cells: Expose cells through sysfs
nvmem: core: Expose cells through sysfs

Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 308 +++++++++++++++-----
drivers/nvmem/internals.h | 40 +++
drivers/nvmem/layouts.c | 171 +++++++++++
drivers/nvmem/layouts/onie-tlv.c | 37 ++-
drivers/nvmem/layouts/sl28vpd.c | 37 ++-
drivers/of/device.c | 41 +++
drivers/of/platform.c | 40 ---
include/linux/nvmem-consumer.h | 7 -
include/linux/nvmem-provider.h | 38 ++-
include/linux/of_device.h | 6 +
12 files changed, 614 insertions(+), 134 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
create mode 100644 drivers/nvmem/internals.h
create mode 100644 drivers/nvmem/layouts.c

--
2.34.1


2023-10-04 22:22:54

by Miquel Raynal

Subject: [PATCH v11 1/7] of: device: Export of_device_make_bus_id()

This helper is really handy for creating unique device names based on
the device tree path. We may need it outside of the OF core (in the
NVMEM subsystem), so let's export it. As this helper has nothing
platform specific, let's move it from of/platform.c to of/device.c so we
can add its prototype to of_device.h.
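
For illustration only (not part of this patch), here is a minimal sketch
of how a subsystem outside the OF core could use the exported helper to
name a bare device created for a DT node. It mirrors what the nvmem
layouts code introduced later in this series does; the function and
parent names are purely hypothetical:

#include <linux/device.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/slab.h>

static void example_device_release(struct device *dev)
{
	of_node_put(dev->of_node);
	kfree(dev);
}

static struct device *example_create_device_for_node(struct device *parent,
						     struct device_node *np)
{
	struct device *dev;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return NULL;

	device_initialize(dev);
	dev->parent = parent;
	dev->release = example_device_release;
	device_set_node(dev, of_fwnode_handle(of_node_get(np)));
	of_device_make_bus_id(dev);	/* unique name derived from the DT path */

	if (device_add(dev)) {
		put_device(dev);
		return NULL;
	}

	return dev;
}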

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/of/device.c | 41 +++++++++++++++++++++++++++++++++++++++
drivers/of/platform.c | 40 --------------------------------------
include/linux/of_device.h | 6 ++++++
3 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 1ca42ad9dd15..6e9572c4af83 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -304,3 +304,44 @@ int of_device_uevent_modalias(const struct device *dev, struct kobj_uevent_env *
return 0;
}
EXPORT_SYMBOL_GPL(of_device_uevent_modalias);
+
+/**
+ * of_device_make_bus_id - Use the device node data to assign a unique name
+ * @dev: pointer to device structure that is linked to a device tree node
+ *
+ * This routine will first try using the translated bus address to
+ * derive a unique name. If it cannot, then it will prepend names from
+ * parent nodes until a unique name can be derived.
+ */
+void of_device_make_bus_id(struct device *dev)
+{
+ struct device_node *node = dev->of_node;
+ const __be32 *reg;
+ u64 addr;
+ u32 mask;
+
+ /* Construct the name, using parent nodes if necessary to ensure uniqueness */
+ while (node->parent) {
+ /*
+ * If the address can be translated, then that is as much
+ * uniqueness as we need. Make it the first component and return
+ */
+ reg = of_get_property(node, "reg", NULL);
+ if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
+ if (!of_property_read_u32(node, "mask", &mask))
+ dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
+ addr, ffs(mask) - 1, node, dev_name(dev));
+
+ else
+ dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
+ addr, node, dev_name(dev));
+ return;
+ }
+
+ /* format arguments only used if dev_name() resolves to NULL */
+ dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
+ kbasename(node->full_name), dev_name(dev));
+ node = node->parent;
+ }
+}
+EXPORT_SYMBOL_GPL(of_device_make_bus_id);
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index f235ab55b91e..be32e28c6f55 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -97,46 +97,6 @@ static const struct of_device_id of_skipped_node_table[] = {
* mechanism for creating devices from device tree nodes.
*/

-/**
- * of_device_make_bus_id - Use the device node data to assign a unique name
- * @dev: pointer to device structure that is linked to a device tree node
- *
- * This routine will first try using the translated bus address to
- * derive a unique name. If it cannot, then it will prepend names from
- * parent nodes until a unique name can be derived.
- */
-static void of_device_make_bus_id(struct device *dev)
-{
- struct device_node *node = dev->of_node;
- const __be32 *reg;
- u64 addr;
- u32 mask;
-
- /* Construct the name, using parent nodes if necessary to ensure uniqueness */
- while (node->parent) {
- /*
- * If the address can be translated, then that is as much
- * uniqueness as we need. Make it the first component and return
- */
- reg = of_get_property(node, "reg", NULL);
- if (reg && (addr = of_translate_address(node, reg)) != OF_BAD_ADDR) {
- if (!of_property_read_u32(node, "mask", &mask))
- dev_set_name(dev, dev_name(dev) ? "%llx.%x.%pOFn:%s" : "%llx.%x.%pOFn",
- addr, ffs(mask) - 1, node, dev_name(dev));
-
- else
- dev_set_name(dev, dev_name(dev) ? "%llx.%pOFn:%s" : "%llx.%pOFn",
- addr, node, dev_name(dev));
- return;
- }
-
- /* format arguments only used if dev_name() resolves to NULL */
- dev_set_name(dev, dev_name(dev) ? "%s:%s" : "%s",
- kbasename(node->full_name), dev_name(dev));
- node = node->parent;
- }
-}
-
/**
* of_device_alloc - Allocate and initialize an of_device
* @np: device node to assign to device
diff --git a/include/linux/of_device.h b/include/linux/of_device.h
index 2c7a3d4bc775..a72661e47faa 100644
--- a/include/linux/of_device.h
+++ b/include/linux/of_device.h
@@ -40,6 +40,9 @@ static inline int of_dma_configure(struct device *dev,
{
return of_dma_configure_id(dev, np, force_dma, NULL);
}
+
+void of_device_make_bus_id(struct device *dev);
+
#else /* CONFIG_OF */

static inline int of_driver_match_device(struct device *dev,
@@ -82,6 +85,9 @@ static inline int of_dma_configure(struct device *dev,
{
return 0;
}
+
+static inline void of_device_make_bus_id(struct device *dev) {}
+
#endif /* CONFIG_OF */

#endif /* _LINUX_OF_DEVICE_H */
--
2.34.1

2023-10-04 22:23:08

by Miquel Raynal

Subject: [PATCH v11 4/7] nvmem: Create a header for internal sharing

Before adding all the NVMEM layout bus infrastructure to the core, let's
move the main nvmem_device structure into an internal header, only
available to the core. This way all the additional code can be added in
a dedicated file in order to keep the current core file tidy.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 24 +-----------------------
drivers/nvmem/internals.h | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 36 insertions(+), 23 deletions(-)
create mode 100644 drivers/nvmem/internals.h

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index c63057a7a3b8..073fe4a73e37 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -19,29 +19,7 @@
#include <linux/of.h>
#include <linux/slab.h>

-struct nvmem_device {
- struct module *owner;
- struct device dev;
- int stride;
- int word_size;
- int id;
- struct kref refcnt;
- size_t size;
- bool read_only;
- bool root_only;
- int flags;
- enum nvmem_type type;
- struct bin_attribute eeprom;
- struct device *base_dev;
- struct list_head cells;
- const struct nvmem_keepout *keepout;
- unsigned int nkeepout;
- nvmem_reg_read_t reg_read;
- nvmem_reg_write_t reg_write;
- struct gpio_desc *wp_gpio;
- struct nvmem_layout *layout;
- void *priv;
-};
+#include "internals.h"

#define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
new file mode 100644
index 000000000000..ce353831cd65
--- /dev/null
+++ b/drivers/nvmem/internals.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_NVMEM_INTERNALS_H
+#define _LINUX_NVMEM_INTERNALS_H
+
+#include <linux/device.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+
+struct nvmem_device {
+ struct module *owner;
+ struct device dev;
+ struct list_head node;
+ int stride;
+ int word_size;
+ int id;
+ struct kref refcnt;
+ size_t size;
+ bool read_only;
+ bool root_only;
+ int flags;
+ enum nvmem_type type;
+ struct bin_attribute eeprom;
+ struct device *base_dev;
+ struct list_head cells;
+ const struct nvmem_keepout *keepout;
+ unsigned int nkeepout;
+ nvmem_reg_read_t reg_read;
+ nvmem_reg_write_t reg_write;
+ struct gpio_desc *wp_gpio;
+ struct nvmem_layout *layout;
+ void *priv;
+};
+
+#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
--
2.34.1

2023-10-04 22:23:17

by Miquel Raynal

Subject: [PATCH v11 7/7] nvmem: core: Expose cells through sysfs

The binary content of nvmem devices is available to the user so in the
easiest cases, finding the content of a cell is rather easy as it is
just a matter of looking at a known and fixed offset. However, nvmem
layouts have been recently introduced to cope with more advanced
situations, where the offset and size of the cells is not known in
advance or is dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of each
cell, regardless of its position/size in the underlying
device. Unfortunately, this information is not accessible to users
unless they fully re-implement the parser logic in userland.

Let's expose the cells and their content through sysfs to avoid these
situations. Of course the relevant NVMEM sysfs Kconfig option must be
enabled for this support to be available.

Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
group member will be filled at runtime only when relevant and will
remain empty otherwise. In this case, as the cells attribute group will
be empty, it will not lead to any additional folder/file creation.

Exposed cells are read-only. There is, in practice, everything in the
core to support a write path, but as I don't see any need for that, I
prefer to keep the interface simple (and probably safer). The interface
is documented as being in the "testing" state, which means we can later
add a write attribute if deemed relevant.
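
As a quick illustration of the resulting interface from userspace, a
cell can be read like any other sysfs binary attribute. This is just a
sketch: the device and cell path below are borrowed from the cover
letter example and are of course board specific (cells larger than the
buffer would need a read loop):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Board-specific path, taken from the cover letter example */
	const char *path = "/sys/bus/nvmem/devices/1-00563/cells/product-name@d";
	char buf[64];
	ssize_t len;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	len = read(fd, buf, sizeof(buf));
	close(fd);
	if (len < 0) {
		perror("read");
		return 1;
	}

	/* Raw (possibly binary) cell content */
	fwrite(buf, 1, len, stdout);

	return 0;
}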

Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
---
drivers/nvmem/core.c | 116 ++++++++++++++++++++++++++++++++++++++
drivers/nvmem/internals.h | 1 +
2 files changed, 117 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 1f311c899ae1..bb29cfe11334 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -303,6 +303,43 @@ static umode_t nvmem_bin_attr_is_visible(struct kobject *kobj,
return nvmem_bin_attr_get_umode(nvmem);
}

+static struct nvmem_cell *nvmem_create_cell(struct nvmem_cell_entry *entry,
+ const char *id, int index);
+
+static ssize_t nvmem_cell_attr_read(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ struct nvmem_cell_entry *entry;
+ struct nvmem_cell *cell = NULL;
+ size_t cell_sz, read_len;
+ void *content;
+
+ entry = attr->private;
+ cell = nvmem_create_cell(entry, entry->name, 0);
+ if (IS_ERR(cell))
+ return PTR_ERR(cell);
+
+ if (!cell)
+ return -EINVAL;
+
+ content = nvmem_cell_read(cell, &cell_sz);
+ if (IS_ERR(content)) {
+ read_len = PTR_ERR(content);
+ goto destroy_cell;
+ }
+
+ read_len = min_t(unsigned int, cell_sz - pos, count);
+ memcpy(buf, content + pos, read_len);
+ kfree(content);
+
+destroy_cell:
+ kfree_const(cell->id);
+ kfree(cell);
+
+ return read_len;
+}
+
/* default read/write permissions */
static struct bin_attribute bin_attr_rw_nvmem = {
.attr = {
@@ -324,11 +361,21 @@ static const struct attribute_group nvmem_bin_group = {
.is_bin_visible = nvmem_bin_attr_is_visible,
};

+/* Cell attributes will be dynamically allocated */
+static struct attribute_group nvmem_cells_group = {
+ .name = "cells",
+};
+
static const struct attribute_group *nvmem_dev_groups[] = {
&nvmem_bin_group,
NULL,
};

+static const struct attribute_group *nvmem_cells_groups[] = {
+ &nvmem_cells_group,
+ NULL,
+};
+
static struct bin_attribute bin_attr_nvmem_eeprom_compat = {
.attr = {
.name = "eeprom",
@@ -384,6 +431,69 @@ static void nvmem_sysfs_remove_compat(struct nvmem_device *nvmem,
device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
}

+static int nvmem_dev_populate_sysfs_cells(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+ struct bin_attribute **cells_attrs, *attrs;
+ struct nvmem_cell_entry *entry;
+ unsigned int ncells = 0, i = 0;
+ int ret = 0;
+
+ mutex_lock(&nvmem_mutex);
+
+ if (list_empty(&nvmem->cells) || nvmem->sysfs_cells_populated) {
+ nvmem_cells_group.bin_attrs = NULL;
+ goto unlock_mutex;
+ }
+
+ /* Allocate an array of attributes with a sentinel */
+ ncells = list_count_nodes(&nvmem->cells);
+ cells_attrs = devm_kcalloc(&nvmem->dev, ncells + 1,
+ sizeof(struct bin_attribute *), GFP_KERNEL);
+ if (!cells_attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ attrs = devm_kcalloc(&nvmem->dev, ncells, sizeof(struct bin_attribute), GFP_KERNEL);
+ if (!attrs) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ /* Initialize each attribute to take the name and size of the cell */
+ list_for_each_entry(entry, &nvmem->cells, node) {
+ sysfs_bin_attr_init(&attrs[i]);
+ attrs[i].attr.name = devm_kasprintf(&nvmem->dev, GFP_KERNEL,
+ "%s@%x", entry->name,
+ entry->offset);
+ attrs[i].attr.mode = 0444;
+ attrs[i].size = entry->bytes;
+ attrs[i].read = &nvmem_cell_attr_read;
+ attrs[i].private = entry;
+ if (!attrs[i].attr.name) {
+ ret = -ENOMEM;
+ goto unlock_mutex;
+ }
+
+ cells_attrs[i] = &attrs[i];
+ i++;
+ }
+
+ nvmem_cells_group.bin_attrs = cells_attrs;
+
+ ret = devm_device_add_groups(&nvmem->dev, nvmem_cells_groups);
+ if (ret)
+ goto unlock_mutex;
+
+ nvmem->sysfs_cells_populated = true;
+
+unlock_mutex:
+ mutex_unlock(&nvmem_mutex);
+
+ return ret;
+}
+
#else /* CONFIG_NVMEM_SYSFS */

static int nvmem_sysfs_setup_compat(struct nvmem_device *nvmem,
@@ -2151,6 +2261,12 @@ static int nvmem_notifier_call(struct notifier_block *notifier,
if (ret)
return notifier_from_errno(ret);

+#ifdef CONFIG_NVMEM_SYSFS
+ ret = nvmem_for_each_dev(nvmem_dev_populate_sysfs_cells);
+ if (ret)
+ return notifier_from_errno(ret);
+#endif
+
return NOTIFY_OK;
}

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
index eb73b59d1fd9..baa1c173be1c 100644
--- a/drivers/nvmem/internals.h
+++ b/drivers/nvmem/internals.h
@@ -30,6 +30,7 @@ struct nvmem_device {
struct gpio_desc *wp_gpio;
struct nvmem_layout *layout;
void *priv;
+ bool sysfs_cells_populated;
};

int nvmem_layout_bus_register(void);
--
2.34.1

2023-10-04 22:23:18

by Miquel Raynal

Subject: [PATCH v11 6/7] ABI: sysfs-nvmem-cells: Expose cells through sysfs

The binary content of nvmem devices is available to the user so in the
easiest cases, finding the content of a cell is rather easy as it is
just a matter of looking at a known and fixed offset. However, nvmem
layouts have been recently introduced to cope with more advanced
situations, where the offset and size of the cells is not known in
advance or is dynamic. When using layouts, more advanced parsers are
used by the kernel in order to give direct access to the content of each
cell regardless of its position/size in the underlying device, but
this information was not accessible to the user.

By exposing the nvmem cells to the user through a dedicated cells/
folder containing one file per cell, we provide straightforward access
to useful user information without the need to re-write a userland
parser. The content of nvmem cells is typically: product names,
manufacturing dates, MAC addresses, etc.

Signed-off-by: Miquel Raynal <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
Documentation/ABI/testing/sysfs-nvmem-cells | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells

diff --git a/Documentation/ABI/testing/sysfs-nvmem-cells b/Documentation/ABI/testing/sysfs-nvmem-cells
new file mode 100644
index 000000000000..7af70adf3690
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-nvmem-cells
@@ -0,0 +1,21 @@
+What: /sys/bus/nvmem/devices/.../cells/<cell-name>
+Date: May 2023
+KernelVersion: 6.5
+Contact: Miquel Raynal <[email protected]>
+Description:
+ The "cells" folder contains one file per cell exposed by the
+ NVMEM device. The name of the file is: <name>@<where>, with
+ <name> being the cell name and <where> its location in the NVMEM
+ device, in hexadecimal (without the '0x' prefix, to mimic device
+ tree node names). The length of the file is the size of the cell
+ (when known). The content of the file is the binary content of
+ the cell (may sometimes be ASCII, likely without trailing
+ character).
+ Note: This file is only present if CONFIG_NVMEM_SYSFS
+ is enabled.
+
+ Example::
+
+ hexdump -C /sys/bus/nvmem/devices/1-00563/cells/product-name@d
+ 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
+ 0000000a
--
2.34.1

2023-10-04 22:23:30

by Miquel Raynal

Subject: [PATCH v11 3/7] nvmem: Move of_nvmem_layout_get_container() in another header

nvmem-consumer.h is included by consumer drivers that extract data from
NVMEM devices, whereas nvmem-provider.h is included by drivers providing
NVMEM content.

The only users of of_nvmem_layout_get_container() outside of the core
are layout drivers, so better move its prototype to nvmem-provider.h.

While we do so, we also move the kdoc associated with the function to
the header rather than the .c file.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 8 --------
include/linux/nvmem-consumer.h | 7 -------
include/linux/nvmem-provider.h | 14 ++++++++++++++
3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 286efd3f5a31..c63057a7a3b8 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -844,14 +844,6 @@ static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
}

#if IS_ENABLED(CONFIG_OF)
-/**
- * of_nvmem_layout_get_container() - Get OF node to layout container.
- *
- * @nvmem: nvmem device.
- *
- * Return: a node pointer with refcount incremented or NULL if no
- * container exists. Use of_node_put() on it when done.
- */
struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
{
return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index 4523e4e83319..960728b10a11 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -241,7 +241,6 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id);
struct nvmem_device *of_nvmem_device_get(struct device_node *np,
const char *name);
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
#else
static inline struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
const char *id)
@@ -254,12 +253,6 @@ static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np,
{
return ERR_PTR(-EOPNOTSUPP);
}
-
-static inline struct device_node *
-of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return NULL;
-}
#endif /* CONFIG_NVMEM && CONFIG_OF */

#endif /* ifndef _LINUX_NVMEM_CONSUMER_H */
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index dae26295e6be..d260738ad03c 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -205,6 +205,16 @@ void nvmem_layout_unregister(struct nvmem_layout *layout);
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

+/**
+ * of_nvmem_layout_get_container() - Get OF node of layout container
+ *
+ * @nvmem: nvmem device
+ *
+ * Return: a node pointer with refcount incremented or NULL if no
+ * container exists. Use of_node_put() on it when done.
+ */
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
+
#else

static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c)
@@ -242,6 +252,10 @@ nvmem_layout_get_match_data(struct nvmem_device *nvmem,
return NULL;
}

+static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
+{
+ return NULL;
+}
#endif /* CONFIG_NVMEM */

#define module_nvmem_layout_driver(__layout_driver) \
--
2.34.1

2023-10-04 22:23:43

by Miquel Raynal

Subject: [PATCH v11 2/7] nvmem: Clarify the situation when there is no DT node available

At first look it might seem that the presence of the of_node pointer
in the nvmem device does not matter much, but in practice, after looking
deep into the DT core, nvmem_add_cells_from_dt() will simply do nothing
and always return 0 if this field is not provided. As most mtd devices
don't populate this field (this could evolve later), it means none of
their child cells will be populated unless no_of_node is explicitly set
to false. In order to clarify the logic, let's add a clear check at the
beginning of this helper.

Signed-off-by: Miquel Raynal <[email protected]>
---
drivers/nvmem/core.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index eaf6a3fe8ca6..286efd3f5a31 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -743,6 +743,9 @@ static int nvmem_add_cells_from_dt(struct nvmem_device *nvmem, struct device_nod

static int nvmem_add_cells_from_legacy_of(struct nvmem_device *nvmem)
{
+ if (!nvmem->dev.of_node)
+ return 0;
+
return nvmem_add_cells_from_dt(nvmem, nvmem->dev.of_node);
}

--
2.34.1

2023-10-04 22:23:51

by Miquel Raynal

Subject: [PATCH v11 5/7] nvmem: core: Rework layouts to become regular devices

Current layout support was initially written without module support in
mind. When the requirement for module support rose, the existing base
was improved to adopt modularization, but a kind of design flaw was
introduced in the process. With the existing implementation, when a
storage device registers into NVMEM, the core tries to hook a layout
(if any) and populates its cells immediately. This means that if the
hardware description expects a layout to be hooked up but no driver has
been provided for it, the storage medium will fail to probe and retry
later from scratch. Technically, the layouts are more like a "plus"
and, even if we assume the hardware description is correct, we could
still probe the storage device (especially if it contains the rootfs).

One way to overcome this situation is to consider the layouts as
devices, and leverage the existing notifier mechanism. When a new NVMEM
device is registered, we can:
- populate its nvmem-layout child, if any
- try to modprobe the relevant driver, if needed
- try to hook the NVMEM device with a layout in the notifier
And when a new layout is registered:
- try to hook all the existing NVMEM devices which are not yet hooked to
a layout with the new layout
This way, there is no strong order to enforce: any NVMEM device creation
or NVMEM layout driver insertion will be observed as a new event which
may lead to the creation of additional cells, without disturbing the
probes with costly (and sometimes endless) deferrals.

In order to achieve that goal we need:
* To keep track of all nvmem devices
* To create a new bus for the nvmem-layouts with minimal logic to match
nvmem-layout devices with nvmem-layout drivers.
All this infrastructure code is created in the layouts.c file.
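
For reference, under this new scheme a layout driver boils down to the
following skeleton (a sketch only, with illustrative names and
compatible string; the real conversions are in the onie-tlv and sl28vpd
hunks below):

#include <linux/device.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>

static int example_add_cells(struct device *dev, struct nvmem_device *nvmem,
			     struct nvmem_layout *layout)
{
	/* Parse the device content and call nvmem_add_one_cell() per cell */
	return 0;
}

static int example_probe(struct device *dev)
{
	struct nvmem_layout *layout;

	layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
	if (!layout)
		return -ENOMEM;

	layout->add_cells = example_add_cells;
	layout->dev = dev;
	dev_set_drvdata(dev, layout);

	return nvmem_layout_register(layout);
}

static int example_remove(struct device *dev)
{
	nvmem_layout_unregister(dev_get_drvdata(dev));

	return 0;
}

static const struct of_device_id example_of_match_table[] = {
	{ .compatible = "vendor,example-layout" },	/* hypothetical */
	{}
};
MODULE_DEVICE_TABLE(of, example_of_match_table);

static struct nvmem_layout_driver example_layout = {
	.driver = {
		.name = "example-layout",
		.of_match_table = example_of_match_table,
		.probe = example_probe,
		.remove = example_remove,
	},
};
module_nvmem_layout_driver(example_layout);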

Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
---
drivers/nvmem/Makefile | 2 +-
drivers/nvmem/core.c | 157 +++++++++++++++++++++-------
drivers/nvmem/internals.h | 4 +
drivers/nvmem/layouts.c | 171 +++++++++++++++++++++++++++++++
drivers/nvmem/layouts/onie-tlv.c | 37 ++++++-
drivers/nvmem/layouts/sl28vpd.c | 37 ++++++-
include/linux/nvmem-provider.h | 24 +++--
7 files changed, 376 insertions(+), 56 deletions(-)
create mode 100644 drivers/nvmem/layouts.c

diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
index 423baf089515..77be96076ea6 100644
--- a/drivers/nvmem/Makefile
+++ b/drivers/nvmem/Makefile
@@ -4,7 +4,7 @@
#

obj-$(CONFIG_NVMEM) += nvmem_core.o
-nvmem_core-y := core.o
+nvmem_core-y := core.o layouts.o
obj-y += layouts/

# Devices
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 073fe4a73e37..1f311c899ae1 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -53,6 +53,7 @@ static LIST_HEAD(nvmem_cell_tables);
static DEFINE_MUTEX(nvmem_lookup_mutex);
static LIST_HEAD(nvmem_lookup_list);

+struct notifier_block nvmem_nb;
static BLOCKING_NOTIFIER_HEAD(nvmem_notifier);

static DEFINE_SPINLOCK(nvmem_layout_lock);
@@ -771,23 +772,16 @@ EXPORT_SYMBOL_GPL(nvmem_layout_unregister);
static struct nvmem_layout *nvmem_layout_get(struct nvmem_device *nvmem)
{
struct device_node *layout_np;
- struct nvmem_layout *l, *layout = ERR_PTR(-EPROBE_DEFER);
+ struct nvmem_layout *l, *layout = NULL;

layout_np = of_nvmem_layout_get_container(nvmem);
if (!layout_np)
return NULL;

- /*
- * In case the nvmem device was built-in while the layout was built as a
- * module, we shall manually request the layout driver loading otherwise
- * we'll never have any match.
- */
- of_request_module(layout_np);
-
spin_lock(&nvmem_layout_lock);

list_for_each_entry(l, &nvmem_layouts, node) {
- if (of_match_node(l->of_match_table, layout_np)) {
+ if (of_match_node(l->dev->driver->of_match_table, layout_np)) {
if (try_module_get(l->owner))
layout = l;

@@ -821,14 +815,6 @@ static int nvmem_add_cells_from_layout(struct nvmem_device *nvmem)
return 0;
}

-#if IS_ENABLED(CONFIG_OF)
-struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
-{
- return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
-}
-EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
-#endif
-
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout)
{
@@ -836,7 +822,7 @@ const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
const struct of_device_id *match;

layout_np = of_nvmem_layout_get_container(nvmem);
- match = of_match_node(layout->of_match_table, layout_np);
+ match = of_match_node(layout->dev->driver->of_match_table, layout_np);

return match ? match->data : NULL;
}
@@ -947,19 +933,6 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
goto err_put_device;
}

- /*
- * If the driver supplied a layout by config->layout, the module
- * pointer will be NULL and nvmem_layout_put() will be a noop.
- */
- nvmem->layout = config->layout ?: nvmem_layout_get(nvmem);
- if (IS_ERR(nvmem->layout)) {
- rval = PTR_ERR(nvmem->layout);
- nvmem->layout = NULL;
-
- if (rval == -EPROBE_DEFER)
- goto err_teardown_compat;
- }
-
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
@@ -978,24 +951,23 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;

- rval = nvmem_add_cells_from_layout(nvmem);
- if (rval)
- goto err_remove_cells;
-
dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);

rval = device_add(&nvmem->dev);
if (rval)
goto err_remove_cells;

+ /* Populate the layout bus */
+ rval = nvmem_populate_layout(nvmem);
+ if (rval)
+ goto err_remove_cells;
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);

return nvmem;

err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
- nvmem_layout_put(nvmem->layout);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
err_put_device:
@@ -2097,13 +2069,122 @@ const char *nvmem_dev_name(struct nvmem_device *nvmem)
}
EXPORT_SYMBOL_GPL(nvmem_dev_name);

+static void nvmem_try_loading_layout_driver(struct nvmem_device *nvmem)
+{
+ struct device_node *layout_np;
+
+ layout_np = of_nvmem_layout_get_container(nvmem);
+ if (layout_np) {
+ of_request_module(layout_np);
+ of_node_put(layout_np);
+ }
+}
+
+static int nvmem_match_available_layout(struct nvmem_device *nvmem)
+{
+ int ret;
+
+ if (nvmem->layout)
+ return 0;
+
+ nvmem->layout = nvmem_layout_get(nvmem);
+ if (!nvmem->layout)
+ return 0;
+
+ ret = nvmem_add_cells_from_layout(nvmem);
+ if (ret) {
+ nvmem_layout_put(nvmem->layout);
+ nvmem->layout = NULL;
+ return ret;
+ }
+
+ return 0;
+}
+
+static int nvmem_dev_match_available_layout(struct device *dev, void *data)
+{
+ struct nvmem_device *nvmem = to_nvmem_device(dev);
+
+ return nvmem_match_available_layout(nvmem);
+}
+
+static int nvmem_for_each_dev(int (*fn)(struct device *dev, void *data))
+{
+ return bus_for_each_dev(&nvmem_bus_type, NULL, NULL, fn);
+}
+
+/*
+ * When an NVMEM device is registered, try to match against a layout and
+ * populate the cells. When an NVMEM layout is probed, ensure all NVMEM devices
+ * which could use it properly expose their cells.
+ */
+static int nvmem_notifier_call(struct notifier_block *notifier,
+ unsigned long event_flags, void *context)
+{
+ struct nvmem_device *nvmem = NULL;
+ int ret;
+
+ switch (event_flags) {
+ case NVMEM_ADD:
+ nvmem = context;
+ break;
+ case NVMEM_LAYOUT_ADD:
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ if (nvmem) {
+ /*
+ * In case the nvmem device was built-in while the layout was
+ * built as a module, manually request loading the layout driver.
+ */
+ nvmem_try_loading_layout_driver(nvmem);
+
+ /* Populate the cells of the new nvmem device from its layout, if any */
+ ret = nvmem_match_available_layout(nvmem);
+ } else {
+ /* NVMEM devices might be "waiting" for this layout */
+ ret = nvmem_for_each_dev(nvmem_dev_match_available_layout);
+ }
+
+ if (ret)
+ return notifier_from_errno(ret);
+
+ return NOTIFY_OK;
+}
+
static int __init nvmem_init(void)
{
- return bus_register(&nvmem_bus_type);
+ int ret;
+
+ ret = bus_register(&nvmem_bus_type);
+ if (ret)
+ return ret;
+
+ ret = nvmem_layout_bus_register();
+ if (ret)
+ goto unregister_nvmem_bus;
+
+ nvmem_nb.notifier_call = &nvmem_notifier_call;
+ ret = nvmem_register_notifier(&nvmem_nb);
+ if (ret)
+ goto unregister_nvmem_layout_bus;
+
+ return 0;
+
+unregister_nvmem_layout_bus:
+ nvmem_layout_bus_unregister();
+unregister_nvmem_bus:
+ bus_unregister(&nvmem_bus_type);
+
+ return ret;
}

static void __exit nvmem_exit(void)
{
+ nvmem_unregister_notifier(&nvmem_nb);
+ nvmem_layout_bus_unregister();
bus_unregister(&nvmem_bus_type);
}

diff --git a/drivers/nvmem/internals.h b/drivers/nvmem/internals.h
index ce353831cd65..eb73b59d1fd9 100644
--- a/drivers/nvmem/internals.h
+++ b/drivers/nvmem/internals.h
@@ -32,4 +32,8 @@ struct nvmem_device {
void *priv;
};

+int nvmem_layout_bus_register(void);
+void nvmem_layout_bus_unregister(void);
+int nvmem_populate_layout(struct nvmem_device *nvmem);
+
#endif /* ifndef _LINUX_NVMEM_INTERNALS_H */
diff --git a/drivers/nvmem/layouts.c b/drivers/nvmem/layouts.c
new file mode 100644
index 000000000000..3b11ec70edec
--- /dev/null
+++ b/drivers/nvmem/layouts.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMEM layout bus handling
+ *
+ * Copyright (C) 2023 Bootlin
+ * Author: Miquel Raynal <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/nvmem-provider.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+
+#include "internals.h"
+
+#if CONFIG_OF
+static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
+{
+ return of_driver_match_device(dev, drv);
+}
+
+static struct bus_type nvmem_layout_bus_type = {
+ .name = "nvmem-layouts",
+ .match = nvmem_layout_bus_match,
+};
+
+static struct device nvmem_layout_bus = {
+ .init_name = "nvmem-layouts",
+};
+
+int __nvmem_layout_driver_register(struct nvmem_layout_driver *drv,
+ struct module *owner)
+{
+ drv->driver.owner = owner;
+ drv->driver.bus = &nvmem_layout_bus_type;
+
+ return driver_register(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(__nvmem_layout_driver_register);
+
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv)
+{
+ driver_unregister(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(nvmem_layout_driver_unregister);
+
+static void nvmem_layout_device_release(struct device *dev)
+{
+ of_node_put(dev->of_node);
+ kfree(dev);
+}
+
+static struct device *of_nvmem_layout_create_device(struct device_node *np)
+{
+ struct device *dev;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ return NULL;
+
+ device_initialize(dev);
+ dev->parent = &nvmem_layout_bus;
+ dev->bus = &nvmem_layout_bus_type;
+ dev->release = nvmem_layout_device_release;
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->dma_mask = &dev->coherent_dma_mask;
+ device_set_node(dev, of_fwnode_handle(of_node_get(np)));
+ of_device_make_bus_id(dev);
+ of_msi_configure(dev, dev->of_node);
+
+ if (device_add(dev)) {
+ put_device(dev);
+ return NULL;
+ }
+
+ return dev;
+}
+
+static const struct of_device_id of_nvmem_layout_skip_table[] = {
+ { .compatible = "fixed-layout", },
+ {}
+};
+
+static int of_nvmem_layout_bus_populate(struct device_node *layout_dn)
+{
+ /* Make sure it has a compatible property */
+ if (!of_get_property(layout_dn, "compatible", NULL)) {
+ pr_debug("%s() - skipping %pOF, no compatible prop\n",
+ __func__, layout_dn);
+ return 0;
+ }
+
+ /* Fixed layouts are parsed manually somewhere else for now */
+ if (of_match_node(of_nvmem_layout_skip_table, layout_dn)) {
+ pr_debug("%s() - skipping %pOF node\n", __func__, layout_dn);
+ return 0;
+ }
+
+ if (of_node_check_flag(layout_dn, OF_POPULATED_BUS)) {
+ pr_debug("%s() - skipping %pOF, already populated\n",
+ __func__, layout_dn);
+ return 0;
+ }
+
+ /* NVMEM layout buses expect only a single device representing the layout */
+ of_nvmem_layout_create_device(layout_dn);
+ of_node_set_flag(layout_dn, OF_POPULATED_BUS);
+
+ return 0;
+}
+
+struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+{
+ return of_get_child_by_name(nvmem->dev.of_node, "nvmem-layout");
+}
+EXPORT_SYMBOL_GPL(of_nvmem_layout_get_container);
+
+int nvmem_populate_layout(struct nvmem_device *nvmem)
+{
+ struct device_node *nvmem_dn, *layout_dn;
+ int ret;
+
+ nvmem_dn = of_node_get(nvmem->dev.of_node);
+ if (!nvmem_dn)
+ return 0;
+
+ layout_dn = of_nvmem_layout_get_container(nvmem);
+ if (!layout_dn) {
+ of_node_put(nvmem_dn);
+ return 0;
+ }
+
+ device_links_supplier_sync_state_pause();
+ ret = of_nvmem_layout_bus_populate(layout_dn);
+ device_links_supplier_sync_state_resume();
+
+ of_node_set_flag(nvmem_dn, OF_POPULATED_BUS);
+
+ of_node_put(layout_dn);
+ of_node_put(nvmem_dn);
+ return ret;
+}
+
+int nvmem_layout_bus_register(void)
+{
+ int ret;
+
+ ret = device_register(&nvmem_layout_bus);
+ if (ret) {
+ put_device(&nvmem_layout_bus);
+ return ret;
+ }
+
+ ret = bus_register(&nvmem_layout_bus_type);
+ if (ret) {
+ device_unregister(&nvmem_layout_bus);
+ return ret;
+ }
+
+ return 0;
+}
+
+void nvmem_layout_bus_unregister(void)
+{
+ bus_unregister(&nvmem_layout_bus_type);
+ device_unregister(&nvmem_layout_bus);
+}
+#endif
diff --git a/drivers/nvmem/layouts/onie-tlv.c b/drivers/nvmem/layouts/onie-tlv.c
index 59fc87ccfcff..9c269e389b28 100644
--- a/drivers/nvmem/layouts/onie-tlv.c
+++ b/drivers/nvmem/layouts/onie-tlv.c
@@ -13,6 +13,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>

#define ONIE_TLV_MAX_LEN 2048
#define ONIE_TLV_CRC_FIELD_SZ 6
@@ -226,16 +227,44 @@ static int onie_tlv_parse_table(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int onie_tlv_probe(struct device *dev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = onie_tlv_parse_table;
+ layout->dev = dev;
+
+ dev_set_drvdata(dev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int onie_tlv_remove(struct device *dev)
+{
+ struct nvmem_layout *layout = dev_get_drvdata(dev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id onie_tlv_of_match_table[] = {
{ .compatible = "onie,tlv-layout", },
{},
};
MODULE_DEVICE_TABLE(of, onie_tlv_of_match_table);

-static struct nvmem_layout onie_tlv_layout = {
- .name = "ONIE tlv layout",
- .of_match_table = onie_tlv_of_match_table,
- .add_cells = onie_tlv_parse_table,
+static struct nvmem_layout_driver onie_tlv_layout = {
+ .driver = {
+ .name = "onie-tlv-layout",
+ .of_match_table = onie_tlv_of_match_table,
+ .probe = onie_tlv_probe,
+ .remove = onie_tlv_remove,
+ },
};
module_nvmem_layout_driver(onie_tlv_layout);

diff --git a/drivers/nvmem/layouts/sl28vpd.c b/drivers/nvmem/layouts/sl28vpd.c
index 05671371f631..6857b1472288 100644
--- a/drivers/nvmem/layouts/sl28vpd.c
+++ b/drivers/nvmem/layouts/sl28vpd.c
@@ -5,6 +5,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/nvmem-provider.h>
#include <linux/of.h>
+#include <linux/platform_device.h>
#include <uapi/linux/if_ether.h>

#define SL28VPD_MAGIC 'V'
@@ -135,16 +136,44 @@ static int sl28vpd_add_cells(struct device *dev, struct nvmem_device *nvmem,
return 0;
}

+static int sl28vpd_probe(struct device *dev)
+{
+ struct nvmem_layout *layout;
+
+ layout = devm_kzalloc(dev, sizeof(*layout), GFP_KERNEL);
+ if (!layout)
+ return -ENOMEM;
+
+ layout->add_cells = sl28vpd_add_cells;
+ layout->dev = dev;
+
+ dev_set_drvdata(dev, layout);
+
+ return nvmem_layout_register(layout);
+}
+
+static int sl28vpd_remove(struct device *dev)
+{
+ struct nvmem_layout *layout = dev_get_drvdata(dev);
+
+ nvmem_layout_unregister(layout);
+
+ return 0;
+}
+
static const struct of_device_id sl28vpd_of_match_table[] = {
{ .compatible = "kontron,sl28-vpd" },
{},
};
MODULE_DEVICE_TABLE(of, sl28vpd_of_match_table);

-static struct nvmem_layout sl28vpd_layout = {
- .name = "sl28-vpd",
- .of_match_table = sl28vpd_of_match_table,
- .add_cells = sl28vpd_add_cells,
+static struct nvmem_layout_driver sl28vpd_layout = {
+ .driver = {
+ .name = "kontron-sl28vpd-layout",
+ .of_match_table = sl28vpd_of_match_table,
+ .probe = sl28vpd_probe,
+ .remove = sl28vpd_remove,
+ },
};
module_nvmem_layout_driver(sl28vpd_layout);

diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index d260738ad03c..a1b982f4092e 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -154,8 +154,7 @@ struct nvmem_cell_table {
/**
* struct nvmem_layout - NVMEM layout definitions
*
- * @name: Layout name.
- * @of_match_table: Open firmware match table.
+ * @dev: Device-model layout device.
* @add_cells: Will be called if a nvmem device is found which
* has this layout. The function will add layout
* specific cells with nvmem_add_one_cell().
@@ -170,8 +169,7 @@ struct nvmem_cell_table {
* cells.
*/
struct nvmem_layout {
- const char *name;
- const struct of_device_id *of_match_table;
+ struct device *dev;
int (*add_cells)(struct device *dev, struct nvmem_device *nvmem,
struct nvmem_layout *layout);
void (*fixup_cell_info)(struct nvmem_device *nvmem,
@@ -183,6 +181,10 @@ struct nvmem_layout {
struct list_head node;
};

+struct nvmem_layout_driver {
+ struct device_driver driver;
+};
+
#if IS_ENABLED(CONFIG_NVMEM)

struct nvmem_device *nvmem_register(const struct nvmem_config *cfg);
@@ -202,6 +204,15 @@ int __nvmem_layout_register(struct nvmem_layout *layout, struct module *owner);
__nvmem_layout_register(layout, THIS_MODULE)
void nvmem_layout_unregister(struct nvmem_layout *layout);

+#define nvmem_layout_driver_register(drv) \
+ __nvmem_layout_driver_register(drv, THIS_MODULE)
+int __nvmem_layout_driver_register(struct nvmem_layout_driver *drv,
+ struct module *owner);
+void nvmem_layout_driver_unregister(struct nvmem_layout_driver *drv);
+#define module_nvmem_layout_driver(__nvmem_layout_driver) \
+ module_driver(__nvmem_layout_driver, nvmem_layout_driver_register, \
+ nvmem_layout_driver_unregister)
+
const void *nvmem_layout_get_match_data(struct nvmem_device *nvmem,
struct nvmem_layout *layout);

@@ -257,9 +268,4 @@ static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_dev
return NULL;
}
#endif /* CONFIG_NVMEM */
-
-#define module_nvmem_layout_driver(__layout_driver) \
- module_driver(__layout_driver, nvmem_layout_register, \
- nvmem_layout_unregister)
-
#endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
--
2.34.1

2023-10-05 14:19:59

by kernel test robot

Subject: Re: [PATCH v11 3/7] nvmem: Move of_nvmem_layout_get_container() in another header

Hi Miquel,

kernel test robot noticed the following build errors:

[auto build test ERROR on robh/for-next]
[also build test ERROR on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc4 next-20231004]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231005-062417
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231004222236.411248-4-miquel.raynal%40bootlin.com
patch subject: [PATCH v11 3/7] nvmem: Move of_nvmem_layout_get_container() in another header
config: i386-tinyconfig (https://download.01.org/0day-ci/archive/20231005/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231005/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All error/warnings (new ones prefixed by >>):

In file included from include/linux/rtc.h:18,
from include/linux/efi.h:20,
from arch/x86/kernel/asm-offsets_32.c:6,
from arch/x86/kernel/asm-offsets.c:29:
>> include/linux/nvmem-provider.h:256:1: error: expected identifier or '(' before '{' token
256 | {
| ^
>> include/linux/nvmem-provider.h:255:35: warning: 'of_nvmem_layout_get_container' declared 'static' but never defined [-Wunused-function]
255 | static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make[3]: *** [scripts/Makefile.build:116: arch/x86/kernel/asm-offsets.s] Error 1
make[3]: Target 'prepare' not remade because of errors.
make[2]: *** [Makefile:1202: prepare0] Error 2
make[2]: Target 'prepare' not remade because of errors.
make[1]: *** [Makefile:234: __sub-make] Error 2
make[1]: Target 'prepare' not remade because of errors.
make: *** [Makefile:234: __sub-make] Error 2
make: Target 'prepare' not remade because of errors.


vim +256 include/linux/nvmem-provider.h

254
> 255 static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
> 256 {
257 return NULL;
258 }
259 #endif /* CONFIG_NVMEM */
260

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-05 14:28:13

by Chen-Yu Tsai

Subject: Re: [PATCH v11 0/7] NVMEM cells in sysfs

On Thu, Oct 5, 2023 at 6:22 AM Miquel Raynal <[email protected]> wrote:
>
> Hello,
>
> As part of a previous effort, support for dynamic NVMEM layouts was
> brought into mainline, helping a lot in getting information from NVMEM
> devices at non-static locations. One common example of an NVMEM cell is
> the MAC address that a network interface must use. Sometimes the cell
> content is mainly (or only) useful to the kernel, and sometimes it is
> not. Users might also want to know the content of cells such as the
> manufacturing place and date, the hardware version, the unique ID, etc.
> There are two possibilities in this case: either users re-implement
> their own parser to go through the whole device and search for the
> information they want, or the kernel exposes the content of the cells
> when deemed relevant. The second approach sounds far more sensible than
> the first one, as it avoids useless code duplication, so here is a
> series bringing NVMEM cell content to the user through sysfs.
>
> Here is a real life example with a Marvell Armada 7040 TN48m switch:
>
> $ nvmem=/sys/bus/nvmem/devices/1-00563/
> $ for i in `ls -1 $nvmem/cells/*`; do basename $i; hexdump -C $i | head -n1; done
> country-code@77
> 00000000 54 57 |TW|
> crc32@88
> 00000000 bb cd 51 98 |..Q.|
> device-version@49
> 00000000 02 |.|
> diag-version@80
> 00000000 56 31 2e 30 2e 30 |V1.0.0|
> label-revision@4c
> 00000000 44 31 |D1|
> mac-address@2c
> 00000000 18 be 92 13 9a 00 |......|
> manufacture-date@34
> 00000000 30 32 2f 32 34 2f 32 30 32 31 20 31 38 3a 35 39 |02/24/2021 18:59|
> manufacturer@72
> 00000000 44 4e 49 |DNI|
> num-macs@6e
> 00000000 00 40 |.@|
> onie-version@61
> 00000000 32 30 32 30 2e 31 31 2d 56 30 31 |2020.11-V01|
> platform-name@50
> 00000000 38 38 46 37 30 34 30 2f 38 38 46 36 38 32 30 |88F7040/88F6820|
> product-name@d
> 00000000 54 4e 34 38 4d 2d 50 2d 44 4e |TN48M-P-DN|
> serial-number@19
> 00000000 54 4e 34 38 31 50 32 54 57 32 30 34 32 30 33 32 |TN481P2TW2042032|
> vendor@7b
> 00000000 44 4e 49 |DNI|
>
> Current support does not include:
> * The knowledge of the type of data (binary vs. ASCII), so by default
> all cells are exposed in binary form.
> * Write support.
>
> Changes in v11:
> * The nvmem layouts are now regular devices and not platform devices
> anymore. They are registered on the nvmem-layout bus (so there is a
> new /sys/bus/nvmem-layouts entry that gets created). All the code for
> this new bus is located under drivers/nvmem/layouts.c and is part of
> the main core. The core device-driver logic applies without too much
> additional code besides the registration of the bus and a bit of
> glue. I see no need for more detailed structures for now but this can
> be improved later as needed.
>
> Changes in v10:
> * All preparation patches have been picked-up by Srinivas.
> * Rebased on top of v6.6-rc1.
> * Fix an error path in the probe due to the recent additions.
>
> Changes in v9:
> * Hopefully fixed the creation of sysfs entries when describing the
> cells using the legacy layout, as reported by Chen-Yu.
> * Dropped the nvmem-specific device list and used the driver core list
> instead as advised by Greg.
>
> Changes in v8:
> * Fix a compilation warning with !CONFIG_NVMEM_SYSFS.
> * Add a patch to return NULL when no layout is found (reported by Dan
> Carpenter).
> * Fixed the documentation as well as the cover letter regarding the
> addition of addresses in the cell names.
>
> Changes in v7:
> * Rework the layouts registration mechanism to use the platform devices
> logic.
> * Fix the two issues reported by Daniel Golle and Chen-Yu Tsai, one of
> them consisting in suffixing '@<offset>' to the cell name when creating
> the sysfs files, in order to be sure they are all unique.
> * Update the doc.
>
> Changes in v6:
> * ABI documentation style fixes reported by Randy Dunlap:
> s|cells/ folder|"cells" folder|
> Missing period at the end of the final note.
> s|Ex::|Example::|
> * Remove spurious patch from the previous resubmission.
>
> Resending v5:
> * I forgot the mailing list in my former submission, both are absolutely
> identical otherwise.
>
> Changes in v5:
> * Rebased on last -rc1, fixing a conflict and skipping the first two
> patches already taken by Greg.
> * Collected tags from Greg.
> * Split the nvmem patch into two, one which just moves the cells
> creation and the other which adds the cells.
>
> Changes in v4:
> * Use a core helper to count the number of cells in a list.
> * Provide sysfs attributes a private member which is the entry itself to
> avoid the need for looking up the nvmem device and then looping over
> all the cells to find the right one.
>
> Changes in v3:
> * Patch 1 is new: fix a style issue which bothered me when reading the
> core.
> * Patch 2 is new: Don't error out when an attribute group does not
> contain any attributes; it's easier for developers to handle "empty"
> directories this way. It avoids strange/bad solutions being
> implemented and does not cost much.
> * Drop the is_visible hook as it is no longer needed.
> * Stop allocating an empty attribute array to comply with the sysfs core
> checks (this check has been altered in the first commits).
> * Fix a missing tab in the ABI doc.
>
> Changes in v2:
> * Do not mention the cells might become writable in the future in the
> ABI documentation.
> * Fix a wrong return value reported by Dan and kernel test robot.
> * Implement .is_bin_visible().
> * Avoid overwriting the list of attribute groups, but keep the cells
> attribute group writable as we need to populate it at run time.
> * Improve the commit messages.
> * Give a real life example in the cover letter.
>
> Miquel Raynal (7):
> of: device: Export of_device_make_bus_id()
> nvmem: Clarify the situation when there is no DT node available
> nvmem: Move of_nvmem_layout_get_container() in another header
> nvmem: Create a header for internal sharing
> nvmem: core: Rework layouts to become regular devices
> ABI: sysfs-nvmem-cells: Expose cells through sysfs
> nvmem: core: Expose cells through sysfs

Tested-by: Chen-Yu Tsai <[email protected]>

on a Juniper (MT8183) Chromebook. Note that this device uses the legacy
layout format.

> Documentation/ABI/testing/sysfs-nvmem-cells | 21 ++
> drivers/nvmem/Makefile | 2 +-
> drivers/nvmem/core.c | 308 +++++++++++++++-----
> drivers/nvmem/internals.h | 40 +++
> drivers/nvmem/layouts.c | 171 +++++++++++
> drivers/nvmem/layouts/onie-tlv.c | 37 ++-
> drivers/nvmem/layouts/sl28vpd.c | 37 ++-
> drivers/of/device.c | 41 +++
> drivers/of/platform.c | 40 ---
> include/linux/nvmem-consumer.h | 7 -
> include/linux/nvmem-provider.h | 38 ++-
> include/linux/of_device.h | 6 +
> 12 files changed, 614 insertions(+), 134 deletions(-)
> create mode 100644 Documentation/ABI/testing/sysfs-nvmem-cells
> create mode 100644 drivers/nvmem/internals.h
> create mode 100644 drivers/nvmem/layouts.c
>
> --
> 2.34.1
>

2023-10-05 14:32:32

by kernel test robot

Subject: Re: [PATCH v11 5/7] nvmem: core: Rework layouts to become regular devices

Hi Miquel,

kernel test robot noticed the following build warnings:

[auto build test WARNING on robh/for-next]
[also build test WARNING on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc4 next-20231005]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231005-062417
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231004222236.411248-6-miquel.raynal%40bootlin.com
patch subject: [PATCH v11 5/7] nvmem: core: Rework layouts to become regular devices
config: sh-defconfig (https://download.01.org/0day-ci/archive/20231005/[email protected]/config)
compiler: sh4-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231005/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> drivers/nvmem/layouts.c:19:5: warning: "CONFIG_OF" is not defined, evaluates to 0 [-Wundef]
19 | #if CONFIG_OF
| ^~~~~~~~~


vim +/CONFIG_OF +19 drivers/nvmem/layouts.c

18
> 19 #if CONFIG_OF
20 static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
21 {
22 return of_driver_match_device(dev, drv);
23 }
24
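
A minimal sketch of one way the guard could be written to avoid the
-Wundef warning, assuming the OF-based matching is only wanted when
device tree support is built in (the function body is the one shown in
the excerpt above):

/* Use #ifdef (or #if IS_ENABLED(CONFIG_OF)) rather than a bare #if on a
 * symbol that is not defined at all when CONFIG_OF is disabled.
 */
#ifdef CONFIG_OF
static int nvmem_layout_bus_match(struct device *dev, struct device_driver *drv)
{
	return of_driver_match_device(dev, drv);
}
#endif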

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-05 16:33:49

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v11 3/7] nvmem: Move of_nvmem_layout_get_container() in another header

Hi Miquel,

kernel test robot noticed the following build errors:

[auto build test ERROR on robh/for-next]
[also build test ERROR on char-misc/char-misc-testing char-misc/char-misc-next char-misc/char-misc-linus linus/master v6.6-rc4 next-20231005]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Miquel-Raynal/of-device-Export-of_device_make_bus_id/20231005-062417
base: https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git for-next
patch link: https://lore.kernel.org/r/20231004222236.411248-4-miquel.raynal%40bootlin.com
patch subject: [PATCH v11 3/7] nvmem: Move of_nvmem_layout_get_container() in another header
config: um-allnoconfig (https://download.01.org/0day-ci/archive/20231005/[email protected]/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231005/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

In file included from block/partitions/msdos.c:31:
In file included from block/partitions/check.h:2:
In file included from include/linux/pagemap.h:11:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
547 | val = __raw_readb(PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
560 | val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
| ^
In file included from block/partitions/msdos.c:31:
In file included from block/partitions/check.h:2:
In file included from include/linux/pagemap.h:11:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
573 | val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
| ^
In file included from block/partitions/msdos.c:31:
In file included from block/partitions/check.h:2:
In file included from include/linux/pagemap.h:11:
In file included from include/linux/highmem.h:12:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
584 | __raw_writeb(value, PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
594 | __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
604 | __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:692:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
692 | readsb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:700:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
700 | readsw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:708:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
708 | readsl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:717:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
717 | writesb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:726:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
726 | writesw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:735:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
735 | writesl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
In file included from block/partitions/msdos.c:32:
In file included from block/partitions/efi.h:19:
In file included from include/linux/efi.h:20:
In file included from include/linux/rtc.h:18:
>> include/linux/nvmem-provider.h:256:1: error: expected identifier or '('
256 | {
| ^
12 warnings and 1 error generated.
--
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:17:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
547 | val = __raw_readb(PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
560 | val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:17:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
573 | val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
| ~~~~~~~~~~ ^
include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:17:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from arch/um/include/asm/hardirq.h:5:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/um/include/asm/io.h:24:
include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
584 | __raw_writeb(value, PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
594 | __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
604 | __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:692:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
692 | readsb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:700:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
700 | readsw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:708:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
708 | readsl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:717:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
717 | writesb(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:726:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
726 | writesw(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
include/asm-generic/io.h:735:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
735 | writesl(PCI_IOBASE + addr, buffer, count);
| ~~~~~~~~~~ ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:18:
>> include/linux/nvmem-provider.h:256:1: error: expected identifier or '('
256 | {
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:97:11: warning: array index 3 is past the end of the array (that has type 'unsigned long[1]') [-Warray-bounds]
97 | return (set->sig[3] | set->sig[2] |
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:97:25: warning: array index 2 is past the end of the array (that has type 'unsigned long[1]') [-Warray-bounds]
97 | return (set->sig[3] | set->sig[2] |
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:98:4: warning: array index 1 is past the end of the array (that has type 'unsigned long[1]') [-Warray-bounds]
98 | set->sig[1] | set->sig[0]) == 0;
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:100:11: warning: array index 1 is past the end of the array (that has type 'unsigned long[1]') [-Warray-bounds]
100 | return (set->sig[1] | set->sig[0]) == 0;
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:113:11: warning: array index 3 is past the end of the array (that has type 'const unsigned long[1]') [-Warray-bounds]
113 | return (set1->sig[3] == set2->sig[3]) &&
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:113:27: warning: array index 3 is past the end of the array (that has type 'const unsigned long[1]') [-Warray-bounds]
113 | return (set1->sig[3] == set2->sig[3]) &&
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^
In file included from kernel/time/alarmtimer.c:18:
In file included from include/linux/rtc.h:38:
In file included from include/linux/seq_file.h:12:
In file included from include/linux/fs.h:33:
In file included from include/linux/percpu-rwsem.h:7:
In file included from include/linux/rcuwait.h:6:
In file included from include/linux/sched/signal.h:6:
include/linux/signal.h:114:5: warning: array index 2 is past the end of the array (that has type 'const unsigned long[1]') [-Warray-bounds]
114 | (set1->sig[2] == set2->sig[2]) &&
| ^ ~
arch/x86/include/asm/signal.h:24:2: note: array 'sig' declared here
24 | unsigned long sig[_NSIG_WORDS];
| ^

vim +256 include/linux/nvmem-provider.h

254
255 static inline struct device_node *of_nvmem_layout_get_container(struct nvmem_device *nvmem);
> 256 {
257 return NULL;
258 }
259 #endif /* CONFIG_NVMEM */
260
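
The error comes from the stray semicolon at the end of line 255: it turns
the static inline stub into a bare declaration, so the '{' on line 256 is
no longer attached to a function body. A minimal sketch of the corrected
stub (presumably the fallback compiled when the surrounding CONFIG_NVMEM
guard is not taken):

/* Drop the trailing ';' so the braces form the function definition. */
static inline struct device_node *
of_nvmem_layout_get_container(struct nvmem_device *nvmem)
{
	return NULL;
}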

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki