2022-04-27 11:45:04

by Clément Léger

[permalink] [raw]
Subject: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

This series adds foundation work to support the lan9662 PCIe card. This
card is meant to be used as an Ethernet switch with 2 x RJ45 ports and
2 x 2.5G SFPs. The lan966x SoCs can be used in two different ways:

- It can run Linux by itself, on the ARM64 cores included in the SoC. This
use-case of the lan966x is currently being upstreamed, using a
traditional Device Tree representation of the lan966x HW blocks [1].
A number of drivers for the different IPs of the SoC have already
been merged in upstream Linux.

- It can be used as a PCIe endpoint, connected to a separate platform
that acts as the PCIe root complex. In this case, all the devices
that are embedded on this SoC are exposed through PCIe BARs and the
ARM64 cores of the SoC are not used. Since this is a PCIe card, it
can be plugged into any platform, of any architecture, that supports PCIe.

The problem that arose is that we want to reuse all the existing OF
compatible drivers that are used in SoC mode to drive the devices
exposed by the card when it is in PCIe endpoint mode.

A previous attempt to tackle this problem was made using fwnode [2].
However, this proved to be way too invasive since it required
modifications in both subsystems and drivers to support fwnode. That
first series did not lead to a consensus, and multiple ideas to support
this use-case were mentioned (ACPI overlay, fwnode, device-tree
overlay). Since fwnode did not seem to be a totally silly idea, we
continued in that direction.

However, on the series that added fwnode support to the reset subsystem,
Rob Herring mentioned that an OF overlay might actually be the best way
to probe PCI devices and populate platform devices from such an overlay.
He also provided a branch containing some commits that helped to
implement this idea on an x86 setup. Due to the dynamic nature of PCI
bus enumeration, some other modifications need to be applied to the
overlay so that it applies correctly. Indeed, it is necessary to modify
the target node of the fragments so that they are applied to the PCI
device that was probed. Moreover, the 'ranges' property must be set
according to the BAR addresses in order to remap devices to the correct
PCI addresses. These modifications are located in the driver since the
remapping is specific to each driver.

After these modifications, this approach proved to be successful and
full support for the aforementioned lan966x PCI card was added. The
modifications needed to support that (applying an overlay on a
dynamically created PCI of_node) are actually minimal and only touch a
few places (pci/of.c). This series contains the 3 commits that are
necessary to do that:

- The first commit creates the root node if it is not present, e.g. on
an x86 setup without a firmware-provided device-tree.
- The second one dynamically creates the PCI bus/device device-tree
node hierarchy using the changeset API.
- Finally, the last commit allows applying an overlay by targeting a
specific device-tree node.

Another problem that might be considered with this series is the fact
that CONFIG_OF is not enabled by default on x86 configurations, so the
driver can't be used without rebuilding a complete kernel with
CONFIG_OF=y. In order to fully support this PCIe card and allow
ordinary users to use this driver, it would be almost mandatory to
enable CONFIG_OF by default on such setups.

A driver using this support was added and can be seen at [3]. This
driver embeds a builtin overlay and applies it to the live tree using
of_overlay_fdt_apply_to_node(). An interrupt driver is also included and
associated with a node that is added by the overlay. The driver also
inserts a specific "ranges" property based on the BAR values, which
allows remapping the device-tree nodes to the BAR addresses dynamically.
This is needed to allow applying the overlay without depending on the
BAR addresses assigned at enumeration.
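
For illustration only, here is a minimal sketch of how such a driver
probe could look. This is not the actual driver from [3]: the probe and
overlay blob symbol names are made up, and the exact
of_overlay_fdt_apply_to_node() signature (overlay FDT, size, changeset
id, target node) is an assumption based on this cover letter.

	/* Hedged sketch; assumes builtin overlay symbols and the patch 3
	 * signature described above. */
	extern char __dtbo_lan966x_pci_begin[];
	extern char __dtbo_lan966x_pci_end[];

	static int lan966x_pci_probe(struct pci_dev *pdev,
				     const struct pci_device_id *id)
	{
		struct device_node *np = pdev->dev.of_node; /* created by patch 2 */
		u32 size = __dtbo_lan966x_pci_end - __dtbo_lan966x_pci_begin;
		int ovcs_id;
		int ret;

		if (!np)
			return -ENODEV;

		ret = pcim_enable_device(pdev);
		if (ret)
			return ret;

		/*
		 * Before applying the overlay, the driver would insert a
		 * 'ranges' property on @np mapping the addresses used in the
		 * overlay to the PCI addresses of the BARs
		 * (pci_resource_start(pdev, 0), ...), so that the overlay does
		 * not depend on the BAR addresses assigned at enumeration.
		 */

		return of_overlay_fdt_apply_to_node(__dtbo_lan966x_pci_begin,
						    size, &ovcs_id, np);
	}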

This series was tested on an x86 kernel built with CONFIG_OF, under a
virtual machine using PCI passthrough.

Link: [1] https://lore.kernel.org/lkml/[email protected]/t/
Link: [2] https://lore.kernel.org/lkml/[email protected]/T/
Link: [3] https://github.com/clementleger/linux/tree/lan966x/of_support

Clément Léger (3):
of: always populate a root node
PCI: of: create DT nodes for PCI devices if they do not exists
of: overlay: add of_overlay_fdt_apply_to_node()

drivers/of/base.c | 16 +++-
drivers/of/overlay.c | 21 +++--
drivers/pci/of.c | 184 +++++++++++++++++++++++++++++++++++++++++++
include/linux/of.h | 17 +++-
4 files changed, 224 insertions(+), 14 deletions(-)

--
2.34.1


2022-04-27 11:45:06

by Clément Léger

[permalink] [raw]
Subject: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

In order to apply overlays to PCI device nodes, the nodes must first
exist. This commit add support to populate a skeleton tree for PCI bus
and devices. These nodes can then be used by drivers to apply overlays.

Co-developed-by: Rob Herring <[email protected]>
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Clément Léger <[email protected]>
---
drivers/pci/of.c | 184 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 184 insertions(+)

diff --git a/drivers/pci/of.c b/drivers/pci/of.c
index cb2e8351c2cc..f2325708726e 100644
--- a/drivers/pci/of.c
+++ b/drivers/pci/of.c
@@ -16,12 +16,194 @@
#include "pci.h"

#ifdef CONFIG_PCI
+static int of_pci_add_property(struct of_changeset *ocs, struct device_node *np,
+ const char *name, const void *value, int length)
+{
+ struct property *prop;
+ int ret = -ENOMEM;
+
+ prop = kzalloc(sizeof(*prop), GFP_KERNEL);
+ if (!prop)
+ return -ENOMEM;
+
+ prop->name = kstrdup(name, GFP_KERNEL);
+ if (!prop->name)
+ goto out_err;
+
+ if (value) {
+ prop->value = kmemdup(value, length, GFP_KERNEL);
+ if (!prop->value)
+ goto out_err;
+ } else {
+ /*
+ * Even if the property has no value, it must be set to a
+ * non-null value since of_get_property() is used to check
+ * some values that might or not have a values (ranges for
+ * instance). Moreover, when the node is released, prop->value
+ * is kfreed so the memory must come from kmalloc.
+ */
+ prop->value = kmalloc(1, GFP_KERNEL);
+ if (!prop->value)
+ goto out_err;
+ }
+
+ of_property_set_flag(prop, OF_DYNAMIC);
+
+ prop->length = length;
+
+ ret = of_changeset_add_property(ocs, np, prop);
+ if (!ret)
+ return 0;
+
+out_err:
+ kfree(prop->value);
+ kfree(prop->name);
+ kfree(prop);
+
+ return ret;
+}
+
+static struct device_node *of_pci_make_node(void)
+{
+ struct device_node *node;
+
+ node = kzalloc(sizeof(*node), GFP_KERNEL);
+ if (!node)
+ return NULL;
+
+ of_node_set_flag(node, OF_DYNAMIC);
+ of_node_set_flag(node, OF_DETACHED);
+ of_node_init(node);
+
+ return node;
+}
+
+static int of_pci_add_cells_props(struct device_node *node,
+ struct of_changeset *cs, int n_addr_cells,
+ int n_size_cells)
+{
+ __be32 val;
+ int ret;
+
+ ret = of_pci_add_property(cs, node, "ranges", NULL, 0);
+ if (ret)
+ return ret;
+
+ val = __cpu_to_be32(n_addr_cells);
+ ret = of_pci_add_property(cs, node, "#address-cells", &val,
+ sizeof(__be32));
+ if (ret)
+ return ret;
+
+ val = __cpu_to_be32(n_size_cells);
+ return of_pci_add_property(cs, node, "#size-cells", &val,
+ sizeof(__be32));
+}
+
+static int of_pci_add_pci_bus_props(struct device_node *node,
+ struct of_changeset *cs)
+{
+ int ret;
+
+ ret = of_pci_add_property(cs, node, "device_type", "pci",
+ strlen("pci") + 1);
+ if (ret)
+ return ret;
+
+ return of_pci_add_cells_props(node, cs, 3, 2);
+}
+
+static void of_pci_make_dev_node(struct pci_dev *dev)
+{
+ static struct of_changeset cs;
+ const char *pci_type = "dev";
+ struct device_node *node;
+ __be32 val[5] = {0};
+ int ret;
+
+ node = of_pci_make_node();
+ if (!node)
+ return;
+
+ node->parent = dev->bus->dev.of_node;
+ of_changeset_init(&cs);
+
+ if (pci_is_bridge(dev)) {
+ ret = of_pci_add_pci_bus_props(node, &cs);
+ if (ret)
+ goto changeset_destroy;
+ pci_type = "pci";
+ }
+
+ node->full_name = kasprintf(GFP_KERNEL, "%s@%x,%x", pci_type,
+ PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
+
+ val[0] = __cpu_to_be32(dev->devfn << 8);
+ val[4] = __cpu_to_be32(SZ_4K);
+ ret = of_pci_add_property(&cs, node, "reg", val, 5 * sizeof(__be32));
+ if (ret)
+ goto changeset_destroy;
+
+ ret = of_changeset_attach_node(&cs, node);
+ if (ret)
+ goto changeset_destroy;
+
+ ret = of_changeset_apply(&cs);
+ if (ret)
+ goto changeset_destroy;
+
+ dev->dev.of_node = node;
+
+ return;
+
+changeset_destroy:
+ of_changeset_destroy(&cs);
+ kfree(node);
+}
+
+static struct device_node *of_pci_create_root_bus_node(struct pci_bus *bus)
+{
+ static struct of_changeset cs;
+ struct device_node *node;
+ int ret;
+
+ node = of_pci_make_node();
+ if (!node)
+ return NULL;
+
+ node->full_name = "pci";
+ node->parent = of_root;
+
+ of_changeset_init(&cs);
+ ret = of_pci_add_pci_bus_props(node, &cs);
+ if (ret)
+ goto changeset_destroy;
+
+ ret = of_changeset_attach_node(&cs, node);
+ if (ret)
+ goto changeset_destroy;
+
+ ret = of_changeset_apply(&cs);
+ if (ret)
+ goto changeset_destroy;
+
+ return node;
+
+changeset_destroy:
+ of_changeset_destroy(&cs);
+ kfree(node);
+
+ return NULL;
+}
+
void pci_set_of_node(struct pci_dev *dev)
{
if (!dev->bus->dev.of_node)
return;
dev->dev.of_node = of_pci_find_child_device(dev->bus->dev.of_node,
dev->devfn);
+ if (!dev->dev.of_node)
+ of_pci_make_dev_node(dev);
if (dev->dev.of_node)
dev->dev.fwnode = &dev->dev.of_node->fwnode;
}
@@ -39,6 +221,8 @@ void pci_set_bus_of_node(struct pci_bus *bus)

if (bus->self == NULL) {
node = pcibios_get_phb_of_node(bus);
+ if (!node)
+ node = of_pci_create_root_bus_node(bus);
} else {
node = of_node_get(bus->self->dev.of_node);
if (node && of_property_read_bool(node, "external-facing"))
--
2.34.1

2022-04-27 18:05:46

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

Hi "Cl?ment,

I love your patch! Yet something to improve:

[auto build test ERROR on helgaas-pci/next]
[also build test ERROR on v5.18-rc4 next-20220427]
[cannot apply to robh/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/Cl-ment-L-ger/add-dynamic-PCI-device-of_node-creation-for-overlay/20220427-190828
base: https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git next
config: microblaze-randconfig-r005-20220427 (https://download.01.org/0day-ci/archive/20220428/[email protected]/config)
compiler: microblaze-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/787f8567f04db060522e198fbdc55a805e99b922
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Cl-ment-L-ger/add-dynamic-PCI-device-of_node-creation-for-overlay/20220427-190828
git checkout 787f8567f04db060522e198fbdc55a805e99b922
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=microblaze SHELL=/bin/bash drivers/pci/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

drivers/pci/of.c: In function 'of_pci_add_property':
>> drivers/pci/of.c:50:9: error: implicit declaration of function 'of_property_set_flag'; did you mean 'of_node_set_flag'? [-Werror=implicit-function-declaration]
50 | of_property_set_flag(prop, OF_DYNAMIC);
| ^~~~~~~~~~~~~~~~~~~~
| of_node_set_flag
>> drivers/pci/of.c:54:15: error: implicit declaration of function 'of_changeset_add_property'; did you mean 'of_pci_add_property'? [-Werror=implicit-function-declaration]
54 | ret = of_changeset_add_property(ocs, np, prop);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| of_pci_add_property
drivers/pci/of.c: In function 'of_pci_make_dev_node':
>> drivers/pci/of.c:129:9: error: implicit declaration of function 'of_changeset_init' [-Werror=implicit-function-declaration]
129 | of_changeset_init(&cs);
| ^~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:147:15: error: implicit declaration of function 'of_changeset_attach_node' [-Werror=implicit-function-declaration]
147 | ret = of_changeset_attach_node(&cs, node);
| ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:151:15: error: implicit declaration of function 'of_changeset_apply' [-Werror=implicit-function-declaration]
151 | ret = of_changeset_apply(&cs);
| ^~~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:160:9: error: implicit declaration of function 'of_changeset_destroy' [-Werror=implicit-function-declaration]
160 | of_changeset_destroy(&cs);
| ^~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +50 drivers/pci/of.c

17
18 #ifdef CONFIG_PCI
19 static int of_pci_add_property(struct of_changeset *ocs, struct device_node *np,
20 const char *name, const void *value, int length)
21 {
22 struct property *prop;
23 int ret = -ENOMEM;
24
25 prop = kzalloc(sizeof(*prop), GFP_KERNEL);
26 if (!prop)
27 return -ENOMEM;
28
29 prop->name = kstrdup(name, GFP_KERNEL);
30 if (!prop->name)
31 goto out_err;
32
33 if (value) {
34 prop->value = kmemdup(value, length, GFP_KERNEL);
35 if (!prop->value)
36 goto out_err;
37 } else {
38 /*
39 * Even if the property has no value, it must be set to a
40 * non-null value since of_get_property() is used to check
41 * some values that might or not have a values (ranges for
42 * instance). Moreover, when the node is released, prop->value
43 * is kfreed so the memory must come from kmalloc.
44 */
45 prop->value = kmalloc(1, GFP_KERNEL);
46 if (!prop->value)
47 goto out_err;
48 }
49
> 50 of_property_set_flag(prop, OF_DYNAMIC);
51
52 prop->length = length;
53
> 54 ret = of_changeset_add_property(ocs, np, prop);
55 if (!ret)
56 return 0;
57
58 out_err:
59 kfree(prop->value);
60 kfree(prop->name);
61 kfree(prop);
62
63 return ret;
64 }
65
66 static struct device_node *of_pci_make_node(void)
67 {
68 struct device_node *node;
69
70 node = kzalloc(sizeof(*node), GFP_KERNEL);
71 if (!node)
72 return NULL;
73
74 of_node_set_flag(node, OF_DYNAMIC);
75 of_node_set_flag(node, OF_DETACHED);
76 of_node_init(node);
77
78 return node;
79 }
80
81 static int of_pci_add_cells_props(struct device_node *node,
82 struct of_changeset *cs, int n_addr_cells,
83 int n_size_cells)
84 {
85 __be32 val;
86 int ret;
87
88 ret = of_pci_add_property(cs, node, "ranges", NULL, 0);
89 if (ret)
90 return ret;
91
92 val = __cpu_to_be32(n_addr_cells);
93 ret = of_pci_add_property(cs, node, "#address-cells", &val,
94 sizeof(__be32));
95 if (ret)
96 return ret;
97
98 val = __cpu_to_be32(n_size_cells);
99 return of_pci_add_property(cs, node, "#size-cells", &val,
100 sizeof(__be32));
101 }
102
103 static int of_pci_add_pci_bus_props(struct device_node *node,
104 struct of_changeset *cs)
105 {
106 int ret;
107
108 ret = of_pci_add_property(cs, node, "device_type", "pci",
109 strlen("pci") + 1);
110 if (ret)
111 return ret;
112
113 return of_pci_add_cells_props(node, cs, 3, 2);
114 }
115
116 static void of_pci_make_dev_node(struct pci_dev *dev)
117 {
118 static struct of_changeset cs;
119 const char *pci_type = "dev";
120 struct device_node *node;
121 __be32 val[5] = {0};
122 int ret;
123
124 node = of_pci_make_node();
125 if (!node)
126 return;
127
128 node->parent = dev->bus->dev.of_node;
> 129 of_changeset_init(&cs);
130
131 if (pci_is_bridge(dev)) {
132 ret = of_pci_add_pci_bus_props(node, &cs);
133 if (ret)
134 goto changeset_destroy;
135 pci_type = "pci";
136 }
137
138 node->full_name = kasprintf(GFP_KERNEL, "%s@%x,%x", pci_type,
139 PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
140
141 val[0] = __cpu_to_be32(dev->devfn << 8);
142 val[4] = __cpu_to_be32(SZ_4K);
143 ret = of_pci_add_property(&cs, node, "reg", val, 5 * sizeof(__be32));
144 if (ret)
145 goto changeset_destroy;
146
> 147 ret = of_changeset_attach_node(&cs, node);
148 if (ret)
149 goto changeset_destroy;
150
> 151 ret = of_changeset_apply(&cs);
152 if (ret)
153 goto changeset_destroy;
154
155 dev->dev.of_node = node;
156
157 return;
158
159 changeset_destroy:
> 160 of_changeset_destroy(&cs);
161 kfree(node);
162 }
163

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-04-27 18:15:44

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

Hi "Cl?ment,

I love your patch! Yet something to improve:

[auto build test ERROR on helgaas-pci/next]
[also build test ERROR on v5.18-rc4 next-20220427]
[cannot apply to robh/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/Cl-ment-L-ger/add-dynamic-PCI-device-of_node-creation-for-overlay/20220427-190828
base: https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git next
config: sparc64-buildonly-randconfig-r003-20220427 (https://download.01.org/0day-ci/archive/20220428/[email protected]/config)
compiler: sparc64-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/787f8567f04db060522e198fbdc55a805e99b922
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Cl-ment-L-ger/add-dynamic-PCI-device-of_node-creation-for-overlay/20220427-190828
git checkout 787f8567f04db060522e198fbdc55a805e99b922
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=sparc64 SHELL=/bin/bash drivers/pci/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

drivers/pci/of.c: In function 'of_pci_add_property':
>> drivers/pci/of.c:54:15: error: implicit declaration of function 'of_changeset_add_property'; did you mean 'of_pci_add_property'? [-Werror=implicit-function-declaration]
54 | ret = of_changeset_add_property(ocs, np, prop);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| of_pci_add_property
drivers/pci/of.c: In function 'of_pci_make_dev_node':
>> drivers/pci/of.c:129:9: error: implicit declaration of function 'of_changeset_init' [-Werror=implicit-function-declaration]
129 | of_changeset_init(&cs);
| ^~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:147:15: error: implicit declaration of function 'of_changeset_attach_node' [-Werror=implicit-function-declaration]
147 | ret = of_changeset_attach_node(&cs, node);
| ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:151:15: error: implicit declaration of function 'of_changeset_apply' [-Werror=implicit-function-declaration]
151 | ret = of_changeset_apply(&cs);
| ^~~~~~~~~~~~~~~~~~
>> drivers/pci/of.c:160:9: error: implicit declaration of function 'of_changeset_destroy' [-Werror=implicit-function-declaration]
160 | of_changeset_destroy(&cs);
| ^~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +54 drivers/pci/of.c

17
18 #ifdef CONFIG_PCI
19 static int of_pci_add_property(struct of_changeset *ocs, struct device_node *np,
20 const char *name, const void *value, int length)
21 {
22 struct property *prop;
23 int ret = -ENOMEM;
24
25 prop = kzalloc(sizeof(*prop), GFP_KERNEL);
26 if (!prop)
27 return -ENOMEM;
28
29 prop->name = kstrdup(name, GFP_KERNEL);
30 if (!prop->name)
31 goto out_err;
32
33 if (value) {
34 prop->value = kmemdup(value, length, GFP_KERNEL);
35 if (!prop->value)
36 goto out_err;
37 } else {
38 /*
39 * Even if the property has no value, it must be set to a
40 * non-null value since of_get_property() is used to check
41 * some values that might or not have a values (ranges for
42 * instance). Moreover, when the node is released, prop->value
43 * is kfreed so the memory must come from kmalloc.
44 */
45 prop->value = kmalloc(1, GFP_KERNEL);
46 if (!prop->value)
47 goto out_err;
48 }
49
50 of_property_set_flag(prop, OF_DYNAMIC);
51
52 prop->length = length;
53
> 54 ret = of_changeset_add_property(ocs, np, prop);
55 if (!ret)
56 return 0;
57
58 out_err:
59 kfree(prop->value);
60 kfree(prop->name);
61 kfree(prop);
62
63 return ret;
64 }
65
66 static struct device_node *of_pci_make_node(void)
67 {
68 struct device_node *node;
69
70 node = kzalloc(sizeof(*node), GFP_KERNEL);
71 if (!node)
72 return NULL;
73
74 of_node_set_flag(node, OF_DYNAMIC);
75 of_node_set_flag(node, OF_DETACHED);
76 of_node_init(node);
77
78 return node;
79 }
80
81 static int of_pci_add_cells_props(struct device_node *node,
82 struct of_changeset *cs, int n_addr_cells,
83 int n_size_cells)
84 {
85 __be32 val;
86 int ret;
87
88 ret = of_pci_add_property(cs, node, "ranges", NULL, 0);
89 if (ret)
90 return ret;
91
92 val = __cpu_to_be32(n_addr_cells);
93 ret = of_pci_add_property(cs, node, "#address-cells", &val,
94 sizeof(__be32));
95 if (ret)
96 return ret;
97
98 val = __cpu_to_be32(n_size_cells);
99 return of_pci_add_property(cs, node, "#size-cells", &val,
100 sizeof(__be32));
101 }
102
103 static int of_pci_add_pci_bus_props(struct device_node *node,
104 struct of_changeset *cs)
105 {
106 int ret;
107
108 ret = of_pci_add_property(cs, node, "device_type", "pci",
109 strlen("pci") + 1);
110 if (ret)
111 return ret;
112
113 return of_pci_add_cells_props(node, cs, 3, 2);
114 }
115
116 static void of_pci_make_dev_node(struct pci_dev *dev)
117 {
118 static struct of_changeset cs;
119 const char *pci_type = "dev";
120 struct device_node *node;
121 __be32 val[5] = {0};
122 int ret;
123
124 node = of_pci_make_node();
125 if (!node)
126 return;
127
128 node->parent = dev->bus->dev.of_node;
> 129 of_changeset_init(&cs);
130
131 if (pci_is_bridge(dev)) {
132 ret = of_pci_add_pci_bus_props(node, &cs);
133 if (ret)
134 goto changeset_destroy;
135 pci_type = "pci";
136 }
137
138 node->full_name = kasprintf(GFP_KERNEL, "%s@%x,%x", pci_type,
139 PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
140
141 val[0] = __cpu_to_be32(dev->devfn << 8);
142 val[4] = __cpu_to_be32(SZ_4K);
143 ret = of_pci_add_property(&cs, node, "reg", val, 5 * sizeof(__be32));
144 if (ret)
145 goto changeset_destroy;
146
> 147 ret = of_changeset_attach_node(&cs, node);
148 if (ret)
149 goto changeset_destroy;
150
> 151 ret = of_changeset_apply(&cs);
152 if (ret)
153 goto changeset_destroy;
154
155 dev->dev.of_node = node;
156
157 return;
158
159 changeset_destroy:
> 160 of_changeset_destroy(&cs);
161 kfree(node);
162 }
163

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-05-03 18:20:57

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

On Wed, Apr 27, 2022 at 11:45:01AM +0200, Clément Léger wrote:
> In order to apply overlays to PCI device nodes, the nodes must first
> exist. This commit add support to populate a skeleton tree for PCI bus
> and devices. These nodes can then be used by drivers to apply overlays.
>

While I implemented this creating the nodes as the PCI devices are
created, I think probably we're going to want to create the device node
and any needed parent nodes on demand. Otherwise, just turning on
CONFIG_OF could break platforms.

One potential issue is that fwnode assumes there is either a DT node or
ACPI node. With this, we have the potential for both. I'm not sure how
much that's going to be an issue.

> Co-developed-by: Rob Herring <[email protected]>
> Signed-off-by: Rob Herring <[email protected]>
> Signed-off-by: Clément Léger <[email protected]>
> ---
> drivers/pci/of.c | 184 +++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 184 insertions(+)
>
> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
> index cb2e8351c2cc..f2325708726e 100644
> --- a/drivers/pci/of.c
> +++ b/drivers/pci/of.c
> @@ -16,12 +16,194 @@
> #include "pci.h"
>
> #ifdef CONFIG_PCI
> +static int of_pci_add_property(struct of_changeset *ocs, struct device_node *np,
> + const char *name, const void *value, int length)

Nothing really PCI specific about this function.

The kernel support for creating nodes and properties is pretty poor. We
should improve it with functions like this (in drivers/of/). Maybe the
changeset part should be separate though. We have some cases of creating
properties or nodes already, and whatever new APIs we make those
cases should be able to use them. And if they are converted, then it can
be merged sooner rather than when all the PCI parts are ready.
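
As a rough sketch of the kind of generic helpers being suggested here
(the names of_changeset_add_prop() and of_changeset_add_prop_u32() are
hypothetical; the first one would essentially be of_pci_add_property()
above moved into drivers/of/, i.e. it duplicates the value before adding
it to the changeset):

	/* Hypothetical generic helper in drivers/of/, same behaviour as
	 * of_pci_add_property() in this patch. */
	int of_changeset_add_prop(struct of_changeset *ocs,
				  struct device_node *np,
				  const char *name, const void *value,
				  int length);

	/* Hypothetical typed wrapper built on top of it. */
	int of_changeset_add_prop_u32(struct of_changeset *ocs,
				      struct device_node *np,
				      const char *name, u32 val)
	{
		__be32 be = cpu_to_be32(val);

		/* The value is duplicated by of_changeset_add_prop(), so a
		 * stack variable is fine here. */
		return of_changeset_add_prop(ocs, np, name, &be, sizeof(be));
	}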

> +{
> + struct property *prop;
> + int ret = -ENOMEM;
> +
> + prop = kzalloc(sizeof(*prop), GFP_KERNEL);
> + if (!prop)
> + return -ENOMEM;
> +
> + prop->name = kstrdup(name, GFP_KERNEL);
> + if (!prop->name)
> + goto out_err;
> +
> + if (value) {
> + prop->value = kmemdup(value, length, GFP_KERNEL);
> + if (!prop->value)
> + goto out_err;
> + } else {
> + /*
> + * Even if the property has no value, it must be set to a
> + * non-null value since of_get_property() is used to check
> + * some values that might or not have a values (ranges for
> + * instance). Moreover, when the node is released, prop->value
> + * is kfreed so the memory must come from kmalloc.
> + */
> + prop->value = kmalloc(1, GFP_KERNEL);
> + if (!prop->value)
> + goto out_err;
> + }
> +
> + of_property_set_flag(prop, OF_DYNAMIC);
> +
> + prop->length = length;
> +
> + ret = of_changeset_add_property(ocs, np, prop);
> + if (!ret)
> + return 0;
> +
> +out_err:
> + kfree(prop->value);
> + kfree(prop->name);
> + kfree(prop);
> +
> + return ret;
> +}
> +
> +static struct device_node *of_pci_make_node(void)
> +{

This isn't PCI specific either.

> + struct device_node *node;
> +
> + node = kzalloc(sizeof(*node), GFP_KERNEL);
> + if (!node)
> + return NULL;
> +
> + of_node_set_flag(node, OF_DYNAMIC);
> + of_node_set_flag(node, OF_DETACHED);
> + of_node_init(node);
> +
> + return node;
> +}
> +
> +static int of_pci_add_cells_props(struct device_node *node,
> + struct of_changeset *cs, int n_addr_cells,
> + int n_size_cells)
> +{
> + __be32 val;
> + int ret;
> +
> + ret = of_pci_add_property(cs, node, "ranges", NULL, 0);

The host bridge node is going to need to fill in 'ranges'. Empty ranges
is not valid when there's a change in number of cells.

The root node also will need "#address-cells" and "#size-cells".
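
To make that concrete, a PCI 'ranges' entry under a host bridge node
with #address-cells = 3 / #size-cells = 2, whose parent (the root node)
uses #address-cells = 2 / #size-cells = 2, takes 7 cells: a 3-cell PCI
address, a 2-cell CPU address and a 2-cell size. A hedged sketch of
filling a single 32-bit non-prefetchable memory window (the helper name
is made up; the window addresses would come from the host bridge's
resource windows):

	/* Sketch only: one 32-bit non-prefetchable memory window. */
	static int of_pci_add_host_ranges(struct of_changeset *cs,
					  struct device_node *node,
					  u64 pci_addr, u64 cpu_addr, u64 size)
	{
		__be32 ranges[7];

		ranges[0] = cpu_to_be32(0x02000000);	/* phys.hi: 32-bit MMIO space */
		ranges[1] = cpu_to_be32(upper_32_bits(pci_addr));
		ranges[2] = cpu_to_be32(lower_32_bits(pci_addr));
		ranges[3] = cpu_to_be32(upper_32_bits(cpu_addr));
		ranges[4] = cpu_to_be32(lower_32_bits(cpu_addr));
		ranges[5] = cpu_to_be32(upper_32_bits(size));
		ranges[6] = cpu_to_be32(lower_32_bits(size));

		return of_pci_add_property(cs, node, "ranges", ranges,
					   sizeof(ranges));
	}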

> + if (ret)
> + return ret;
> +
> + val = __cpu_to_be32(n_addr_cells);
> + ret = of_pci_add_property(cs, node, "#address-cells", &val,
> + sizeof(__be32));
> + if (ret)
> + return ret;
> +
> + val = __cpu_to_be32(n_size_cells);
> + return of_pci_add_property(cs, node, "#size-cells", &val,
> + sizeof(__be32));
> +}
> +
> +static int of_pci_add_pci_bus_props(struct device_node *node,
> + struct of_changeset *cs)
> +{
> + int ret;
> +
> + ret = of_pci_add_property(cs, node, "device_type", "pci",
> + strlen("pci") + 1);
> + if (ret)
> + return ret;
> +
> + return of_pci_add_cells_props(node, cs, 3, 2);
> +}
> +
> +static void of_pci_make_dev_node(struct pci_dev *dev)
> +{
> + static struct of_changeset cs;
> + const char *pci_type = "dev";
> + struct device_node *node;
> + __be32 val[5] = {0};
> + int ret;
> +
> + node = of_pci_make_node();
> + if (!node)
> + return;
> +
> + node->parent = dev->bus->dev.of_node;
> + of_changeset_init(&cs);
> +
> + if (pci_is_bridge(dev)) {
> + ret = of_pci_add_pci_bus_props(node, &cs);
> + if (ret)
> + goto changeset_destroy;
> + pci_type = "pci";
> + }
> +
> + node->full_name = kasprintf(GFP_KERNEL, "%s@%x,%x", pci_type,
> + PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn));
> +
> + val[0] = __cpu_to_be32(dev->devfn << 8);
> + val[4] = __cpu_to_be32(SZ_4K);
> + ret = of_pci_add_property(&cs, node, "reg", val, 5 * sizeof(__be32));
> + if (ret)
> + goto changeset_destroy;
> +
> + ret = of_changeset_attach_node(&cs, node);
> + if (ret)
> + goto changeset_destroy;
> +
> + ret = of_changeset_apply(&cs);
> + if (ret)
> + goto changeset_destroy;
> +
> + dev->dev.of_node = node;
> +
> + return;
> +
> +changeset_destroy:
> + of_changeset_destroy(&cs);
> + kfree(node);
> +}
> +
> +static struct device_node *of_pci_create_root_bus_node(struct pci_bus *bus)
> +{
> + static struct of_changeset cs;
> + struct device_node *node;
> + int ret;
> +
> + node = of_pci_make_node();
> + if (!node)
> + return NULL;
> +
> + node->full_name = "pci";
> + node->parent = of_root;
> +
> + of_changeset_init(&cs);
> + ret = of_pci_add_pci_bus_props(node, &cs);
> + if (ret)
> + goto changeset_destroy;
> +
> + ret = of_changeset_attach_node(&cs, node);
> + if (ret)
> + goto changeset_destroy;
> +
> + ret = of_changeset_apply(&cs);
> + if (ret)
> + goto changeset_destroy;
> +
> + return node;
> +
> +changeset_destroy:
> + of_changeset_destroy(&cs);
> + kfree(node);
> +
> + return NULL;
> +}
> +
> void pci_set_of_node(struct pci_dev *dev)
> {
> if (!dev->bus->dev.of_node)
> return;
> dev->dev.of_node = of_pci_find_child_device(dev->bus->dev.of_node,
> dev->devfn);
> + if (!dev->dev.of_node)
> + of_pci_make_dev_node(dev);
> if (dev->dev.of_node)
> dev->dev.fwnode = &dev->dev.of_node->fwnode;
> }
> @@ -39,6 +221,8 @@ void pci_set_bus_of_node(struct pci_bus *bus)
>
> if (bus->self == NULL) {
> node = pcibios_get_phb_of_node(bus);
> + if (!node)
> + node = of_pci_create_root_bus_node(bus);
> } else {
> node = of_node_get(bus->self->dev.of_node);
> if (node && of_property_read_bool(node, "external-facing"))
> --
> 2.34.1
>
>

2022-05-04 00:50:50

by Clément Léger

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

On Tue, 3 May 2022 09:12:06 -0500,
Rob Herring <[email protected]> wrote:

> On Wed, Apr 27, 2022 at 11:45:01AM +0200, Clément Léger wrote:
> > In order to apply overlays to PCI device nodes, the nodes must first
> > exist. This commit add support to populate a skeleton tree for PCI bus
> > and devices. These nodes can then be used by drivers to apply overlays.
> >
>
> While I implemented this creating the nodes as the PCI devices are
> created, I think probably we're going to want to create the device node
> and any needed parent nodes on demand. Otherwise, just turning on
> CONFIG_OF could break platforms.

Ok, so this creation would potentially be done on request from some PCI
driver that wants to apply its overlay on the tree. Should I add a
function such as of_pci_apply_overlay() which would create the PCI node
tree if not present and apply the overlay to the of_node that is
associated with the PCIe device?
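
For the record, a minimal sketch of what such a helper could look like
(the name, signature and behaviour are purely a suggestion following the
question above; on-demand creation of missing parent bus nodes, as Rob
suggests, is only hinted at in a comment):

	/* Hypothetical of_pci_apply_overlay(): make sure the PCI of_node
	 * hierarchy exists, then apply the driver-provided overlay to it. */
	int of_pci_apply_overlay(struct pci_dev *pdev, const void *overlay_fdt,
				 u32 overlay_fdt_size, int *ovcs_id)
	{
		if (!pdev->dev.of_node) {
			/* Would also have to create any missing parent bus
			 * nodes on demand, not only the device node itself. */
			of_pci_make_dev_node(pdev);
		}

		if (!pdev->dev.of_node)
			return -ENODEV;

		return of_overlay_fdt_apply_to_node(overlay_fdt,
						    overlay_fdt_size, ovcs_id,
						    pdev->dev.of_node);
	}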

>
> One potential issue is that fwnode assumes there is either a DT node or
> ACPI node. With this, we have the potential for both. I'm not sure how
> much that's going to be an issue.

Not sure either, but it's better not to play with that.

>
> > Co-developed-by: Rob Herring <[email protected]>
> > Signed-off-by: Rob Herring <[email protected]>
> > Signed-off-by: Clément Léger <[email protected]>
> > ---
> > drivers/pci/of.c | 184 +++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 184 insertions(+)
> >
> > diff --git a/drivers/pci/of.c b/drivers/pci/of.c
> > index cb2e8351c2cc..f2325708726e 100644
> > --- a/drivers/pci/of.c
> > +++ b/drivers/pci/of.c
> > @@ -16,12 +16,194 @@
> > #include "pci.h"
> >
> > #ifdef CONFIG_PCI
> > +static int of_pci_add_property(struct of_changeset *ocs, struct device_node *np,
> > + const char *name, const void *value, int length)
>
> Nothing really PCI specific about this function.
>
> The kernel support for creating nodes and properties is pretty poor. We
> should improve it with functions like this (in drivers/of/). Maybe the
> changeset part should be separate though. We have some cases of creating
> properties or nodes already, and whatever new APIs we make those
> cases should be able to use them. And if they are converted, then it can
> be merged sooner rather than when all the PCI parts are ready.

Ok, so the property creation helpers will first be added in a separate
series then.

> > +
> > +static int of_pci_add_cells_props(struct device_node *node,
> > + struct of_changeset *cs, int n_addr_cells,
> > + int n_size_cells)
> > +{
> > + __be32 val;
> > + int ret;
> > +
> > + ret = of_pci_add_property(cs, node, "ranges", NULL, 0);
>
> The host bridge node is going to need to fill in 'ranges'. Empty ranges
> is not valid when there's a change in number of cells.

Ok, I wasn't aware of that. If I understand correctly, I'll need to
obtain the range of PCI addresses that are behind the bridge to fill in
this 'ranges' property, right?

>
> The root node also will need "#address-cells" and "#size-cells".
>

Ok.


--
Clément Léger,
Embedded Linux and Kernel engineer at Bootlin
https://bootlin.com

2022-05-04 17:38:12

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

In subject:

PCI: of: Create DT nodes ... if they do not exist

The subject could be read as saying that you're going to create DT
nodes before the PCI devices exist, but I think you mean that when we
enumerate a PCI devices, we're *always* going to create a DT node for
it, even if the DT didn't mention it.

Maybe something like:

PCI: of: Create DT node for every PCI device

although I see Rob thinks this should be done on demand instead of
doing it for every device, which sounds sensible to me.

On Wed, Apr 27, 2022 at 11:45:01AM +0200, Clément Léger wrote:
> In order to apply overlays to PCI device nodes, the nodes must first
> exist. This commit add support to populate a skeleton tree for PCI bus
> and devices. These nodes can then be used by drivers to apply overlays.

s/This commit add support/Add support/

Bjorn

2022-05-04 23:16:39

by Clément Léger

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists

On Tue, 3 May 2022 17:53:53 -0500,
Bjorn Helgaas <[email protected]> wrote:

> In subject:
>
> PCI: of: Create DT nodes ... if they do not exist
>
> The subject could be read as saying that you're going to create DT
> nodes before the PCI devices exist, but I think you mean that when we
> enumerate a PCI devices, we're *always* going to create a DT node for
> it, even if the DT didn't mention it.

Hi Bjorn,

Indeed! I'll modify that.

>
> Maybe something like:
>
> PCI: of: Create DT node for every PCI device
>
> although I see Rob thinks this should be done on demand instead of
> doing it for every device, which sounds sensible to me.

Agreed, I'll rework this series.

Thanks,

>
> On Wed, Apr 27, 2022 at 11:45:01AM +0200, Clément Léger wrote:
> > In order to apply overlays to PCI device nodes, the nodes must first
> > exist. This commit add support to populate a skeleton tree for PCI bus
> > and devices. These nodes can then be used by drivers to apply overlays.
>
> s/This commit add support/Add support/
>
> Bjorn



--
Clément Léger,
Embedded Linux and Kernel engineer at Bootlin
https://bootlin.com

2022-05-09 09:26:44

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On 4/27/22 04:44, Clément Léger wrote:
> This series adds foundation work to support the lan9662 PCIe card. This
> card is meant to be used an ethernet switch with 2 x RJ45 ports and
> 2 x 2.5G SFPs. The lan966x SoCs can be used in two different ways:
>
> - It can run Linux by itself, on ARM64 cores included in the SoC. This
> use-case of the lan966x is currently being upstreamed, using a
> traditional Device Tree representation of the lan996x HW blocks [1]
> A number of drivers for the different IPs of the SoC have already
> been merged in upstream Linux.
>
> - It can be used as a PCIe endpoint, connected to a separate platform
> that acts as the PCIe root complex. In this case, all the devices
> that are embedded on this SoC are exposed through PCIe BARs and the
> ARM64 cores of the SoC are not used. Since this is a PCIe card, it
> can be plugged on any platform, of any architecture supporting PCIe.
>
> The problem that arose is that we want to reuse all the existing OF
> compatible drivers that are used when in SoC mode to instantiate the
> PCI device when in PCIe endpoint mode.
>
> A previous attempt to tackle this problem was made using fwnode [1].
> However, this proved being way too invasive and it required
> modifications in both subsystems and drivers to support fwnode. First
> series did not lead to a consensus and multiple ideas to support this
> use-case were mentioned (ACPI overlay, fwnode, device-tree overlay).
> Since it only seemed that fwnode was not a totally silly idea, we
> continued on this way.
>
> However, on the series that added fwnode support to the reset subsystem,
> Rob Herring mentioned the fact that OF overlay might actually be the
> best way to probe PCI devices and populate platform drivers using this
> overlay. He also provided a branch containing some commits that helped

I need to go look at the various email threads mentioned above before I
continue reading this patch series.

I do have serious concerns with this approach. I need to investigate
more fully before I can determine whether the concerns are addressed
sufficiently.

To give some background to my longstanding response to similar proposals,
here is my old statement from https://elinux.org/Device_Tree_Reference:

Overlays
Mainline Linux Support
Run time overlay apply and run time overlay remove from user space are
not supported in the mainline kernel. There are out of tree patches
to implement this feature via an overlay manager. The overlay manager
is used successfully by many users for specific overlays on specific
boards with specific environments and use cases. However, there are many
issues with the Linux kernel overlay implementation due to incomplete and
incorrect code. The overlay manager has not been accepted in mainline due
to these issues. Once these issues are resolved, it is expected that some
method of run time overlay apply and overlay removal from user space will
be supported by the Linux kernel.

There is a possibility that overlay apply and overlay remove support could
be phased in slowly, feature by feature, as specific issues are resolved.

Those are my words, not Rob's, but I thought that Rob was somewhat in
agreement with those ideas. Apparently either I misunderstood his
thoughts, or his thoughts have evolved, since you say that he suggested
overlays in one of the above email threads, and you list him as a
co-developer.

In the next line of the elinux info above, I provide a link to more
detailed information:

Frank's thoughts on what is needed to complete basic overlay support

The link goes to:

https://elinux.org/Frank%27s_Evolving_Overlay_Thoughts

That page provides an incomplete list of issues to be resolved, and
a list of "what has been completed".

Please read through the elinux.org page to understand the basis of
my concerns.

If after reading through the related email threads, and this thread,
I agree that overlays are a good approach, I am already aware of areas
that I will have specific comments about on the patches in this thread.

-Frank

> to implement this idea on a x86 setup. Due to the dynamic nature of PCI
> bus enumeration, some other modifications needs to be applied on the
> overlay to apply it correctly. Indeed, it is necessary to modify the
> target node of the fragments to apply them correctly on the PCI device
> that was probed. Moreover, the 'ranges' must be set according to the
> BAR addresses in order to remap devices to the correct PCI addresses.
> These modifications are the located into the driver since the remapping
> is something that is specific to each driver.
>
> After modifications, this proves to be successful and a full support of
> the aforementioned lan966x PCI card was added. The modifications to
> support that (apply an overlay on a dynamically created PCI of_node) are
> actually minimal and only touches a few places (pci/of.c). This series
> contains the 3 commits that are necessary to do that:
>
> - First commit creates the root node if not present on a x86 setup
> without a firmware provided device-tree.
> - Second one dynamically creates the PCI bus/device device-tree node
> hierarchy using changeset API.
> - Finally a last commit allows to apply an overlay by targeting a
> specific device-tree node.
>
> Other problems that might be considered with this series is the fact
> that CONFIG_OF is not enabled by default on x86 configuration and thus
> the driver can't be used without rebuilding a complete kernel with
> CONFIG_OF=y. In order to fully support this PCIe card and allow lambda
> user to use this driver, it would be almost mandatory to enable
> CONFIG_OF by default on such setup.
>
> A driver using this support was added and can be seen at [3]. This
> driver embeds a builtin overlay and applies it to the live tree using
> of_overlay_fdt_apply_to_node(). An interrupt driver is also included and
> associated to a node that is added by the overlay. The driver also
> insert a specific "ranges" property based on the BAR values which allows
> to remap the device-tree node to BAR addresses dynamically. This is
> needed to allow applying the overlay without depending on specific
> enumeration BAR addresses.
>
> This series was tested on a x86 kernel using CONFIG_OF under a virtual
> machine using PCI passthrough.
>
> Link: [1] https://lore.kernel.org/lkml/[email protected]/t/
> Link: [2] https://lore.kernel.org/lkml/[email protected]/T/
> Link: [3] https://github.com/clementleger/linux/tree/lan966x/of_support
>
> Clément Léger (3):
> of: always populate a root node
> PCI: of: create DT nodes for PCI devices if they do not exists
> of: overlay: add of_overlay_fdt_apply_to_node()
>
> drivers/of/base.c | 16 +++-
> drivers/of/overlay.c | 21 +++--
> drivers/pci/of.c | 184 +++++++++++++++++++++++++++++++++++++++++++
> include/linux/of.h | 17 +++-
> 4 files changed, 224 insertions(+), 14 deletions(-)
>


2022-05-09 12:41:41

by Clément Léger

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Fri, 6 May 2022 13:33:22 -0500,
Frank Rowand <[email protected]> wrote:

> On 4/27/22 04:44, Clément Léger wrote:
> > This series adds foundation work to support the lan9662 PCIe card.
> > This card is meant to be used an ethernet switch with 2 x RJ45
> > ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
> > different ways:
> >
> > - It can run Linux by itself, on ARM64 cores included in the SoC.
> > This use-case of the lan966x is currently being upstreamed, using a
> > traditional Device Tree representation of the lan996x HW blocks
> > [1] A number of drivers for the different IPs of the SoC have
> > already been merged in upstream Linux.
> >
> > - It can be used as a PCIe endpoint, connected to a separate
> > platform that acts as the PCIe root complex. In this case, all the
> > devices that are embedded on this SoC are exposed through PCIe BARs
> > and the ARM64 cores of the SoC are not used. Since this is a PCIe
> > card, it can be plugged on any platform, of any architecture
> > supporting PCIe.
> >
> > The problem that arose is that we want to reuse all the existing OF
> > compatible drivers that are used when in SoC mode to instantiate the
> > PCI device when in PCIe endpoint mode.
> >
> > A previous attempt to tackle this problem was made using fwnode [1].
> > However, this proved being way too invasive and it required
> > modifications in both subsystems and drivers to support fwnode.
> > First series did not lead to a consensus and multiple ideas to
> > support this use-case were mentioned (ACPI overlay, fwnode,
> > device-tree overlay). Since it only seemed that fwnode was not a
> > totally silly idea, we continued on this way.
> >
> > However, on the series that added fwnode support to the reset
> > subsystem, Rob Herring mentioned the fact that OF overlay might
> > actually be the best way to probe PCI devices and populate platform
> > drivers using this overlay. He also provided a branch containing
> > some commits that helped
>
> I need to go look at the various email threads mentioned above before
> I continue reading this patch series.
>
> I do have serious concerns with this approach. I need to investigate
> more fully before I can determine whether the concerns are addressed
> sufficiently.
>
> To give some background to my longstanding response to similar
> proposals, here is my old statement from
> https://elinux.org/Device_Tree_Reference:
>
> Overlays
> Mainline Linux Support
> Run time overlay apply and run time overlay remove from user space
> are not supported in the mainline kernel. There are out of tree
> patches to implement this feature via an overlay manager. The overlay
> manager is used successfully by many users for specific overlays on
> specific boards with specific environments and use cases. However,
> there are many issues with the Linux kernel overlay implementation
> due to incomplete and incorrect code. The overlay manager has not
> been accepted in mainline due to these issues. Once these issues are
> resolved, it is expected that some method of run time overlay apply
> and overlay removal from user space will be supported by the Linux
> kernel.
>
> There is a possibility that overlay apply and overlay remove
> support could be phased in slowly, feature by feature, as specific
> issues are resolved.

Hi Frank,

This work uses the kernel space interface (of_overlay_fdt_apply())
and the device tree overlay is built into the driver. This interface was
used until recently by the rcar-du driver. While it was the only user (sic),
this seems to work pretty well and I was able to use it successfully.

Moreover, this support targets PCI devices. These devices are really
well contained and do not interfere with other devices. This actually
consists of adding a complete subtree into the existing device-tree,
which limits the interactions between potentially platform-provided
devices and PCI ones.

Clément

>
> Those are my words, not Rob's, but I thought that Rob was somewhat in
> agreement with those ideas. Apparently either I misunderstood his
> thoughts, or his thoughts have evolved, since you say that he
> suggested overlays in one of the above email threads, and you list
> him as a co-developer.
>
> In the next line of the elinux info above, I provide a link to more
> detailed information:
>
> Frank's thoughts on what is needed to complete basic overlay
> support
>
> The link goes to:
>
> https://elinux.org/Frank%27s_Evolving_Overlay_Thoughts
>
> That page provides an incomplete list of issues to be resolved, and
> a list of "what has been completed".
>
> Please read through the elinux.org page to understand the basis of
> my concerns.
>
> If after reading through the related email threads, and this thread,
> I agree that overlays are a good approach, I am already aware of areas
> that I will have specific comments about on the patches in this
> thread.
>
> -Frank
>
> > to implement this idea on a x86 setup. Due to the dynamic nature of
> > PCI bus enumeration, some other modifications needs to be applied
> > on the overlay to apply it correctly. Indeed, it is necessary to
> > modify the target node of the fragments to apply them correctly on
> > the PCI device that was probed. Moreover, the 'ranges' must be set
> > according to the BAR addresses in order to remap devices to the
> > correct PCI addresses. These modifications are the located into the
> > driver since the remapping is something that is specific to each
> > driver.
> >
> > After modifications, this proves to be successful and a full
> > support of the aforementioned lan966x PCI card was added. The
> > modifications to support that (apply an overlay on a dynamically
> > created PCI of_node) are actually minimal and only touches a few
> > places (pci/of.c). This series contains the 3 commits that are
> > necessary to do that:
> >
> > - First commit creates the root node if not present on a x86 setup
> > without a firmware provided device-tree.
> > - Second one dynamically creates the PCI bus/device device-tree node
> > hierarchy using changeset API.
> > - Finally a last commit allows to apply an overlay by targeting a
> > specific device-tree node.
> >
> > Other problems that might be considered with this series is the fact
> > that CONFIG_OF is not enabled by default on x86 configuration and
> > thus the driver can't be used without rebuilding a complete kernel
> > with CONFIG_OF=y. In order to fully support this PCIe card and
> > allow lambda user to use this driver, it would be almost mandatory
> > to enable CONFIG_OF by default on such setup.
> >
> > A driver using this support was added and can be seen at [3]. This
> > driver embeds a builtin overlay and applies it to the live tree
> > using of_overlay_fdt_apply_to_node(). An interrupt driver is also
> > included and associated to a node that is added by the overlay. The
> > driver also insert a specific "ranges" property based on the BAR
> > values which allows to remap the device-tree node to BAR addresses
> > dynamically. This is needed to allow applying the overlay without
> > depending on specific enumeration BAR addresses.
> >
> > This series was tested on a x86 kernel using CONFIG_OF under a
> > virtual machine using PCI passthrough.
> >
> > Link: [1] https://lore.kernel.org/lkml/[email protected]/t/
> > Link: [2]
> > https://lore.kernel.org/lkml/[email protected]/T/
> > Link: [3]
> > https://github.com/clementleger/linux/tree/lan966x/of_support
> >
> > Clément Léger (3):
> > of: always populate a root node
> > PCI: of: create DT nodes for PCI devices if they do not exists
> > of: overlay: add of_overlay_fdt_apply_to_node()
> >
> > drivers/of/base.c | 16 +++-
> > drivers/of/overlay.c | 21 +++--
> > drivers/pci/of.c | 184
> > +++++++++++++++++++++++++++++++++++++++++++ include/linux/of.h |
> > 17 +++- 4 files changed, 224 insertions(+), 14 deletions(-)
> >
>


2022-05-09 16:02:31

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On 5/9/22 07:16, Clément Léger wrote:
> On Fri, 6 May 2022 13:33:22 -0500,
> Frank Rowand <[email protected]> wrote:
>
>> On 4/27/22 04:44, Clément Léger wrote:
>>> This series adds foundation work to support the lan9662 PCIe card.
>>> This card is meant to be used an ethernet switch with 2 x RJ45
>>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
>>> different ways:
>>>
>>> - It can run Linux by itself, on ARM64 cores included in the SoC.
>>> This use-case of the lan966x is currently being upstreamed, using a
>>> traditional Device Tree representation of the lan996x HW blocks
>>> [1] A number of drivers for the different IPs of the SoC have
>>> already been merged in upstream Linux.
>>>
>>> - It can be used as a PCIe endpoint, connected to a separate
>>> platform that acts as the PCIe root complex. In this case, all the
>>> devices that are embedded on this SoC are exposed through PCIe BARs
>>> and the ARM64 cores of the SoC are not used. Since this is a PCIe
>>> card, it can be plugged on any platform, of any architecture
>>> supporting PCIe.
>>>
>>> The problem that arose is that we want to reuse all the existing OF
>>> compatible drivers that are used when in SoC mode to instantiate the
>>> PCI device when in PCIe endpoint mode.
>>>
>>> A previous attempt to tackle this problem was made using fwnode [1].
>>> However, this proved being way too invasive and it required
>>> modifications in both subsystems and drivers to support fwnode.
>>> First series did not lead to a consensus and multiple ideas to
>>> support this use-case were mentioned (ACPI overlay, fwnode,
>>> device-tree overlay). Since it only seemed that fwnode was not a
>>> totally silly idea, we continued on this way.
>>>
>>> However, on the series that added fwnode support to the reset
>>> subsystem, Rob Herring mentioned the fact that OF overlay might
>>> actually be the best way to probe PCI devices and populate platform
>>> drivers using this overlay. He also provided a branch containing
>>> some commits that helped
>>
>> I need to go look at the various email threads mentioned above before
>> I continue reading this patch series.
>>
>> I do have serious concerns with this approach. I need to investigate
>> more fully before I can determine whether the concerns are addressed
>> sufficiently.
>>
>> To give some background to my longstanding response to similar
>> proposals, here is my old statement from
>> https://elinux.org/Device_Tree_Reference:
>>
>> Overlays
>> Mainline Linux Support
>> Run time overlay apply and run time overlay remove from user space
>> are not supported in the mainline kernel. There are out of tree
>> patches to implement this feature via an overlay manager. The overlay
>> manager is used successfully by many users for specific overlays on
>> specific boards with specific environments and use cases. However,
>> there are many issues with the Linux kernel overlay implementation
>> due to incomplete and incorrect code. The overlay manager has not
>> been accepted in mainline due to these issues. Once these issues are
>> resolved, it is expected that some method of run time overlay apply
>> and overlay removal from user space will be supported by the Linux
>> kernel.
>>
>> There is a possibility that overlay apply and overlay remove
>> support could be phased in slowly, feature by feature, as specific
>> issues are resolved.
>
> Hi Frank,
>
> This work uses the kernel space interface (of_overlay_fdt_apply())
> and the device tree overlay is built into the driver. This interface was
> used until recently by the rcar-du driver. While it was the only user (sic),
> this seems to work pretty well and I was able to use it successfully.

Yes, of_overlay_fdt_apply() was used by one driver. But that driver
was explicitly recognized as a grandfathered exception, and not an
example for other users. It was finally removed in 5.18-rc1.

You may have used of_overlay_fdt_apply() in a specific use case at
a specific kernel version, but if you read through the references
I provided, you will find that applying overlays after the kernel
boots is a fragile endeavor, with the expectation that bugs and problems
will be exposed as usage changes (a simple example is that my adding
some overlay notifier unittests exposed yet another memory leak).

The reference that I provided also shows how the overlay code is
being improved over time. Even with improvements, it will remain
fragile.

>
> Moreover, this support targets PCI devices. These devices are really
> well contained and do not interfere with other devices. This actually
> consists of adding a complete subtree into the existing device-tree,
> which limits the interactions between potentially platform-provided
> devices and PCI ones.

Yes, it is very important that you have described this fact, both
here and in other emails. Thank you for that information, it does help
understanding the alternatives.

I've hesitated in recommending a specific solution before better
understanding the architecture of your pcie board and drivers, but
I've delayed too long, so I am going to go ahead and mention one
possibility at the risk of not yet fully understanding the situation.

On the surface, it appears that your need might be well met by having
a base devicetree that describes all of the pcie nodes, but with each
node having a status of "disabled" so that they will not be used.
Have a devicetree overlay describing the pcie card (as you proposed),
where the overlay also includes a status of "ok" for the pcie node.
Applying the overlay, with a method of redirecting the target to a
specific pcie node would change the status of the pcie node to enable
its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
to allow a modified target, so not a new concept from me.) My suggestion
is to apply the overlay devicetree to the base devicetree before the
combined FDT devicetree is passed to the kernel at boot. The overlay
apply could be done by several different entities. It could be before
the bootloader executes, it could be done by the bootloader, it could
be done by a shim between the bootloader and the kernel. This method
avoids all of the issues of applying an overlay to a running system
that I find problematic. It is also a method used by the U-boot
bootloader, as an example.

The other big issue is mixing ACPI and devicetree on a single system.
Historically, the Linux devicetree community has not been receptive
to the idea of that mixture. Your example might be a specific case
where the two can be isolated from each other, or maybe not. (For
disclosure, I am essentially ACPI ignorant.) I suspect that mixing
ACPI and devicetree is a recipe for disaster in the general case.

More to come later as I finish reading through the various threads.

-Frank

>
> Clément
>
>>
>> Those are my words, not Rob's, but I thought that Rob was somewhat in
>> agreement with those ideas. Apparently either I misunderstood his
>> thoughts, or his thoughts have evolved, since you say that he
>> suggested overlays in one of the above email threads, and you list
>> him as a co-developer.
>>
>> In the next line of the elinux info above, I provide a link to more
>> detailed information:
>>
>> Frank's thoughts on what is needed to complete basic overlay
>> support
>>
>> The link goes to:
>>
>> https://elinux.org/Frank%27s_Evolving_Overlay_Thoughts
>>
>> That page provides an incomplete list of issues to be resolved, and
>> a list of "what has been completed".
>>
>> Please read through the elinux.org page to understand the basis of
>> my concerns.
>>
>> If after reading through the related email threads, and this thread,
>> I agree that overlays are a good approach, I am already aware of areas
>> that I will have specific comments about on the patches in this
>> thread.
>>
>> -Frank
>>
>>> to implement this idea on a x86 setup. Due to the dynamic nature of
>>> PCI bus enumeration, some other modifications needs to be applied
>>> on the overlay to apply it correctly. Indeed, it is necessary to
>>> modify the target node of the fragments to apply them correctly on
>>> the PCI device that was probed. Moreover, the 'ranges' must be set
>>> according to the BAR addresses in order to remap devices to the
>>> correct PCI addresses. These modifications are the located into the
>>> driver since the remapping is something that is specific to each
>>> driver.
>>>
>>> After modifications, this proves to be successful and a full
>>> support of the aforementioned lan966x PCI card was added. The
>>> modifications to support that (apply an overlay on a dynamically
>>> created PCI of_node) are actually minimal and only touches a few
>>> places (pci/of.c). This series contains the 3 commits that are
>>> necessary to do that:
>>>
>>> - First commit creates the root node if not present on a x86 setup
>>> without a firmware provided device-tree.
>>> - Second one dynamically creates the PCI bus/device device-tree node
>>> hierarchy using changeset API.
>>> - Finally a last commit allows to apply an overlay by targeting a
>>> specific device-tree node.
>>>
>>> Other problems that might be considered with this series is the fact
>>> that CONFIG_OF is not enabled by default on x86 configuration and
>>> thus the driver can't be used without rebuilding a complete kernel
>>> with CONFIG_OF=y. In order to fully support this PCIe card and
>>> allow lambda user to use this driver, it would be almost mandatory
>>> to enable CONFIG_OF by default on such setup.
>>>
>>> A driver using this support was added and can be seen at [3]. This
>>> driver embeds a builtin overlay and applies it to the live tree
>>> using of_overlay_fdt_apply_to_node(). An interrupt driver is also
>>> included and associated to a node that is added by the overlay. The
>>> driver also insert a specific "ranges" property based on the BAR
>>> values which allows to remap the device-tree node to BAR addresses
>>> dynamically. This is needed to allow applying the overlay without
>>> depending on specific enumeration BAR addresses.
>>>
>>> This series was tested on a x86 kernel using CONFIG_OF under a
>>> virtual machine using PCI passthrough.
>>>
>>> Link: [1] https://lore.kernel.org/lkml/[email protected]/t/
>>> Link: [2]
>>> https://lore.kernel.org/lkml/[email protected]/T/
>>> Link: [3]
>>> https://github.com/clementleger/linux/tree/lan966x/of_support
>>>
>>> Clément Léger (3):
>>> of: always populate a root node
>>> PCI: of: create DT nodes for PCI devices if they do not exists
>>> of: overlay: add of_overlay_fdt_apply_to_node()
>>>
>>> drivers/of/base.c | 16 +++-
>>> drivers/of/overlay.c | 21 +++--
>>> drivers/pci/of.c | 184
>>> +++++++++++++++++++++++++++++++++++++++++++ include/linux/of.h |
>>> 17 +++- 4 files changed, 224 insertions(+), 14 deletions(-)
>>>
>>
>


2022-05-09 16:13:52

by Clément Léger

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

Le Mon, 9 May 2022 10:56:36 -0500,
Frank Rowand <[email protected]> a écrit :

> > Hi Frank,
> >
> > This work uses the kernel space interface (of_overlay_fdt_apply())
> > and the device tree overlay is built into the driver. This interface
> > was used until recently by the rcar-du driver. While being the only user
> > (sic), this seems to work pretty well and I was able to use it
> > successfully.
>
> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
> was explicitly recognized as a grandfathered exception, and not an
> example for other users. It was finally removed in 5.18-rc1.

I noticed that unfortunately.

>
> You may have used of_overlay_fdt_apply() in a specific use case at
> a specific kernel version, but if you read through the references
> I provided you will find that applying overlays after the kernel
> boots is a fragile endeavor, with expectations of bugs and problems
> being exposed as usage is changed (simple example is that my adding
> some overlay notifier unittests exposed yet another memory leak).
>
> The reference that I provided also shows how the overlay code is
> being improved over time. Even with improvements, it will remain
> fragile.

Acked.

>
> >
> > Moreover, this support is targeted at PCI devices. These
> > devices are really well contained and do not interfere with other
> > devices. This actually consists in adding a complete subtree into
> > the existing device-tree and thus it limits the interactions between
> > potentially platform provided devices and PCI ones.
>
> Yes, it is very important that you have described this fact, both
> here and in other emails. Thank you for that information; it does
> help in understanding the alternatives.
>
> I've hesitated in recommending a specific solution before better
> understanding the architecture of your pcie board and drivers, but
> I've delayed too long, so I am going to go ahead and mention one
> possibility at the risk of not yet fully understanding the situation.
>
> On the surface, it appears that your need might be well met by having
> a base devicetree that describes all of the pcie nodes, but with each
> node having a status of "disabled" so that they will not be used.
> Have a devicetree overlay describing the pcie card (as you proposed),
> where the overlay also includes a status of "ok" for the pcie node.
> Applying the overlay, with a method of redirecting the target to a
> specific pcie node would change the status of the pcie node to enable
> its use. (You have already proposed a patch to modify
> of_overlay_fdt_apply() to allow a modified target, so not a new
> concept from me.) My suggestion is to apply the overlay devicetree
> to the base devicetree before the combined FDT devicetree is passed
> to the kernel at boot. The overlay apply could be done by several
> different entities. It could be before the bootloader executes, it
> could be done by the bootloader, it could be done by a shim between
> the bootloader and the kernel. This method avoids all of the issues
> of applying an overlay to a running system that I find problematic.
> It is also a method used by the U-boot bootloader, as an example.

Ok, that is actually possible on a system that is given a device-tree
by the bootloader. But on a system that is described using ACPI (such
as x86), this is much more difficult (at least to my knowledge)...
We want this feature to be easy to use for the end user. Adding such a
configuration, which also differs between architectures, is
clearly not so easy to set up.

Moreover, since PCI is meant to be "Plug and Play", such
configuration would completely break that. If the user switches the
PCIe card from one slot to another, the bootloader configuration will
need to be modified. This seems a big no-go for me (and for the user).

>
> The other big issue is mixing ACPI and devicetree on a single system.
> Historically, the Linux devicetree community has not been receptive
> to the idea of that mixture. Your example might be a specific case
> where the two can be isolated from each other, or maybe not. (For
> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> ACPI and devicetree is a recipe for disaster in the general case.

Agreed; on that fact, it did raise some eyebrows, and it was for that
specific concern that I initially proposed the fwnode solution.
Honestly, the fwnode conversion represents a lot of work (hundreds of
lines easily) and requires a conversion of all the subsystems that are
not fwnode ready (spoiler: almost all of them are not ready).

After implementing Rob's solution, the device-tree overlay really seems
the cleanest approach to me and requires far fewer modifications.

>
> More to come later as I finish reading through the various threads.

Ok, thanks for your time!

Clément

>
> -Frank


2022-05-09 17:10:19

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Mon, May 09, 2022 at 06:09:17PM +0200, Clément Léger wrote:
> Le Mon, 9 May 2022 10:56:36 -0500,
> Frank Rowand <[email protected]> a écrit :

...

> > On the surface, it appears that your need might be well met by having
> > a base devicetree that describes all of the pcie nodes, but with each
> > node having a status of "disabled" so that they will not be used.
> > Have a devicetree overlay describing the pcie card (as you proposed),
> > where the overlay also includes a status of "ok" for the pcie node.
> > Applying the overlay, with a method of redirecting the target to a
> > specific pcie node would change the status of the pcie node to enable
> > its use. (You have already proposed a patch to modify
> > of_overlay_fdt_apply() to allow a modified target, so not a new
> > concept from me.) My suggestion is to apply the overlay devicetree
> > to the base devicetree before the combined FDT devicetree is passed
> > to the kernel at boot. The overlay apply could be done by several
> > different entities. It could be before the bootloader executes, it
> > could be done by the bootloader, it could be done by a shim between
> > the bootloader and the kernel. This method avoids all of the issues
> > of applying an overlay to a running system that I find problematic.
> > It is also a method used by the U-boot bootloader, as an example.
>
> Ok, that is actually possible on a system that is given a device-tree
> by the bootloader. But on a system that is described using ACPI (such
> as x86), this is much more difficult (at least to my knowledge)...
> We want this feature to be easy to use for the end user. Adding such a
> configuration, which also differs between architectures, is
> clearly not so easy to set up.
>
> Moreover, since PCI is meant to be "Plug and Play", such
> configuration would completely break that. If the user switches the
> PCIe card from one slot to another, the bootloader configuration will
> need to be modified. This seems a big no-go for me (and for the user).

The main problem here is that Linux does not support hotplugging for the
devices behind non-hotpluggable buses. You need to develop something to
say that the device tree (in terms of hardware) can morph at run-time
transparently to the user. I think the closest one is what FPGA does,
or at least should do.

> > The other big issue is mixing ACPI and devicetree on a single system.
> > Historically, the Linux devicetree community has not been receptive
> > to the idea of that mixture. Your example might be a specific case
> > where the two can be isolated from each other, or maybe not. (For
> > disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> > ACPI and devicetree is a recipe for disaster in the general case.
>
> Agreed; on that fact, it did raise some eyebrows, and it was for that
> specific concern that I initially proposed the fwnode solution.
> Honestly, the fwnode conversion represents a lot of work (hundreds of
> lines easily) and requires a conversion of all the subsystems that are
> not fwnode ready (spoiler: almost all of them are not ready).

In either case you need to provide a format that would be suitable for
DT-based as well as ACPI-based platforms.

--
With Best Regards,
Andy Shevchenko



2022-05-09 18:44:22

by Rob Herring

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Mon, May 9, 2022 at 10:56 AM Frank Rowand <[email protected]> wrote:
>
> On 5/9/22 07:16, Clément Léger wrote:
> > Le Fri, 6 May 2022 13:33:22 -0500,
> > Frank Rowand <[email protected]> a écrit :
> >
> >> On 4/27/22 04:44, Clément Léger wrote:
> >>> This series adds foundation work to support the lan9662 PCIe card.
> >>> This card is meant to be used an ethernet switch with 2 x RJ45
> >>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
> >>> different ways:
> >>>
> >>> - It can run Linux by itself, on ARM64 cores included in the SoC.
> >>> This use-case of the lan966x is currently being upstreamed, using a
> >>> traditional Device Tree representation of the lan996x HW blocks
> >>> [1] A number of drivers for the different IPs of the SoC have
> >>> already been merged in upstream Linux.
> >>>
> >>> - It can be used as a PCIe endpoint, connected to a separate
> >>> platform that acts as the PCIe root complex. In this case, all the
> >>> devices that are embedded on this SoC are exposed through PCIe BARs
> >>> and the ARM64 cores of the SoC are not used. Since this is a PCIe
> >>> card, it can be plugged on any platform, of any architecture
> >>> supporting PCIe.
> >>>
> >>> The problem that arose is that we want to reuse all the existing OF
> >>> compatible drivers that are used when in SoC mode to instantiate the
> >>> PCI device when in PCIe endpoint mode.
> >>>
> >>> A previous attempt to tackle this problem was made using fwnode [1].
> >>> However, this proved being way too invasive and it required
> >>> modifications in both subsystems and drivers to support fwnode.
> >>> First series did not lead to a consensus and multiple ideas to
> >>> support this use-case were mentioned (ACPI overlay, fwnode,
> >>> device-tree overlay). Since it only seemed that fwnode was not a
> >>> totally silly idea, we continued on this way.
> >>>
> >>> However, on the series that added fwnode support to the reset
> >>> subsystem, Rob Herring mentioned the fact that OF overlay might
> >>> actually be the best way to probe PCI devices and populate platform
> >>> drivers using this overlay. He also provided a branch containing
> >>> some commits that helped
> >>
> >> I need to go look at the various email threads mentioned above before
> >> I continue reading this patch series.
> >>
> >> I do have serious concerns with this approach. I need to investigate
> >> more fully before I can determine whether the concerns are addressed
> >> sufficiently.
> >>
> >> To give some background to my longstanding response to similar
> >> proposals, here is my old statement from
> >> https://elinux.org/Device_Tree_Reference:
> >>
> >> Overlays
> >> Mainline Linux Support
> >> Run time overlay apply and run time overlay remove from user space
> >> are not supported in the mainline kernel. There are out of tree
> >> patches to implement this feature via an overlay manager. The overlay
> >> manager is used successfully by many users for specific overlays on
> >> specific boards with specific environments and use cases. However,
> >> there are many issues with the Linux kernel overlay implementation
> >> due to incomplete and incorrect code. The overlay manager has not
> >> been accepted in mainline due to these issues. Once these issues are
> >> resolved, it is expected that some method of run time overlay apply
> >> and overlay removal from user space will be supported by the Linux
> >> kernel.
> >>
> >> There is a possibility that overlay apply and overlay remove
> >> support could be phased in slowly, feature by feature, as specific
> >> issues are resolved.
> >
> > Hi Frank,
> >
> > This work uses the kernel space interface (of_overlay_fdt_apply())
> > and the device tree overlay is built into the driver. This interface was
> > used until recently by the rcar-du driver. While being the only user (sic),
> > this seems to work pretty well and I was able to use it successfully.
>
> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
> was explicitly recognized as a grandfathered exception, and not an
> example for other users. It was finally removed in 5.18-rc1.

What API are folks supposed to use exactly? That's the only API to
apply an overlay. I thought the FPGA mgr code was using it too, but
it's not. It doesn't look to me like the upstream code there even
works as nothing applies the overlays AFAICT. If there are no in
kernel users applying overlays, then let's remove the overlay code. I
hear it has lots of problems.

I am *way* more comfortable with driver specific applying of overlays
than any generic mechanism. I don't think we'll ever have a generic
mechanism. At least not one that doesn't end up with the same usage
constraints driver specific cases would have.
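
For concreteness, the driver-specific pattern under discussion boils
down to something like the following sketch (not upstream code: the
blob symbols and the lan966x naming are assumptions, and the call shown
is the three-argument of_overlay_fdt_apply() as of v5.18; the series
adds an of_overlay_fdt_apply_to_node() variant so the fragments can be
retargeted at the dynamically created node for the PCI device):

#include <linux/of.h>
#include <linux/pci.h>

/* hypothetical linker symbols for an overlay blob built into the driver */
extern char __dtbo_lan966x_pci_begin[];
extern char __dtbo_lan966x_pci_end[];

static int lan966x_pci_apply_overlay(struct pci_dev *pdev)
{
        u32 size = __dtbo_lan966x_pci_end - __dtbo_lan966x_pci_begin;
        int ovcs_id = 0;
        int ret;

        /* apply the builtin overlay to the live tree from probe() */
        ret = of_overlay_fdt_apply(__dtbo_lan966x_pci_begin, size, &ovcs_id);
        if (ret)
                dev_err(&pdev->dev, "failed to apply overlay: %d\n", ret);

        return ret;
}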


> You may have used of_overlay_fdt_apply() in a specific use case at
> a specific kernel version, but if you read through the references
> I provided you will find that applying overlays after the kernel
> boots is a fragile endeavor, with expectations of bugs and problems
> being exposed as usage is changed (simple example is that my adding
> some overlay notifier unittests exposed yet another memory leak).

The exception being specific drivers that are only applying overlays
isolated to their device as this usecase is. The usecase here is
entirely self-contained. The base tree is only what's needed to
represent the PCI device.

> The reference that I provided also shows how the overlay code is
> being improved over time. Even with improvements, it will remain
> fragile.
>
> >
> > Moreover, this support is targeted at PCI devices. These
> > devices are really well contained and do not interfere with other
> > devices. This actually consists in adding a complete subtree into the
> > existing device-tree and thus it limits the interactions between
> > potentially platform provided devices and PCI ones.
>
> Yes, it is very important that you have described this fact, both
> here and in other emails. Thank you for that information; it does help
> in understanding the alternatives.
>
> I've hesitated in recommending a specific solution before better
> understanding the architecture of your pcie board and drivers, but
> I've delayed too long, so I am going to go ahead and mention one
> possibility at the risk of not yet fully understanding the situation.
>
> On the surface, it appears that your need might be well met by having
> a base devicetree that describes all of the pcie nodes, but with each
> node having a status of "disabled" so that they will not be used.
> Have a devicetree overlay describing the pcie card (as you proposed),
> where the overlay also includes a status of "ok" for the pcie node.
> Applying the overlay, with a method of redirecting the target to a
> specific pcie node would change the status of the pcie node to enable
> its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
> to allow a modified target, so not a new concept from me.) My suggestion
> is to apply the overlay devicetree to the base devicetree before the
> combined FDT devicetree is passed to the kernel at boot. The overlay
> apply could be done by several different entities. It could be before
> the bootloader executes, it could be done by the bootloader, it could
> be done by a shim between the bootloader and the kernel. This method
> avoids all of the issues of applying an overlay to a running system
> that I find problematic. It is also a method used by the U-boot
> bootloader, as an example.

Adding a layer, the solution to all problems...

I don't think that's a workable solution unless all the components are
in one party's control. Given the desire to work on ACPI and DT based
systems, that doesn't sound like the case here.

> The other big issue is mixing ACPI and devicetree on a single system.
> Historically, the Linux devicetree community has not been receptive
> to the idea of that mixture. Your example might be a specific case
> where the two can be isolated from each other, or maybe not. (For
> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> ACPI and devicetree is a recipe for disaster in the general case.

The idea here is that what is described by ACPI and DT are disjoint, which I
think we can enforce. Enforcement comes from fwnode assuming it has
either an ACPI or a DT handle, but not both.
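
Expressed as code, that rule is roughly the following (a hypothetical
helper, not an existing kernel API; the point is only that a device
should carry a DT node or an ACPI companion, never both):

#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/of.h>

/* sketch of the "disjoint" rule: described by DT or by ACPI, not both */
static bool dev_fw_description_is_disjoint(struct device *dev)
{
        return !(dev->of_node && ACPI_COMPANION(dev));
}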

> More to come later as I finish reading through the various threads.

There is also the Xilinx folks wanting to support their PCI FPGA card
with DT for the FPGA contents on both ACPI and DT systems.

Rob

2022-05-09 20:36:04

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On 5/9/22 12:00, Andy Shevchenko wrote:
> On Mon, May 09, 2022 at 06:09:17PM +0200, Clément Léger wrote:
>> Le Mon, 9 May 2022 10:56:36 -0500,
>> Frank Rowand <[email protected]> a écrit :
>
> ...
>
>>> On the surface, it appears that your need might be well met by having
>>> a base devicetree that describes all of the pcie nodes, but with each
>>> node having a status of "disabled" so that they will not be used.
>>> Have a devicetree overlay describing the pcie card (as you proposed),
>>> where the overlay also includes a status of "ok" for the pcie node.
>>> Applying the overlay, with a method of redirecting the target to a
>>> specific pcie node would change the status of the pcie node to enable
>>> its use. (You have already proposed a patch to modify
>>> of_overlay_fdt_apply() to allow a modified target, so not a new
>>> concept from me.) My suggestion is to apply the overlay devicetree
>>> to the base devicetree before the combined FDT devicetree is passed
>>> to the kernel at boot. The overlay apply could be done by several
>>> different entities. It could be before the bootloader executes, it
>>> could be done by the bootloader, it could be done by a shim between
>>> the bootloader and the kernel. This method avoids all of the issues
>>> of applying an overlay to a running system that I find problematic.
>>> It is also a method used by the U-boot bootloader, as an example.
>>
>> Ok, that is actually possible on a system that is given a device-tree
>> by the bootloader. But on a system that is described using ACPI (such
>> as x86), this is much more difficult (at least to my knowledge)...
>> We want this feature to be easy to use for the end user. Adding such a
>> configuration, which also differs between architectures, is
>> clearly not so easy to set up.
>>
>> Moreover, since PCI is meant to be "Plug and Play", such
>> configuration would completely break that. If the user switches the
>> PCIe card from one slot to another, the bootloader configuration will
>> need to be modified. This seems a big no-go for me (and for the user).
>
> The main problem here is that Linux does not support hotplugging for the
> devices behind non-hotpluggable buses. You need to develop something to
> say that the device tree (in terms of hardware) can morph at run-time
> transparently to the user. I think the closest one is what FPGA does,
> or at least should do.

That is something I was not aware of yet. Is the card in question a
hotpluggable card? Do the systems that you anticipate plugging the
card into support hotplug?

-Frank

>
>>> The other big issue is mixing ACPI and devicetree on a single system.
>>> Historically, the Linux devicetree community has not been receptive
>>> to the idea of that mixture. Your example might be a specific case
>>> where the two can be isolated from each other, or maybe not. (For
>>> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
>>> ACPI and devicetree is a recipe for disaster in the general case.
>>
>> Agreed; on that fact, it did raise some eyebrows, and it was for that
>> specific concern that I initially proposed the fwnode solution.
>> Honestly, the fwnode conversion represents a lot of work (hundreds of
>> lines easily) and requires a conversion of all the subsystems that are
>> not fwnode ready (spoiler: almost all of them are not ready).
>
> In either case you need to provide a format that would be suitable for
> DT-based as well as ACPI-based platforms.
>


2022-05-09 20:40:06

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On 5/9/22 11:09, Clément Léger wrote:
> Le Mon, 9 May 2022 10:56:36 -0500,
> Frank Rowand <[email protected]> a écrit :
>
>>> Hi Frank,
>>>
>>> This work uses the kernel space interface (of_overlay_fdt_apply())
>>> and the device tree overlay is built into the driver. This interface
>>> was used until recently by the rcar-du driver. While being the only user
>>> (sic), this seems to work pretty well and I was able to use it
>>> successfully.
>>
>> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
>> was explicitly recognized as a grandfathered exception, and not an
>> example for other users. It was finally removed in 5.18-rc1.
>
> I noticed that unfortunately.
>
>>
>> You may have used of_overlay_fdt_apply() in a specific use case at
>> a specific kernel version, but if you read through the references
>> I provided you will find that applying overlays after the kernel
>> boots is a fragile endeavor, with expectations of bugs and problems
>> being exposed as usage is changed (simple example is that my adding
>> some overlay notifier unittests exposed yet another memory leak).
>>
>> The reference that I provided also shows how the overlay code is
>> being improved over time. Even with improvements, it will remain
>> fragile.
>
> Acked.
>
>>
>>>
>>> Moreover, this support is targeted at PCI devices. These
>>> devices are really well contained and do not interfere with other
>>> devices. This actually consists in adding a complete subtree into
>>> the existing device-tree and thus it limits the interactions between
>>> potentially platform provided devices and PCI ones.
>>
>> Yes, it is very important that you have described this fact, both
>> here and in other emails. Thank you for that information; it does
>> help in understanding the alternatives.
>>
>> I've hesitated in recommending a specific solution before better
>> understanding the architecture of your pcie board and drivers, but
>> I've delayed too long, so I am going to go ahead and mention one
>> possibility at the risk of not yet fully understanding the situation.
>>
>> On the surface, it appears that your need might be well met by having
>> a base devicetree that describes all of the pcie nodes, but with each
>> node having a status of "disabled" so that they will not be used.
>> Have a devicetree overlay describing the pcie card (as you proposed),
>> where the overlay also includes a status of "ok" for the pcie node.
>> Applying the overlay, with a method of redirecting the target to a
>> specific pcie node would change the status of the pcie node to enable
>> its use. (You have already proposed a patch to modify
>> of_overlay_fdt_apply() to allow a modified target, so not a new
>> concept from me.) My suggestion is to apply the overlay devicetree
>> to the base devicetree before the combined FDT devicetree is passed
>> to the kernel at boot. The overlay apply could be done by several
>> different entities. It could be before the bootloader executes, it
>> could be done by the bootloader, it could be done by a shim between
>> the bootloader and the kernel. This method avoids all of the issues
>> of applying an overlay to a running system that I find problematic.
>> It is also a method used by the U-boot bootloader, as an example.
>

Apologies if my following questions have already been answered in the
other threads...

> Ok, that is actually possible on a system that is given a device-tree
> by the bootloader. But on a system that is described using ACPI (such
> as x86), this is much more difficult (at least to my knowledge)...
> We want this feature to be easy to use for the end user. Adding such a
> configuration, which also differs between architectures, is
> clearly not so easy to set up.

Are you trying to make your card work on any ACPI based system (x86,
x86-64, etc)? Or do you have a specific model of computer that you
want to make this work on for a specific customer or appliance?

If for many arbitrary systems, can you limit it to one architecture
or sub-architecture?

>
> Moreover, since PCI is meant to be "Plug and Play", such
> configuration would completely break that. If the user switches the
> PCIe card from one slot to another, the bootloader configuration will
> need to be modified. This seems a big no-go for me (and for the user).

Yes. I was envisioning the pre-bootloader, bootloader, or Linux pre-boot
shim dynamically determining the slot containing the card, and applying
the overlay devicetree to the base devicetree, retargeting the overlay
to the proper location, before the Linux boot.

The base devicetree would be for a specific type of machine or family
of machines, just as is the case for all devicetree based systems.

>
>>
>> The other big issue is mixing ACPI and devicetree on a single system.
>> Historically, the Linux devicetree community has not been receptive
>> to the idea of that mixture. Your example might be a specific case
>> where the two can be isolated from each other, or maybe not. (For
>> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
>> ACPI and devicetree is a recipe for disaster in the general case.
>
> Agreed; on that fact, it did raise some eyebrows, and it was for that
> specific concern that I initially proposed the fwnode solution.
> Honestly, the fwnode conversion represents a lot of work (hundreds of
> lines easily) and requires a conversion of all the subsystems that are
> not fwnode ready (spoiler: almost all of them are not ready).
>
> After implementing Rob's solution, the device-tree overlay really seems
> the cleanest approach to me and requires far fewer modifications.
>
>>
>> More to come later as I finish reading through the various threads.
>
> Ok, thanks for your time!

You're welcome. I'll keep looking deeper into the previous threads.

-Frank

>
> Clément
>
>>
>> -Frank
>
> .


2022-05-09 20:44:14

by Frank Rowand

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On 5/9/22 13:36, Rob Herring wrote:
> On Mon, May 9, 2022 at 10:56 AM Frank Rowand <[email protected]> wrote:
>>
>> On 5/9/22 07:16, Clément Léger wrote:
>>> Le Fri, 6 May 2022 13:33:22 -0500,
>>> Frank Rowand <[email protected]> a écrit :
>>>
>>>> On 4/27/22 04:44, Clément Léger wrote:
>>>>> This series adds foundation work to support the lan9662 PCIe card.
>>>>> This card is meant to be used an ethernet switch with 2 x RJ45
>>>>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
>>>>> different ways:
>>>>>
>>>>> - It can run Linux by itself, on ARM64 cores included in the SoC.
>>>>> This use-case of the lan966x is currently being upstreamed, using a
>>>>> traditional Device Tree representation of the lan996x HW blocks
>>>>> [1] A number of drivers for the different IPs of the SoC have
>>>>> already been merged in upstream Linux.
>>>>>
>>>>> - It can be used as a PCIe endpoint, connected to a separate
>>>>> platform that acts as the PCIe root complex. In this case, all the
>>>>> devices that are embedded on this SoC are exposed through PCIe BARs
>>>>> and the ARM64 cores of the SoC are not used. Since this is a PCIe
>>>>> card, it can be plugged on any platform, of any architecture
>>>>> supporting PCIe.
>>>>>
>>>>> The problem that arose is that we want to reuse all the existing OF
>>>>> compatible drivers that are used when in SoC mode to instantiate the
>>>>> PCI device when in PCIe endpoint mode.
>>>>>
>>>>> A previous attempt to tackle this problem was made using fwnode [1].
>>>>> However, this proved being way too invasive and it required
>>>>> modifications in both subsystems and drivers to support fwnode.
>>>>> First series did not lead to a consensus and multiple ideas to
>>>>> support this use-case were mentioned (ACPI overlay, fwnode,
>>>>> device-tree overlay). Since it only seemed that fwnode was not a
>>>>> totally silly idea, we continued on this way.
>>>>>
>>>>> However, on the series that added fwnode support to the reset
>>>>> subsystem, Rob Herring mentioned the fact that OF overlay might
>>>>> actually be the best way to probe PCI devices and populate platform
>>>>> drivers using this overlay. He also provided a branch containing
>>>>> some commits that helped
>>>>
>>>> I need to go look at the various email threads mentioned above before
>>>> I continue reading this patch series.
>>>>
>>>> I do have serious concerns with this approach. I need to investigate
>>>> more fully before I can determine whether the concerns are addressed
>>>> sufficiently.
>>>>
>>>> To give some background to my longstanding response to similar
>>>> proposals, here is my old statement from
>>>> https://elinux.org/Device_Tree_Reference:
>>>>
>>>> Overlays
>>>> Mainline Linux Support
>>>> Run time overlay apply and run time overlay remove from user space
>>>> are not supported in the mainline kernel. There are out of tree
>>>> patches to implement this feature via an overlay manager. The overlay
>>>> manager is used successfully by many users for specific overlays on
>>>> specific boards with specific environments and use cases. However,
>>>> there are many issues with the Linux kernel overlay implementation
>>>> due to incomplete and incorrect code. The overlay manager has not
>>>> been accepted in mainline due to these issues. Once these issues are
>>>> resolved, it is expected that some method of run time overlay apply
>>>> and overlay removal from user space will be supported by the Linux
>>>> kernel.
>>>>
>>>> There is a possibility that overlay apply and overlay remove
>>>> support could be phased in slowly, feature by feature, as specific
>>>> issues are resolved.
>>>
>>> Hi Frank,
>>>
>>> This work uses the kernel space interface (of_overlay_fdt_apply())
>>> and the device tree overlay is built into the driver. This interface was
>>> used until recently by the rcar-du driver. While being the only user (sic),
>>> this seems to work pretty well and I was able to use it successfully.
>>
>> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
>> was explicitly recognized as a grandfathered exception, and not an
>> example for other users. It was finally removed in 5.18-rc1.
>
> What API are folks supposed to use exactly? That's the only API to
> apply an overlay.

Yes, that is the API designed to be used if overlays are applied
after the kernel has booted.

> I thought the FPGA mgr code was using it too, but

That was my understanding too.

> it's not. It doesn't look to me like the upstream code there even
> works as nothing applies the overlays AFAICT. If there are no in
> kernel users applying overlays, then let's remove the overlay code. I
> hear it has lots of problems.

I would not object to doing that. But I suspect there are other people
who might. I much prefer that overlays be applied before boot.

>
> I am *way* more comfortable with driver specific applying of overlays
> than any generic mechanism. I don't think we'll ever have a generic
> mechanism. At least not one that doesn't end up with the same usage
> constraints driver specific cases would have.

Yes, generic use leads to a lot of the complications of implementing
runtime overlays.

>
>
>> You may have used of_overlay_fdt_apply() in a specific use case at
>> a specific kernel version, but if you read through the references
>> I provided you will find that applying overlays after the kernel
>> boots is a fragile endeavor, with expectations of bugs and problems
>> being exposed as usage is changed (simple example is that my adding
>> some overlay notifier unittests exposed yet another memory leak).
>
> The exception being specific drivers that are only applying overlays
> isolated to their device as this usecase is. The usecase here is
> entirely self-contained. The base tree is only what's needed to
> represent the PCI device.

This might be the best example of such a use case, as it is
claimed that the drivers for the devices on the board do not access
anything on the host other than the pcie slot.

>
>> The reference that I provided also shows how the overlay code is
>> being improved over time. Even with improvements, it will remain
>> fragile.
>>
>>>
>>> Moreover, this support is targeted at PCI devices. These
>>> devices are really well contained and do not interfere with other
>>> devices. This actually consists in adding a complete subtree into the
>>> existing device-tree and thus it limits the interactions between
>>> potentially platform provided devices and PCI ones.
>>
>> Yes, it is very important that you have described this fact, both
>> here and in other emails. Thank you for that information; it does help
>> in understanding the alternatives.
>>
>> I've hesitated in recommending a specific solution before better
>> understanding the architecture of your pcie board and drivers, but
>> I've delayed too long, so I am going to go ahead and mention one
>> possibility at the risk of not yet fully understanding the situation.
>>
>> On the surface, it appears that your need might be well met by having
>> a base devicetree that describes all of the pcie nodes, but with each
>> node having a status of "disabled" so that they will not be used.
>> Have a devicetree overlay describing the pcie card (as you proposed),
>> where the overlay also includes a status of "ok" for the pcie node.
>> Applying the overlay, with a method of redirecting the target to a
>> specific pcie node would change the status of the pcie node to enable
>> its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
>> to allow a modified target, so not a new concept from me.) My suggestion
>> is to apply the overlay devicetree to the base devicetree before the
>> combined FDT devicetree is passed to the kernel at boot. The overlay
>> apply could be done by several different entities. It could be before
>> the bootloader executes, it could be done by the bootloader, it could
>> be done by a shim between the bootloader and the kernel. This method
>> avoids all of the issues of applying an overlay to a running system
>> that I find problematic. It is also a method used by the U-boot
>> bootloader, as an example.
>
> Adding a layer, the solution to all problems...

< insert xkcd reference here > :-)

>
> I don't think that's a workable solution unless all the components are
> in one party's control. Given the desire to work on ACPI and DT based
> systems, that doesn't sound like the case here.

That is the motivation behind my questions of how generic or targeted
the use of this one specific card is.

A pre-boot shim that discovers the card and applies the overlay could
be somewhat generic to ACPI systems.

Is the overlay approach also being proposed for DT based systems?

>
>> The other big issue is mixing ACPI and devicetree on a single system.
>> Historically, the Linux devicetree community has not been receptive
>> to the idea of that mixture. Your example might be a specific case
>> where the two can be isolated from each other, or maybe not. (For
>> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
>> ACPI and devicetree is a recipe for disaster in the general case.
>
> The idea here is that what is described by ACPI and DT are disjoint, which I
> think we can enforce. Enforcement comes from fwnode assuming it has
> either an ACPI or a DT handle, but not both.

I thought the intent was to use DT API drivers, not fwnode API drivers.
And I thought the card was not to be described by ACPI, so there would
not be any ACPI fwnode info for the card _and_ fwnode is somewhat
irrelevant since the drivers are using the DT API. So the enforcement
you mention could be implemented by the overlay apply code verifying
that there is no ACPI fwnode for any DT node in the overlay. So not
an _assumption_, but a testable rule.

That is a long way of essentially agreeing with you.

>
>> More to come later as I finish reading through the various threads.
>
> There is also the Xilinx folks wanting to support their PCI FPGA card
> with DT for the FPGA contents on both ACPI and DT systems.

Is that the XRT Alveo driver infrastructure thread? I was not cc:ed
on that thread, and just recently stumbled upon it. Yet another
thread that is on my to-read list.

>
> Rob


2022-05-09 20:55:02

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Mon, May 9, 2022 at 10:36 PM Frank Rowand <[email protected]> wrote:
>
> On 5/9/22 12:00, Andy Shevchenko wrote:
> > On Mon, May 09, 2022 at 06:09:17PM +0200, Clément Léger wrote:
> >> Le Mon, 9 May 2022 10:56:36 -0500,
> >> Frank Rowand <[email protected]> a écrit :
> >
> > ...
> >
> >>> On the surface, it appears that your need might be well met by having
> >>> a base devicetree that describes all of the pcie nodes, but with each
> >>> node having a status of "disabled" so that they will not be used.
> >>> Have a devicetree overlay describing the pcie card (as you proposed),
> >>> where the overlay also includes a status of "ok" for the pcie node.
> >>> Applying the overlay, with a method of redirecting the target to a
> >>> specific pcie node would change the status of the pcie node to enable
> >>> its use. (You have already proposed a patch to modify
> >>> of_overlay_fdt_apply() to allow a modified target, so not a new
> >>> concept from me.) My suggestion is to apply the overlay devicetree
> >>> to the base devicetree before the combined FDT devicetree is passed
> >>> to the kernel at boot. The overlay apply could be done by several
> >>> different entities. It could be before the bootloader executes, it
> >>> could be done by the bootloader, it could be done by a shim between
> >>> the bootloader and the kernel. This method avoids all of the issues
> >>> of applying an overlay to a running system that I find problematic.
> >>> It is also a method used by the U-boot bootloader, as an example.
> >>
> >> Ok, that is actually possible on a system that is given a device-tree
> >> by the bootloader. But on a system that is described using ACPI (such
> >> as x86), this is much more difficult (at least to my knowledge)...
> >> We want this feature to be easy to use for the end user. Adding such a
> >> configuration, which also differs between architectures, is
> >> clearly not so easy to set up.
> >>
> >> Moreover, since PCI is meant to be "Plug and Play", such
> >> configuration would completely break that. If the user switches the
> >> PCIe card from one slot to another, the bootloader configuration will
> >> need to be modified. This seems a big no-go for me (and for the user).
> >
> > The main problem here is that Linux does not support hotplugging for the
> > devices behind non-hotpluggable buses. You need to develop something to
> > say that the device tree (in terms of hardware) can morph at run-time
> > transparently to the user. I think the closest one is what FPGA does,
> > or at least should do.
>
> That is something I was not aware of yet. Is the card in question a
> hotpluggable card? Do the systems that you anticipate plugging the
> card into support hotplug?

Any PCIe card is potentially hotpluggable (seems nobody actually cares
in 90%+ drivers in the Linux kernel). But what I have heard in a
thread (not this one IIRC) is that the card may have pluggable modules
and it would be nice to change the configuration and notify the OS somehow. I
might be mistaken as to whether that is the case here or not.

--
With Best Regards,
Andy Shevchenko

2022-05-10 11:13:49

by Clément Léger

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

Le Mon, 9 May 2022 15:07:18 -0500,
Frank Rowand <[email protected]> a écrit :

> > Ok, that is actually possible on a system that is given a
> > device-tree by the bootloader. But on a system that is described
> > using ACPI (such as x86), this is much more difficult (at least
> > to my knowledge)... We want this feature to be easy to use for the
> > end user. Adding such a configuration, which also differs between
> > architectures, is clearly not so easy to set up.
>
> Are you trying to make your card work on any ACPI based system (x86,
> x86-64, etc)? Or do you have a specific model of computer that you
> want to make this work on for a specific customer or appliance?

There is no particular appliance/architecture targeted. This card should
work with any system that can support PCIe.

>
> If for many arbitrary systems, can you limit it to one architecture
> or sub-architecture?

The previous answer rules this one out.

>
> >
> > Moreover, since PCI is meant to be "Plug and Play", such
> > configuration would completely break that. If the user switches the
> > PCIe card from one slot to another, the bootloader configuration
> > will need to be modified. This seems a big no-go for me (and for
> > the user).
>
> Yes. I was envisioning the pre-bootloader, bootloader, or Linux
> pre-boot shim dynamically determining the slot containing the card,
> and applying the overlay devicetree to the base devicetree,
> retargeting the overlay to the proper location, before the Linux boot.

Ok, this is however not doable on many existing platforms since
it would require adapting this system to whatever bootloader is
present on the platform. Moreover, AFAIK some platforms' boot chains
(x86) are hardly modifiable.

>
> The base devicetree would be for a specific type of machine or family
> of machines, just as is the case for all devicetree based systems.
>
> >
> >>
> >> The other big issue is mixing ACPI and devicetree on a single
> >> system. Historically, the Linux devicetree community has not been
> >> receptive to the idea of that mixture. Your example might be a
> >> specific case where the two can be isolated from each other, or
> >> maybe not. (For disclosure, I am essentially ACPI ignorant.) I
> >> suspect that mixing ACPI and devicetree is a recipe for disaster
> >> in the general case.
> >
> > Agreed; on that fact, it did raise some eyebrows, and it was for
> > that specific concern that I initially proposed the fwnode
> > solution. Honestly, the fwnode conversion represents a lot of work
> > (hundreds of lines easily) and requires a conversion of all the
> > subsystems that are not fwnode ready (spoiler: almost all of them
> > are not ready).
> >
> > After implementing Rob's solution, the device-tree overlay really
> > seems the cleanest approach to me and requires far fewer modifications.
> >
> >>
> >> More to come later as I finish reading through the various
> >> threads.
> >
> > Ok, thanks for your time!
>
> You're welcome. I'll keep looking deeper into the previous threads.
>
> -Frank
>
> >
> > Clément
> >
> >>
> >> -Frank
> >
> > .
>


2022-05-10 13:24:16

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Mon, May 09, 2022 at 10:40:12PM +0200, Andy Shevchenko wrote:
> > hotpluggable card? Do the systems that you anticipate plugging the
> > card into support hotplug?
>
> Any PCIe card is potentially hotpluggable (seems nobody actually cares
> in 90%+ drivers in the Linux kernel). But what I have heard in a
> thread (not this one IIRC) is that the card may have pluggable modules
> and it would be nice to change the configuration and notify the OS somehow. I
> might be mistaken as to whether that is the case here or not.

Well, M.2 for example is not hotpluggable, nor are soldered-on BGA
devices or a lot of not-quite-PCIe devices that actually sit on CPUs
or chipset components. But for all but the last category an upstream
bridge could still be hot plugged, so not supporting it in drivers is
indeed generally speaking a bad idea.

2022-05-10 20:07:46

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay

On Mon, May 09, 2022 at 03:35:45PM -0500, Frank Rowand wrote:
> On 5/9/22 13:36, Rob Herring wrote:
> > On Mon, May 9, 2022 at 10:56 AM Frank Rowand <[email protected]> wrote:
> >>
> >> On 5/9/22 07:16, Clément Léger wrote:
> >>> Le Fri, 6 May 2022 13:33:22 -0500,
> >>> Frank Rowand <[email protected]> a écrit :
> >>>
> >>>> On 4/27/22 04:44, Clément Léger wrote:
> >>>>> This series adds foundation work to support the lan9662 PCIe card.
> >>>>> This card is meant to be used an ethernet switch with 2 x RJ45
> >>>>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
> >>>>> different ways:

[...]

> >>> This work uses the kernel space interface (of_overlay_fdt_apply())
> >>> and the device tree overlay is built into the driver. This interface was
> >>> used until recently by the rcar-du driver. While being the only user (sic),
> >>> this seems to work pretty well and I was able to use it successfully.
> >>
> >> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
> >> was explicitly recognized as a grandfathered exception, and not an
> >> example for other users. It was finally removed in 5.18-rc1.
> >
> > What API are folks supposed to use exactly? That's the only API to
> > apply an overlay.
>
> Yes, that is the API designed to be used if overlays are applied
> after the kernel has booted.
>
> > I thought the FPGA mgr code was using it too, but
>
> That was my understanding too.
>
> > it's not. It doesn't look to me like the upstream code there even
> > works as nothing applies the overlays AFAICT. If there are no in
> > kernel users applying overlays, then let's remove the overlay code. I
> > hear it has lots of problems.
>
> I would not object to doing that. But I suspect there are other people
> who might. I much prefer that overlays be applied before boot.

Strictly speaking, we don't keep APIs with no users. Perhaps removing or
threatening to remove would spur on some work in your overlay issues
list. :)


> >>> Moreover, this support is targeted at PCI devices. These
> >>> devices are really well contained and do not interfere with other
> >>> devices. This actually consists in adding a complete subtree into the
> >>> existing device-tree and thus it limits the interactions between
> >>> potentially platform provided devices and PCI ones.
> >>
> >> Yes, it is very important that you have described this fact, both
> >> here and in other emails. Thank you for that information; it does help
> >> in understanding the alternatives.
> >>
> >> I've hesitated in recommending a specific solution before better
> >> understanding the architecture of your pcie board and drivers, but
> >> I've delayed too long, so I am going to go ahead and mention one
> >> possibility at the risk of not yet fully understanding the situation.
> >>
> >> On the surface, it appears that your need might be well met by having
> >> a base devicetree that describes all of the pcie nodes, but with each
> >> node having a status of "disabled" so that they will not be used.
> >> Have a devicetree overlay describing the pcie card (as you proposed),
> >> where the overlay also includes a status of "ok" for the pcie node.
> >> Applying the overlay, with a method of redirecting the target to a
> >> specific pcie node would change the status of the pcie node to enable
> >> its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
> >> to allow a modified target, so not a new concept from me.) My suggestion
> >> is to apply the overlay devicetree to the base devicetree before the
> >> combined FDT devicetree is passed to the kernel at boot. The overlay
> >> apply could be done by several different entities. It could be before
> >> the bootloader executes, it could be done by the bootloader, it could
> >> be done by a shim between the bootloader and the kernel. This method
> >> avoids all of the issues of applying an overlay to a running system
> >> that I find problematic. It is also a method used by the U-boot
> >> bootloader, as an example.
> >
> > Adding a layer, the solution to all problems...
>
> < insert xkcd reference here > :-)
>
> >
> > I don't think that's a workable solution unless all the components are
> > in one party's control. Given the desire to work on ACPI and DT based
> > systems, that doesn't sound like the case here.
>
> That is the motivation behind my questions of how generic or targeted
> the use of this one specific card is.
>
> A pre-boot shim that discovers the card and applies the overlay could
> be somewhat generic to ACPI systems.
>
> Is the overlay approach also being proposed for DT based systems?

Yes, the intent is it would work for either.

Another use case I have in mind is USB-to-serial chips with downstream
GPIOs, I2C, SPI, etc., where you want to describe downstream devices. And
then you plug in more than one...

> >> The other big issue is mixing ACPI and devicetree on a single system.
> >> Historically, the Linux devicetree community has not been receptive
> >> to the idea of that mixture. Your example might be a specific case
> >> where the two can be isolated from each other, or maybe not. (For
> >> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> >> ACPI and devicetree is a recipe for disaster in the general case.
> >
> > The idea here is that what is described by ACPI and DT are disjoint, which I
> > think we can enforce. Enforcement comes from fwnode assuming it has
> > either an ACPI or a DT handle, but not both.
>
> I thought the intent was to use DT API drivers, not fwnode API drivers.

Yes.

> And I thought the card was not to be described by ACPI, so there would
> not be any ACPI fwnode info for the card _and_ fwnode is somewhat
> irrelevant since the drivers are using the DT API.

The card would not be described, but the PCI host bridge would likely
be and that may be where the conflict is. Though I think PCI hosts are
described differently from devices that end up with a fwnode handle in
ACPI.

My current thinking is the whole PCI bus structure from the card
device up the tree needs to be created in the base DT. Maybe we could
create just a fake parent for the device and avoid the above issue, but
I'd guess that would just create other issues.

> So the enforcement
> you mention could be implemented by the overlay apply code verifying
> that there is no ACPI fwnode for any DT node in the overlay. So not
> an _assumption_, but a testable rule.

I think we'd need something wherever device fwnode ptrs are getting
set. Not entirely sure yet.

> That is a long way of essentially agreeing with you.
>
> >
> >> More to come later as I finish reading through the various threads.
> >
> > There is also the Xilinx folks wanting to support their PCI FPGA card
> > with DT for the FPGA contents on both ACPI and DT systems.
>
> Is that the XRT Alveo driver infrastructure thread? I was not cc:ed
> on that thread, and just recently stumbled upon it. Yet another
> thread that is on my to-read list.

Yes, and I've had a call about it in the 'system DT' call.

Rob

2022-05-18 19:23:30

by Lizhi Hou

[permalink] [raw]
Subject: Re: [PATCH 2/3] PCI: of: create DT nodes for PCI devices if they do not exists


On 5/4/22 06:43, Clément Léger wrote:
> Le Tue, 3 May 2022 17:53:53 -0500,
> Bjorn Helgaas <[email protected]> a écrit :
>
>> In subject:
>>
>> PCI: of: Create DT nodes ... if they do not exist
>>
>> The subject could be read as saying that you're going to create DT
>> nodes before the PCI devices exist, but I think you mean that when we
>> enumerate a PCI devices, we're *always* going to create a DT node for
>> it, even if the DT didn't mention it.
> Hi Bjorn,
>
> Indeed ! I'll modify that.

Linking the dynamically generated DT node to the PCIe device through
pci_dev->dev.of_node may cause issues. Kernel and driver code may check
the of_node pointer and run a completely different code path if of_node is not
NULL.

For example:  in of_irq_parse_pci():
https://elixir.bootlin.com/linux/v5.18-rc2/source/drivers/pci/of.c#L492
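
The class of problem being pointed at looks roughly like this (an
illustration only, not the upstream of_irq_parse_pci() code): once a
skeleton node is assigned to pci_dev->dev.of_node, DT-aware helpers
start taking the DT branch, so a node lacking the expected properties
can behave differently from the of_node == NULL case.

#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/pci.h>

static int example_pci_irq_lookup(struct pci_dev *pdev,
                                  struct of_phandle_args *out_irq)
{
        struct device_node *np = pci_device_to_OF_node(pdev);

        /* the DT path is taken as soon as an of_node is present */
        if (np && of_find_property(np, "interrupts", NULL))
                return of_irq_parse_one(np, 0, out_irq);

        /* otherwise legacy INTx swizzling via the bridge would be used */
        return -ENOENT;
}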

I encountered different issues when I tried to create a prototype. And I
have sent all my questions/thoughts through
https://lore.kernel.org/lkml/[email protected]/#t

I am wondering what would be the right way to resolve it?


Thanks,

Lizhi

>
>> Maybe something like:
>>
>> PCI: of: Create DT node for every PCI device
>>
>> although I see Rob thinks this should be done on demand instead of
>> doing it for every device, which sounds sensible to me.
> Agreed, I'll rework this series.
>
> Thanks,
>
>> On Wed, Apr 27, 2022 at 11:45:01AM +0200, Clément Léger wrote:
>>> In order to apply overlays to PCI device nodes, the nodes must first
>>> exist. This commit add support to populate a skeleton tree for PCI bus
>>> and devices. These nodes can then be used by drivers to apply overlays.
>> s/This commit add support/Add support/
>>
>> Bjorn
>
>