2024-03-28 21:18:11

by Oreoluwa Babatunde

Subject: [PATCH v5 1/4] of: reserved_mem: Restructure how the reserved memory regions are processed

The current implementation processes the reserved memory regions in two
stages which are done with two separate functions within the
early_init_fdt_scan_reserved_mem() function.

Within the two stages of processing, the reserved memory regions are
broken up into two groups which are processed differently:
i)  Statically-placed reserved memory regions,
    i.e. regions defined with a static start address and size using
    the "reg" property in the DT.
ii) Dynamically-placed reserved memory regions,
    i.e. regions defined by specifying a range of addresses where they
    can be placed in memory, using the "alloc-ranges" and "size"
    properties in the DT.
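
For illustration, the two kinds of nodes might look roughly as follows
(a sketch only; the node names, addresses and sizes are made up, and
the properties follow the standard reserved-memory binding):

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* statically-placed: fixed base address and size via "reg" */
        static_region: static-region@88000000 {
            reg = <0x0 0x88000000 0x0 0x400000>;
            no-map;
        };

        /*
         * dynamically-placed: only a size and an allowed address
         * range are given; the kernel picks the base address at boot
         */
        dynamic_region: dynamic-region {
            compatible = "shared-dma-pool";
            size = <0x0 0x400000>;
            alloc-ranges = <0x0 0x80000000 0x0 0x20000000>;
            reusable;
        };
    };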

Stage 1: fdt_scan_reserved_mem()
This stage of the reserved memory processing is used to scan through the
reserved memory nodes defined in the devicetree and do the following on
each of the nodes:

1) If the node represents a statically-placed reserved memory region,
   i.e. it is defined using the "reg" property:
   - Call memblock_reserve() or memblock_mark_nomap() as needed.
   - Add the information for the reserved region to the reserved_mem
     array, e.g.:
       fdt_reserved_mem_save_node(node, name, base, size);

2) If the node represents a dynamically-placed reserved memory region,
   i.e. it is defined using the "alloc-ranges" and "size" properties:
   - Add the information for the region to the reserved_mem array with
     the starting address and size set to 0, e.g.:
       fdt_reserved_mem_save_node(node, name, 0, 0);

Stage 2: fdt_init_reserved_mem()
This stage of the reserved memory processing is used to iterate through
the reserved_mem array which was populated in stage 1 and do the
following on each of the entries:

1) If the entry represents a statically-placed reserved memory region:
   - Call the region-specific init function.

2) If the entry represents a dynamically-placed reserved memory region:
   - Call __reserved_mem_alloc_size(), which allocates memory for the
     region using memblock_phys_alloc_range() and calls
     memblock_mark_nomap() on the allocated region if the region is
     specified as a no-map region.
   - Call the region-specific init function (see the sketch after this
     list).
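
For reference, the "region-specific init function" is the
per-compatible handler that a driver registers with
RESERVEDMEM_OF_DECLARE(); it is invoked via __reserved_mem_init_node()
for a matching reserved_mem entry. A minimal sketch (the
"vendor,my-region" compatible string and the function name are made up
for illustration):

    #include <linux/init.h>
    #include <linux/printk.h>
    #include <linux/of_reserved_mem.h>

    static int __init my_region_init(struct reserved_mem *rmem)
    {
        /* rmem->base and rmem->size are filled in before this runs */
        pr_info("my-region at %pa, size %pa\n", &rmem->base, &rmem->size);
        return 0;
    }
    RESERVEDMEM_OF_DECLARE(my_region, "vendor,my-region", my_region_init);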

On architectures such as arm64, the dynamic allocation of the
reserved_mem array needs to be done after the page tables have been
set up, because memblock-allocated memory is not writable until then.
This means that the reserved_mem array is not available to store any
reserved memory information until after the page tables have been set
up.

It is possible to call memblock_reserve() and memblock_mark_nomap() on
the statically-placed reserved memory regions without saving them to
the reserved_mem array until later, because all the information we
need is already present in the devicetree. Dynamically-placed reserved
memory regions, on the other hand, are only assigned a start address
at runtime, and since memblock_reserve() and memblock_mark_nomap()
need to be called before the memory mappings are created, their
allocation needs to happen before the page tables are set up.

To make it easier to handle dynamically-placed reserved memory regions
before the page tables are set up, this patch changes the steps above
so that the reserved memory regions are processed as follows:

Step 1: fdt_scan_reserved_mem()
This step scans through the reserved memory nodes defined in the
devicetree and does the following on each of them:

1) If the node represents a statically-placed reserved memory region,
   i.e. it is defined using the "reg" property:
   - Call memblock_reserve() or memblock_mark_nomap() as needed.

2) If the node represents a dynamically-placed reserved memory region,
   i.e. it is defined using the "alloc-ranges" and "size" properties:
   - Call __reserved_mem_alloc_size(), which will:
     i)   Allocate memory for the reserved memory region.
     ii)  Call memblock_mark_nomap() as needed.
          Note: There is no need to explicitly call memblock_reserve()
          here because it is already called by memblock when the
          memory for the region is allocated.
     iii) Save the information for the region in the reserved_mem
          array.

Step 2: fdt_init_reserved_mem()
This step of the reserved memory processing is used to:

1) Add the information for the statically-placed reserved memory
   regions to the reserved_mem array.

2) Iterate through all the entries in the array and call the
   region-specific init function for each of them.

fdt_init_reserved_mem() is also now called from within the
unflatten_device_tree() function so that this step happens after the
page tables have been set up.

Signed-off-by: Oreoluwa Babatunde <[email protected]>
---
drivers/of/fdt.c | 5 +-
drivers/of/of_private.h | 1 +
drivers/of/of_reserved_mem.c | 134 +++++++++++++++++++++++++----------
3 files changed, 100 insertions(+), 40 deletions(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index a8a04f27915b..527e6bc1c096 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -532,8 +532,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
break;
memblock_reserve(base, size);
}
-
- fdt_init_reserved_mem();
}

/**
@@ -1259,6 +1257,9 @@ void __init unflatten_device_tree(void)
of_alias_scan(early_init_dt_alloc_memory_arch);

unittest_unflatten_overlay_base();
+
+ /* initialize the reserved memory regions */
+ fdt_init_reserved_mem();
}

/**
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 485483524b7f..9ea250b80657 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -9,6 +9,7 @@
*/

#define FDT_ALIGN_SIZE 8
+#define MAX_RESERVED_REGIONS 64

/**
* struct alias_prop - Alias property in 'aliases' node
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 8236ecae2953..db991de16cc0 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -27,7 +27,6 @@

#include "of_private.h"

-#define MAX_RESERVED_REGIONS 64
static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
static int reserved_mem_count;

@@ -106,7 +105,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
phys_addr_t base, size;
int len;
const __be32 *prop;
- int first = 1;
bool nomap;

prop = of_get_flat_dt_prop(node, "reg", &len);
@@ -134,10 +132,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
uname, &base, (unsigned long)(size / SZ_1M));

len -= t_len;
- if (first) {
- fdt_reserved_mem_save_node(node, uname, base, size);
- first = 0;
- }
}
return 0;
}
@@ -165,12 +159,69 @@ static int __init __reserved_mem_check_root(unsigned long node)
return 0;
}

+/**
+ * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined
+ * reserved memory regions.
+ *
+ * This function is used to scan through the DT and store the
+ * information for the reserved memory regions that are defined using
+ * the "reg" property. The region node number, name, base address, and
+ * size are all stored in the reserved_mem array by calling the
+ * fdt_reserved_mem_save_node() function.
+ */
+static void __init fdt_scan_reserved_mem_reg_nodes(void)
+{
+ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
+ const void *fdt = initial_boot_params;
+ phys_addr_t base, size;
+ const __be32 *prop;
+ int node, child;
+ int len;
+
+ node = fdt_path_offset(fdt, "/reserved-memory");
+ if (node < 0) {
+ pr_info("Reserved memory: No reserved-memory node in the DT\n");
+ return;
+ }
+
+ if (__reserved_mem_check_root(node)) {
+ pr_err("Reserved memory: unsupported node format, ignoring\n");
+ return;
+ }
+
+ fdt_for_each_subnode(child, fdt, node) {
+ const char *uname;
+
+ prop = of_get_flat_dt_prop(child, "reg", &len);
+ if (!prop)
+ continue;
+ if (!of_fdt_device_is_available(fdt, child))
+ continue;
+
+ uname = fdt_get_name(fdt, child, NULL);
+ if (len && len % t_len != 0) {
+ pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n",
+ uname);
+ continue;
+ }
+ base = dt_mem_next_cell(dt_root_addr_cells, &prop);
+ size = dt_mem_next_cell(dt_root_size_cells, &prop);
+
+ if (size)
+ fdt_reserved_mem_save_node(child, uname, base, size);
+ }
+}
+
+static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname);
+
/*
* fdt_scan_reserved_mem() - scan a single FDT node for reserved memory
*/
int __init fdt_scan_reserved_mem(void)
{
int node, child;
+ int dynamic_nodes_cnt = 0;
+ int dynamic_nodes[MAX_RESERVED_REGIONS];
const void *fdt = initial_boot_params;

node = fdt_path_offset(fdt, "/reserved-memory");
@@ -192,8 +243,24 @@ int __init fdt_scan_reserved_mem(void)
uname = fdt_get_name(fdt, child, NULL);

err = __reserved_mem_reserve_reg(child, uname);
- if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL))
- fdt_reserved_mem_save_node(child, uname, 0, 0);
+ /*
+ * Save the nodes for the dynamically-placed regions
+ * into an array which will be used for allocation right
+ * after all the statically-placed regions are reserved
+ * or marked as no-map. This is done to avoid dynamically
+ * allocating from one of the statically-placed regions.
+ */
+ if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) {
+ dynamic_nodes[dynamic_nodes_cnt] = child;
+ dynamic_nodes_cnt++;
+ }
+ }
+ for (int i = 0; i < dynamic_nodes_cnt; i++) {
+ const char *uname;
+
+ child = dynamic_nodes[i];
+ uname = fdt_get_name(fdt, child, NULL);
+ __reserved_mem_alloc_size(child, uname);
}
return 0;
}
@@ -253,8 +320,7 @@ static int __init __reserved_mem_alloc_in_range(phys_addr_t size,
* __reserved_mem_alloc_size() - allocate reserved memory described by
* 'size', 'alignment' and 'alloc-ranges' properties.
*/
-static int __init __reserved_mem_alloc_size(unsigned long node,
- const char *uname, phys_addr_t *res_base, phys_addr_t *res_size)
+static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname)
{
int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
phys_addr_t start = 0, end = 0;
@@ -333,10 +399,7 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
uname, (unsigned long)(size / SZ_1M));
return -ENOMEM;
}
-
- *res_base = base;
- *res_size = size;
-
+ fdt_reserved_mem_save_node(node, uname, base, size);
return 0;
}

@@ -431,6 +494,8 @@ void __init fdt_init_reserved_mem(void)
{
int i;

+ fdt_scan_reserved_mem_reg_nodes();
+
/* check for overlapping reserved regions */
__rmem_check_for_overlap();

@@ -449,30 +514,23 @@ void __init fdt_init_reserved_mem(void)
if (prop)
rmem->phandle = of_read_number(prop, len/4);

- if (rmem->size == 0)
- err = __reserved_mem_alloc_size(node, rmem->name,
- &rmem->base, &rmem->size);
- if (err == 0) {
- err = __reserved_mem_init_node(rmem);
- if (err != 0 && err != -ENOENT) {
- pr_info("node %s compatible matching fail\n",
- rmem->name);
- if (nomap)
- memblock_clear_nomap(rmem->base, rmem->size);
- else
- memblock_phys_free(rmem->base,
- rmem->size);
- } else {
- phys_addr_t end = rmem->base + rmem->size - 1;
- bool reusable =
- (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
-
- pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
- &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
- nomap ? "nomap" : "map",
- reusable ? "reusable" : "non-reusable",
- rmem->name ? rmem->name : "unknown");
- }
+ err = __reserved_mem_init_node(rmem);
+ if (err != 0 && err != -ENOENT) {
+ pr_info("node %s compatible matching fail\n", rmem->name);
+ if (nomap)
+ memblock_clear_nomap(rmem->base, rmem->size);
+ else
+ memblock_phys_free(rmem->base, rmem->size);
+ } else {
+ phys_addr_t end = rmem->base + rmem->size - 1;
+ bool reusable =
+ (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
+
+ pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
+ &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
+ nomap ? "nomap" : "map",
+ reusable ? "reusable" : "non-reusable",
+ rmem->name ? rmem->name : "unknown");
}
}
}
--
2.34.1