2023-06-25 00:26:31

by Alison Schofield

[permalink] [raw]
Subject: [PATCH v3 0/2] CXL: Apply SRAT defined PXM to entire CFMWS window

From: Alison Schofield <[email protected]>

Changes in v3:
- Define CFMWS and add CXL Spec link in cover letter (Peter, Jonathan)
- s/HPA/physical address in Patch 1 (Peter)
- Remove overkill comment in Patch 1 (Dan)
- Simplify cmp_memblk() in Patch 1 (Dan)

v2: https://lore.kernel.org/linux-cxl/[email protected]/

----
Cover Letter:

The CXL subsystem requires the creation of NUMA nodes for CFMWS
Windows[1] not described in the SRAT. The existing implementation
only handles windows that the SRAT describes completely or not
at all. This work addresses the case of partially described CFMWS
Windows by extending the proximity domain found in a portion of a
CFMWS window to cover the entire window.

Introduce a NUMA helper, numa_fill_memblks(), to fill gaps in a
numa_meminfo memblk address range. Update the CFMWS parsing in the
ACPI driver to use numa_fill_memblks() to extend SRAT defined
proximity domains to entire CXL windows.

An RFC of this patchset was previously posted for CXL folks' review
here[2]. The RFC feedback led to the implementation here, extending
existing memblks (Dan). Also, both Jonathan and Dan influenced the
changelog comments in the ACPI patch, with regard to setting
expectations for this evolving heuristic.

Repeating here to set reviewer expectations:
*Note that this heuristic will evolve when CFMWS Windows present a
wider range of characteristics. The extension of the proximity domain,
implemented here, is likely a step in developing a more sophisticated
performance profile in the future.

[1] CFMWS is defined in CXL Spec 3.0 Section 9.17.1.3 :
https://www.computeexpresslink.org/spec-landing

A CXL Fixed Memory Window is a region of Host Physical Address (HPA)
space which routes accesses to CXL Host bridges. The 'S' of CFMWS
stands for the Structure that describes the window, hence its common
name, CFMWS.

[2] https://lore.kernel.org/linux-cxl/[email protected]/

Alison Schofield (2):
x86/numa: Introduce numa_fill_memblks()
ACPI: NUMA: Apply SRAT proximity domain to entire CFMWS window

arch/x86/include/asm/sparsemem.h | 2 +
arch/x86/mm/numa.c | 81 ++++++++++++++++++++++++++++++++
drivers/acpi/numa/srat.c | 11 +++--
include/linux/numa.h | 7 +++
4 files changed, 98 insertions(+), 3 deletions(-)


base-commit: 214a71b53bc7cb30f6b8d43089037e9fe7f3ae1f
--
2.37.3



2023-06-25 00:26:45

by Alison Schofield

[permalink] [raw]
Subject: [PATCH v3 1/2] x86/numa: Introduce numa_fill_memblks()

From: Alison Schofield <[email protected]>

numa_fill_memblks() fills in the gaps in numa_meminfo memblks
over a physical address range.

The ACPI driver will use numa_fill_memblks() to implement a new Linux
policy that prescribes extending proximity domains in a portion of a
CFMWS window to the entire window.

Dan Williams offered this explanation of the policy:
A CFMWS is an ACPI data structure that indicates *potential* locations
where CXL memory can be placed. It is the playground where the CXL
driver has free rein to establish regions. That space can be populated
by BIOS created regions, or driver created regions, after hotplug or
other reconfiguration.

When BIOS creates a region in a CXL Window it additionally describes
that subset of the Window range in the other typical ACPI tables SRAT,
SLIT, and HMAT. The rationale for BIOS not pre-describing the entire
CXL Window in SRAT, SLIT, and HMAT is that it cannot predict the
future. I.e. there is nothing stopping higher or lower performance
devices being placed in the same Window. Compare that to ACPI memory
hotplug that just onlines additional capacity in the proximity domain
with little freedom for dynamic performance differentiation.

That leaves the OS with a choice: should unpopulated window capacity
match the proximity domain of an existing region, or should it allocate
a new one? This patch takes the simple position of minimizing proximity
domain proliferation by reusing any proximity domain intersection for
the entire Window. If the Window has no intersections then allocate a
new proximity domain. Note that SRAT, SLIT and HMAT information can be
enumerated dynamically in a standard way from device provided data.
Think of CXL as the end of ACPI needing to describe memory attributes:
CXL offers a standard discovery model for performance attributes, but
Linux still needs to interoperate with the old regime.

Reported-by: Derick Marks <[email protected]>
Suggested-by: Dan Williams <[email protected]>
Signed-off-by: Alison Schofield <[email protected]>
Tested-by: Derick Marks <[email protected]>
---
arch/x86/include/asm/sparsemem.h | 2 +
arch/x86/mm/numa.c | 81 ++++++++++++++++++++++++++++++++
include/linux/numa.h | 7 +++
3 files changed, 90 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 64df897c0ee3..1be13b2dfe8b 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -37,6 +37,8 @@ extern int phys_to_target_node(phys_addr_t start);
#define phys_to_target_node phys_to_target_node
extern int memory_add_physaddr_to_nid(u64 start);
#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
+extern int numa_fill_memblks(u64 start, u64 end);
+#define numa_fill_memblks numa_fill_memblks
#endif
#endif /* __ASSEMBLY__ */

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2aadb2019b4f..152398bdecc4 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -11,6 +11,7 @@
#include <linux/nodemask.h>
#include <linux/sched.h>
#include <linux/topology.h>
+#include <linux/sort.h>

#include <asm/e820/api.h>
#include <asm/proto.h>
@@ -961,4 +962,84 @@ int memory_add_physaddr_to_nid(u64 start)
return nid;
}
EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
+
+static int __init cmp_memblk(const void *a, const void *b)
+{
+ const struct numa_memblk *ma = *(const struct numa_memblk **)a;
+ const struct numa_memblk *mb = *(const struct numa_memblk **)b;
+
+ return ma->start - mb->start;
+}
+
+static struct numa_memblk *numa_memblk_list[NR_NODE_MEMBLKS] __initdata;
+
+/**
+ * numa_fill_memblks - Fill gaps in numa_meminfo memblks
+ * @start: address to begin fill
+ * @end: address to end fill
+ *
+ * Find and extend numa_meminfo memblks to cover the @start-@end
+ * physical address range, such that the first memblk includes
+ * @start, the last memblk includes @end, and any gaps in between
+ * are filled.
+ *
+ * RETURNS:
+ * 0 : Success
+ * NUMA_NO_MEMBLK : No memblk exists in @start-@end range
+ */
+
+int __init numa_fill_memblks(u64 start, u64 end)
+{
+ struct numa_memblk **blk = &numa_memblk_list[0];
+ struct numa_meminfo *mi = &numa_meminfo;
+ int count = 0;
+ u64 prev_end;
+
+ /*
+ * Create a list of pointers to numa_meminfo memblks that
+ * overlap start, end. Exclude (start == bi->end) since
+ * end addresses in both a CFMWS range and a memblk range
+ * are exclusive.
+ *
+ * This list of pointers is used to make in-place changes
+ * that fill out the numa_meminfo memblks.
+ */
+ for (int i = 0; i < mi->nr_blks; i++) {
+ struct numa_memblk *bi = &mi->blk[i];
+
+ if (start < bi->end && end >= bi->start) {
+ blk[count] = &mi->blk[i];
+ count++;
+ }
+ }
+ if (!count)
+ return NUMA_NO_MEMBLK;
+
+ /* Sort the list of pointers in memblk->start order */
+ sort(&blk[0], count, sizeof(blk[0]), cmp_memblk, NULL);
+
+ /* Make sure the first/last memblks include start/end */
+ blk[0]->start = min(blk[0]->start, start);
+ blk[count - 1]->end = max(blk[count - 1]->end, end);
+
+ /*
+ * Fill any gaps by tracking the previous memblks
+ * end address and backfilling to it if needed.
+ */
+ prev_end = blk[0]->end;
+ for (int i = 1; i < count; i++) {
+ struct numa_memblk *curr = blk[i];
+
+ if (prev_end >= curr->start) {
+ if (prev_end < curr->end)
+ prev_end = curr->end;
+ } else {
+ curr->start = prev_end;
+ prev_end = curr->end;
+ }
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(numa_fill_memblks);
+
#endif
diff --git a/include/linux/numa.h b/include/linux/numa.h
index 59df211d051f..0f512c0aba54 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -12,6 +12,7 @@
#define MAX_NUMNODES (1 << NODES_SHIFT)

#define NUMA_NO_NODE (-1)
+#define NUMA_NO_MEMBLK (-1)

/* optionally keep NUMA memory info available post init */
#ifdef CONFIG_NUMA_KEEP_MEMINFO
@@ -43,6 +44,12 @@ static inline int phys_to_target_node(u64 start)
return 0;
}
#endif
+#ifndef numa_fill_memblks
+static inline int __init numa_fill_memblks(u64 start, u64 end)
+{
+ return NUMA_NO_MEMBLK;
+}
+#endif
#else /* !CONFIG_NUMA */
static inline int numa_map_to_online_node(int node)
{
--
2.37.3


2023-06-25 00:26:45

by Alison Schofield

[permalink] [raw]
Subject: [PATCH v3 2/2] ACPI: NUMA: Apply SRAT proximity domain to entire CFMWS window

From: Alison Schofield <[email protected]>

Commit fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each
CFMWS not in SRAT") did not account for the case where the BIOS
only partially describes a CFMWS Window in the SRAT. That means
the omitted address ranges, of a partially described CFMWS Window,
do not get assigned to a NUMA node.

Replace the call to phys_to_target_node() with numa_fill_memblks().
numa_fill_memblks() searches an HPA range for existing memblk(s)
and extends those memblk(s) to fill the entire CFMWS Window.

Extending the existing memblks is a simple strategy that reuses
SRAT defined proximity domains from part of a window to fill out
the entire window, based on the knowledge* that all of a CFMWS
window is of a similar performance class.

*Note that this heuristic will evolve when CFMWS Windows present
a wider range of characteristics. The extension of the proximity
domain, implemented here, is likely a step in developing a more
sophisticated performance profile in the future.

There is no change in behavior when the SRAT does not describe
the CFMWS Window at all. In that case, a new NUMA node with a
single memblk covering the entire CFMWS Window is created.

Fixes: fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT")
Reported-by: Derick Marks <[email protected]>
Suggested-by: Dan Williams <[email protected]>
Signed-off-by: Alison Schofield <[email protected]>
Tested-by: Derick Marks <[email protected]>
---
drivers/acpi/numa/srat.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index 1f4fc5f8a819..12f330b0eac0 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -310,11 +310,16 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
start = cfmws->base_hpa;
end = cfmws->base_hpa + cfmws->window_size;

- /* Skip if the SRAT already described the NUMA details for this HPA */
- node = phys_to_target_node(start);
- if (node != NUMA_NO_NODE)
+ /*
+ * The SRAT may have already described NUMA details for all,
+ * or a portion of, this CFMWS HPA range. Extend the memblks
+ * found for any portion of the window to cover the entire
+ * window.
+ */
+ if (!numa_fill_memblks(start, end))
return 0;

+ /* No SRAT description. Create a new node. */
node = acpi_map_pxm_to_node(*fake_pxm);

if (node == NUMA_NO_NODE) {
--
2.37.3


2023-06-25 06:13:20

by Dan Williams

[permalink] [raw]
Subject: RE: [PATCH v3 1/2] x86/numa: Introduce numa_fill_memblks()

alison.schofield@ wrote:
> From: Alison Schofield <[email protected]>
>
> numa_fill_memblks() fills in the gaps in numa_meminfo memblks
> over a physical address range.
[..]
> diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> index 2aadb2019b4f..152398bdecc4 100644
> --- a/arch/x86/mm/numa.c
> +++ b/arch/x86/mm/numa.c
[..]
> +int __init numa_fill_memblks(u64 start, u64 end)
> +{
> + struct numa_memblk **blk = &numa_memblk_list[0];
> + struct numa_meminfo *mi = &numa_meminfo;
> + int count = 0;
> + u64 prev_end;
> +
> + /*
> + * Create a list of pointers to numa_meminfo memblks that
> + * overlap start, end. Exclude (start == bi->end) since
> + * end addresses in both a CFMWS range and a memblk range
> + * are exclusive.
> + *
> + * This list of pointers is used to make in-place changes
> + * that fill out the numa_meminfo memblks.
> + */
> + for (int i = 0; i < mi->nr_blks; i++) {
> + struct numa_memblk *bi = &mi->blk[i];
> +
> + if (start < bi->end && end >= bi->start) {
> + blk[count] = &mi->blk[i];
> + count++;
> + }
> + }
> + if (!count)
> + return NUMA_NO_MEMBLK;
> +
> + /* Sort the list of pointers in memblk->start order */
> + sort(&blk[0], count, sizeof(blk[0]), cmp_memblk, NULL);
> +
> + /* Make sure the first/last memblks include start/end */
> + blk[0]->start = min(blk[0]->start, start);
> + blk[count - 1]->end = max(blk[count - 1]->end, end);
> +
> + /*
> + * Fill any gaps by tracking the previous memblks
> + * end address and backfilling to it if needed.
> + */
> + prev_end = blk[0]->end;
> + for (int i = 1; i < count; i++) {
> + struct numa_memblk *curr = blk[i];
> +
> + if (prev_end >= curr->start) {
> + if (prev_end < curr->end)
> + prev_end = curr->end;
> + } else {
> + curr->start = prev_end;
> + prev_end = curr->end;
> + }
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(numa_fill_memblks);

After deleting this export you can add:

Reviewed-by: Dan Williams <[email protected]>

2023-06-27 01:02:08

by Alison Schofield

[permalink] [raw]
Subject: Re: [PATCH v3 1/2] x86/numa: Introduce numa_fill_memblks()

On Sat, Jun 24, 2023 at 11:01:38PM -0700, Dan Williams wrote:
> alison.schofield@ wrote:
> > From: Alison Schofield <[email protected]>
> >
> > numa_fill_memblks() fills in the gaps in numa_meminfo memblks
> > over a physical address range.
> [..]
> > diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> > index 2aadb2019b4f..152398bdecc4 100644
> > --- a/arch/x86/mm/numa.c
> > +++ b/arch/x86/mm/numa.c
> [..]
> > +int __init numa_fill_memblks(u64 start, u64 end)
> > +{
> > + struct numa_memblk **blk = &numa_memblk_list[0];
> > + struct numa_meminfo *mi = &numa_meminfo;
> > + int count = 0;
> > + u64 prev_end;
> > +
> > + /*
> > + * Create a list of pointers to numa_meminfo memblks that
> > + * overlap start, end. Exclude (start == bi->end) since
> > + * end addresses in both a CFMWS range and a memblk range
> > + * are exclusive.
> > + *
> > + * This list of pointers is used to make in-place changes
> > + * that fill out the numa_meminfo memblks.
> > + */
> > + for (int i = 0; i < mi->nr_blks; i++) {
> > + struct numa_memblk *bi = &mi->blk[i];
> > +
> > + if (start < bi->end && end >= bi->start) {
> > + blk[count] = &mi->blk[i];
> > + count++;
> > + }
> > + }
> > + if (!count)
> > + return NUMA_NO_MEMBLK;
> > +
> > + /* Sort the list of pointers in memblk->start order */
> > + sort(&blk[0], count, sizeof(blk[0]), cmp_memblk, NULL);
> > +
> > + /* Make sure the first/last memblks include start/end */
> > + blk[0]->start = min(blk[0]->start, start);
> > + blk[count - 1]->end = max(blk[count - 1]->end, end);
> > +
> > + /*
> > + * Fill any gaps by tracking the previous memblks
> > + * end address and backfilling to it if needed.
> > + */
> > + prev_end = blk[0]->end;
> > + for (int i = 1; i < count; i++) {
> > + struct numa_memblk *curr = blk[i];
> > +
> > + if (prev_end >= curr->start) {
> > + if (prev_end < curr->end)
> > + prev_end = curr->end;
> > + } else {
> > + curr->start = prev_end;
> > + prev_end = curr->end;
> > + }
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(numa_fill_memblks);
>
> After deleting this export you can add:

Drats! Sorry for missing that.
>
> Reviewed-by: Dan Williams <[email protected]>