[Problem]
The cpuid <-> nodeid mapping is first established at boot time, and the workqueue
subsystem caches it in wq_numa_possible_cpumask in wq_numa_init().
When a node goes online/offline, the cpuid <-> nodeid mapping is established/destroyed,
which means the cpuid <-> nodeid mapping will change if node hotplug happens. But the
workqueue code does not update wq_numa_possible_cpumask.
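For reference, here is a simplified sketch of the boot-time snapshot taken by
wq_numa_init() in kernel/workqueue.c (error handling and unrelated details omitted;
this is an illustration, not verbatim kernel code):

static cpumask_var_t *wq_numa_possible_cpumask;        /* possible CPUs of each node */

static void __init wq_numa_init(void)
{
        int cpu, node;

        wq_numa_possible_cpumask = kcalloc(nr_node_ids, sizeof(cpumask_var_t),
                                           GFP_KERNEL);
        for_each_node(node)
                zalloc_cpumask_var(&wq_numa_possible_cpumask[node], GFP_KERNEL);

        /*
         * Snapshot of the cpuid <-> nodeid mapping as it exists at boot.
         * It is never refreshed when nodes are hot-removed or hot-added.
         */
        for_each_possible_cpu(cpu) {
                node = cpu_to_node(cpu);
                cpumask_set_cpu(cpu, wq_numa_possible_cpumask[node]);
        }
}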
So here is the problem:
Assume we have the following cpuid <-> nodeid mapping in the beginning:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 2 | 30-44, 90-104
node 3 | 45-59, 105-119
and we hot-remove node2 and node3, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
and we hot-add node4 and node5, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 4 | 30-59
node 5 | 90-119
But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and so on for the
other moved CPUs.
When an unbound worker pool is initialized, if its cpumask is contained inside a
NUMA node, pool->node is set to that node, and memory used by this pool will also
be allocated on that node.
static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
{
        ...
        /* if cpumask is contained inside a NUMA node, we belong to that node */
        if (wq_numa_enabled) {
                for_each_node(node) {
                        if (cpumask_subset(pool->attrs->cpumask,
                                           wq_numa_possible_cpumask[node])) {
                                pool->node = node;
                                break;
                        }
                }
        }
Since wq_numa_possible_cpumask is not updated, pool->node could be set to an offline
node, which will lead to memory allocation failures:
SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
cache: kmalloc-192, object size: 192, buffer size: 192, default order: 1, min order: 0
node 0: slabs: 6172, objs: 259224, free: 245741
node 1: slabs: 3261, objs: 136962, free: 127656
It happens here:
create_worker(struct worker_pool *pool)
        |--> worker = alloc_worker(pool->node);

static struct worker *alloc_worker(int node)
{
        struct worker *worker;

        worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, node); --> Here, using the wrong node.

        ......

        return worker;
}
[Solution]
There are four mappings in the kernel:
1. nodeid (logical node id) <-> pxm
2. apicid (physical cpu id) <-> nodeid
3. cpuid (logical cpu id) <-> apicid
4. cpuid (logical cpu id) <-> nodeid
1. pxm (proximity domain) is provided by the ACPI firmware in the SRAT, and the nodeid <-> pxm
mapping is set up at boot time. This mapping is persistent and won't change.
2. The apicid <-> nodeid mapping is set up using the info in 1. The mapping is set up at boot
time and at CPU hotadd time, and cleared at CPU hotremove time. This mapping is also
persistent.
3. The cpuid <-> apicid mapping is set up at boot time and at CPU hotadd time. cpuids are
allocated lower ids first, released at CPU hotremove time, and reused for other
hotadded CPUs. So this mapping is not persistent.
4. The cpuid <-> nodeid mapping is also set up at boot time and at CPU hotadd time, and
cleared at CPU hotremove time. As a result of 3, this mapping is not persistent.
To fix this problem, we establish the cpuid <-> nodeid mapping for all possible
cpus at boot time, and make it persistent. According to init_cpu_to_node(), the
cpuid <-> nodeid mapping is based on the apicid <-> nodeid mapping and the cpuid <-> apicid
mapping. So the key point is obtaining all cpus' apicids.
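For illustration only (the patches below actually do this via the cpuid_to_apicid[]
array of patch 2 plus an ACPI namespace walk in patch 4; the function name here is
made up), the persistent cpuid <-> nodeid mapping is conceptually just the composition
of the two persistent mappings, using the existing __apicid_to_node[] table filled
from the SRAT and numa_set_node() in arch/x86/mm/numa.c:

static void __init map_all_possible_cpus_to_nodes(void)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                int apicid = cpuid_to_apicid[cpu];      /* persistent cpuid <-> apicid */
                int node;

                if (apicid < 0)
                        continue;

                node = __apicid_to_node[apicid];        /* persistent apicid <-> nodeid, from SRAT */
                if (node != NUMA_NO_NODE)
                        numa_set_node(cpu, node);
        }
}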
apicid can be obtained by _MAT (Multiple APIC Table Entry) method or found in
MADT (Multiple APIC Description Table). So we finish the job in the following steps:
1. Enable the apic registration flow to handle both enabled and disabled cpus.
This is done by introducing an extra parameter to generic_processor_info to let the
caller control whether disabled cpus are ignored.
2. Introduce a new array storing all possible cpuid <-> apicid mappings, and modify
the way cpuid is calculated. Establish all possible cpuid <-> apicid mappings when
registering the local apic, and store them in this array.
3. Enable the _MAT and MADT related APIs to return non-present or disabled cpus' apicids.
This is also done by introducing an extra parameter to these APIs to let the caller
control whether disabled cpus are ignored.
4. Establish all possible cpuid <-> nodeid mappings.
This is done via an additional ACPI namespace walk for processors.
For previous discussion, please refer to:
https://lkml.org/lkml/2015/2/27/145
https://lkml.org/lkml/2015/3/25/989
https://lkml.org/lkml/2015/5/14/244
https://lkml.org/lkml/2015/7/7/200
https://lkml.org/lkml/2015/9/27/209
Change log v2 -> v3:
1. Online memory-less nodes at boot time to map cpus of memory-less nodes.
2. Build zonelists for memory-less nodes so that memory allocator will fall
back to proper nodes automatically.
Change log v1 -> v2:
1. Split code movement and actual changes. Add patch 1.
2. Synchronize best near online node record when node hotplug happens. In patch 2.
3. Fix some comments.
Gu Zheng (4):
x86, acpi, cpu-hotplug: Enable acpi to register all possible cpus at
boot time.
x86, acpi, cpu-hotplug: Introduce cpuid_to_apicid[] array to store
persistent cpuid <-> apicid mapping.
x86, acpi, cpu-hotplug: Enable MADT APIs to return disabled apicid.
x86, acpi, cpu-hotplug: Set persistent cpuid <-> nodeid mapping when
booting.
Tang Chen (1):
x86, memhp, numa: Online memory-less nodes at boot time.
arch/ia64/kernel/acpi.c | 2 +-
arch/x86/include/asm/mpspec.h | 1 +
arch/x86/kernel/acpi/boot.c | 8 ++-
arch/x86/kernel/apic/apic.c | 85 +++++++++++++++++++++++++----
arch/x86/mm/numa.c | 30 ++++++-----
drivers/acpi/acpi_processor.c | 5 +-
drivers/acpi/bus.c | 3 ++
drivers/acpi/processor_core.c | 122 ++++++++++++++++++++++++++++++++++--------
include/linux/acpi.h | 2 +
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 2 +-
11 files changed, 209 insertions(+), 52 deletions(-)
--
1.8.3.1
For now, x86 does not support memory-less nodes. A node without memory
will not be onlined, and the cpus on it will be mapped to other
online nodes with memory in init_cpu_to_node(). The reason for doing this
is to ensure each cpu is mapped to a node with memory, so that local
memory can be allocated for that cpu.
But we don't have to do it in this way.
In this series of patches, we are going to construct the cpu <-> node mapping
for all possible cpus at boot time, which is a 1-1 mapping. It means each
cpu will be mapped to the node it belongs to, and the mapping will never change.
If a node has only cpus but no memory, the cpus on it will be mapped to
a memory-less node. So the memory-less node should be onlined.
This patch allocates pgdats for all memory-less nodes and onlines them at
boot time, then builds zonelists for these nodes. As a result, when cpus
on these memory-less nodes try to allocate memory from the local node, the
allocation will automatically fall back to the proper zones in the zonelists.
---
arch/x86/mm/numa.c | 30 ++++++++++++++++--------------
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 2 +-
3 files changed, 18 insertions(+), 15 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index c3b3f65..3537c31 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -704,22 +704,22 @@ void __init x86_numa_init(void)
numa_init(dummy_numa_init);
}
-static __init int find_near_online_node(int node)
+static void __init init_memory_less_node(int nid)
{
- int n, val;
- int min_val = INT_MAX;
- int best_node = -1;
+ unsigned long zones_size[MAX_NR_ZONES] = {0};
+ unsigned long zholes_size[MAX_NR_ZONES] = {0};
- for_each_online_node(n) {
- val = node_distance(node, n);
+ /* Allocate and initialize node data. Memory-less node is now online.*/
+ alloc_node_data(nid);
+ free_area_init_node(nid, zones_size, 0, zholes_size);
- if (val < min_val) {
- min_val = val;
- best_node = n;
- }
- }
-
- return best_node;
+ /*
+ * Build zonelist so that when the cpus try to allocate memory on local
+ * node, which has no memory, it will fall back to the best near node.
+ * No need to rebuild zonelist for the other nodes since memory-less
+ * node has no memory. And no need to lock at boot time.
+ */
+ build_zonelists(NODE_DATA(nid));
}
/*
@@ -748,8 +748,10 @@ void __init init_cpu_to_node(void)
if (node == NUMA_NO_NODE)
continue;
+
if (!node_online(node))
- node = find_near_online_node(node);
+ init_memory_less_node(node);
+
numa_set_node(cpu, node);
}
}
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e23a9e7..9c4d4d5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -736,6 +736,7 @@ static inline bool is_dev_zone(const struct zone *zone)
extern struct mutex zonelists_mutex;
void build_all_zonelists(pg_data_t *pgdat, struct zone *zone);
+void build_zonelists(pg_data_t *pgdat);
void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx);
bool zone_watermark_ok(struct zone *z, unsigned int order,
unsigned long mark, int classzone_idx, int alloc_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 17a3c66..761f302 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4144,7 +4144,7 @@ static void set_zonelist_order(void)
current_zonelist_order = user_zonelist_order;
}
-static void build_zonelists(pg_data_t *pgdat)
+void build_zonelists(pg_data_t *pgdat)
{
int j, node, load;
enum zone_type i;
--
1.8.3.1
From: Gu Zheng <[email protected]>
[Problem]
The cpuid <-> nodeid mapping is first established at boot time, and the workqueue
subsystem caches it in wq_numa_possible_cpumask in wq_numa_init().
When a node goes online/offline, the cpuid <-> nodeid mapping is established/destroyed,
which means the cpuid <-> nodeid mapping will change if node hotplug happens. But the
workqueue code does not update wq_numa_possible_cpumask.
So here is the problem:
Assume we have the following cpuid <-> nodeid mapping in the beginning:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 2 | 30-44, 90-104
node 3 | 45-59, 105-119
and we hot-remove node2 and node3, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
and we hot-add node4 and node5, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 4 | 30-59
node 5 | 90-119
But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and so on for the
other moved CPUs.
When an unbound worker pool is initialized, if its cpumask is contained inside a
NUMA node, pool->node is set to that node, and memory used by this pool will also
be allocated on that node.
static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
{
        ...
        /* if cpumask is contained inside a NUMA node, we belong to that node */
        if (wq_numa_enabled) {
                for_each_node(node) {
                        if (cpumask_subset(pool->attrs->cpumask,
                                           wq_numa_possible_cpumask[node])) {
                                pool->node = node;
                                break;
                        }
                }
        }
Since wq_numa_possible_cpumask is not updated, pool->node could be set to an offline
node, which will lead to memory allocation failures:
SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
cache: kmalloc-192, object size: 192, buffer size: 192, default order: 1, min order: 0
node 0: slabs: 6172, objs: 259224, free: 245741
node 1: slabs: 3261, objs: 136962, free: 127656
It happens here:
create_worker(struct worker_pool *pool)
        |--> worker = alloc_worker(pool->node);

static struct worker *alloc_worker(int node)
{
        struct worker *worker;

        worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, node); --> Here, using the wrong node.

        ......

        return worker;
}
[Solution]
There are four mappings in the kernel:
1. nodeid (logical node id) <-> pxm
2. apicid (physical cpu id) <-> nodeid
3. cpuid (logical cpu id) <-> apicid
4. cpuid (logical cpu id) <-> nodeid
1. pxm (proximity domain) is provided by the ACPI firmware in the SRAT, and the nodeid <-> pxm
mapping is set up at boot time. This mapping is persistent and won't change.
2. The apicid <-> nodeid mapping is set up using the info in 1. The mapping is set up at boot
time and at CPU hotadd time, and cleared at CPU hotremove time. This mapping is also
persistent.
3. The cpuid <-> apicid mapping is set up at boot time and at CPU hotadd time. cpuids are
allocated lower ids first, released at CPU hotremove time, and reused for other
hotadded CPUs. So this mapping is not persistent.
4. The cpuid <-> nodeid mapping is also set up at boot time and at CPU hotadd time, and
cleared at CPU hotremove time. As a result of 3, this mapping is not persistent.
To fix this problem, we establish the cpuid <-> nodeid mapping for all possible
cpus at boot time, and make it persistent. According to init_cpu_to_node(), the
cpuid <-> nodeid mapping is based on the apicid <-> nodeid mapping and the cpuid <-> apicid
mapping. So the key point is obtaining all cpus' apicids.
apicid can be obtained by _MAT (Multiple APIC Table Entry) method or found in
MADT (Multiple APIC Description Table). So we finish the job in the following steps:
1. Enable the apic registration flow to handle both enabled and disabled cpus.
This is done by introducing an extra parameter to generic_processor_info to let the
caller control whether disabled cpus are ignored.
2. Introduce a new array storing all possible cpuid <-> apicid mappings, and modify
the way cpuid is calculated. Establish all possible cpuid <-> apicid mappings when
registering the local apic, and store them in this array.
3. Enable the _MAT and MADT related APIs to return non-present or disabled cpus' apicids.
This is also done by introducing an extra parameter to these APIs to let the caller
control whether disabled cpus are ignored.
4. Establish all possible cpuid <-> nodeid mappings.
This is done via an additional ACPI namespace walk for processors.
This patch finishes step 1.
Signed-off-by: Gu Zheng <[email protected]>
Signed-off-by: Tang Chen <[email protected]>
---
arch/x86/kernel/apic/apic.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 2f69e3b..29915cf 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1988,7 +1988,7 @@ void disconnect_bsp_APIC(int virt_wire_setup)
apic_write(APIC_LVT1, value);
}
-int generic_processor_info(int apicid, int version)
+static int __generic_processor_info(int apicid, int version, bool enabled)
{
int cpu, max = nr_cpu_ids;
bool boot_cpu_detected = physid_isset(boot_cpu_physical_apicid,
@@ -2022,7 +2022,8 @@ int generic_processor_info(int apicid, int version)
" Processor %d/0x%x ignored.\n",
thiscpu, apicid);
- disabled_cpus++;
+ if (enabled)
+ disabled_cpus++;
return -ENODEV;
}
@@ -2039,7 +2040,8 @@ int generic_processor_info(int apicid, int version)
" reached. Keeping one slot for boot cpu."
" Processor %d/0x%x ignored.\n", max, thiscpu, apicid);
- disabled_cpus++;
+ if (enabled)
+ disabled_cpus++;
return -ENODEV;
}
@@ -2050,11 +2052,14 @@ int generic_processor_info(int apicid, int version)
"ACPI: NR_CPUS/possible_cpus limit of %i reached."
" Processor %d/0x%x ignored.\n", max, thiscpu, apicid);
- disabled_cpus++;
+ if (enabled)
+ disabled_cpus++;
return -EINVAL;
}
- num_processors++;
+ if (enabled)
+ num_processors++;
+
if (apicid == boot_cpu_physical_apicid) {
/*
* x86_bios_cpu_apicid is required to have processors listed
@@ -2082,7 +2087,8 @@ int generic_processor_info(int apicid, int version)
apic_version[boot_cpu_physical_apicid], cpu, version);
}
- physid_set(apicid, phys_cpu_present_map);
+ if (enabled)
+ physid_set(apicid, phys_cpu_present_map);
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
@@ -2095,11 +2101,17 @@ int generic_processor_info(int apicid, int version)
apic->x86_32_early_logical_apicid(cpu);
#endif
set_cpu_possible(cpu, true);
- set_cpu_present(cpu, true);
+ if (enabled)
+ set_cpu_present(cpu, true);
return cpu;
}
+int generic_processor_info(int apicid, int version)
+{
+ return __generic_processor_info(apicid, version, true);
+}
+
int hard_smp_processor_id(void)
{
return read_apic_id();
--
1.8.3.1
From: Gu Zheng <[email protected]>
This patch finishes step 2.
In this patch, we introduce a new static array named cpuid_to_apicid[],
which is large enough to store info for all possible cpus.
We then modify the way cpuid is calculated. Currently, generic_processor_info()
simply finds the next unused cpuid, which is also why the cpuid <-> nodeid
mapping changes with node hotplug.
After this patch, we find the next unused cpuid, map it to an apicid,
and store the mapping in cpuid_to_apicid[], so that cpuid <-> apicid
mapping will be persistent.
And finally we will use this array to make cpuid <-> nodeid persistent.
The cpuid <-> apicid mapping is established at local apic registration time,
but non-present or disabled cpus are ignored.
In this patch, we establish all possible cpuid <-> apicid mapping when
registering local apic.
Signed-off-by: Gu Zheng <[email protected]>
Signed-off-by: Tang Chen <[email protected]>
---
arch/x86/include/asm/mpspec.h | 1 +
arch/x86/kernel/acpi/boot.c | 6 ++---
arch/x86/kernel/apic/apic.c | 61 ++++++++++++++++++++++++++++++++++++++++---
3 files changed, 61 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/mpspec.h b/arch/x86/include/asm/mpspec.h
index b07233b..db902d8 100644
--- a/arch/x86/include/asm/mpspec.h
+++ b/arch/x86/include/asm/mpspec.h
@@ -86,6 +86,7 @@ static inline void early_reserve_e820_mpc_new(void) { }
#endif
int generic_processor_info(int apicid, int version);
+int __generic_processor_info(int apicid, int version, bool enabled);
#define PHYSID_ARRAY_SIZE BITS_TO_LONGS(MAX_LOCAL_APIC)
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index e759076..0ce06ee 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -174,15 +174,13 @@ static int acpi_register_lapic(int id, u8 enabled)
return -EINVAL;
}
- if (!enabled) {
+ if (!enabled)
++disabled_cpus;
- return -EINVAL;
- }
if (boot_cpu_physical_apicid != -1U)
ver = apic_version[boot_cpu_physical_apicid];
- return generic_processor_info(id, ver);
+ return __generic_processor_info(id, ver, enabled);
}
static int __init
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 29915cf..d0a2f32 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1988,7 +1988,53 @@ void disconnect_bsp_APIC(int virt_wire_setup)
apic_write(APIC_LVT1, value);
}
-static int __generic_processor_info(int apicid, int version, bool enabled)
+/*
+ * The number of allocated logical CPU IDs. Since logical CPU IDs are allocated
+ * contiguously, it equals to current allocated max logical CPU ID plus 1.
+ * All allocated CPU IDs should be in [0, nr_logical_cpuids), so the maximum of
+ * nr_logical_cpuids is nr_cpu_ids.
+ *
+ * NOTE: Reserve 0 for BSP.
+ */
+static int nr_logical_cpuids = 1;
+
+/*
+ * Used to store mapping between logical CPU IDs and APIC IDs.
+ */
+static int cpuid_to_apicid[] = {
+ [0 ... NR_CPUS - 1] = -1,
+};
+
+/*
+ * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
+ * and cpuid_to_apicid[] synchronized.
+ */
+static int allocate_logical_cpuid(int apicid)
+{
+ int i;
+
+ /*
+ * cpuid <-> apicid mapping is persistent, so when a cpu is up,
+ * check if the kernel has allocated a cpuid for it.
+ */
+ for (i = 0; i < nr_logical_cpuids; i++) {
+ if (cpuid_to_apicid[i] == apicid)
+ return i;
+ }
+
+ /* Allocate a new cpuid. */
+ if (nr_logical_cpuids >= nr_cpu_ids) {
+ WARN_ONCE(1, "Only %d processors supported."
+ "Processor %d/0x%x and the rest are ignored.\n",
+ nr_cpu_ids - 1, nr_logical_cpuids, apicid);
+ return -1;
+ }
+
+ cpuid_to_apicid[nr_logical_cpuids] = apicid;
+ return nr_logical_cpuids++;
+}
+
+int __generic_processor_info(int apicid, int version, bool enabled)
{
int cpu, max = nr_cpu_ids;
bool boot_cpu_detected = physid_isset(boot_cpu_physical_apicid,
@@ -2069,8 +2115,17 @@ static int __generic_processor_info(int apicid, int version, bool enabled)
* for BSP.
*/
cpu = 0;
- } else
- cpu = cpumask_next_zero(-1, cpu_present_mask);
+
+ /* Logical cpuid 0 is reserved for BSP. */
+ cpuid_to_apicid[0] = apicid;
+ } else {
+ cpu = allocate_logical_cpuid(apicid);
+ if (cpu < 0) {
+ if (enabled)
+ disabled_cpus++;
+ return -EINVAL;
+ }
+ }
/*
* Validate version
--
1.8.3.1
From: Gu Zheng <[email protected]>
This patch finishes step 3.
There are four mappings in the kernel:
1. nodeid (logical node id) <-> pxm
2. apicid (physical cpu id) <-> nodeid
3. cpuid (logical cpu id) <-> apicid
4. cpuid (logical cpu id) <-> nodeid
1. pxm (proximity domain) is provided by the ACPI firmware in the SRAT, and the nodeid <-> pxm
mapping is set up at boot time. This mapping is persistent and won't change.
2. The apicid <-> nodeid mapping is set up using the info in 1. The mapping is set up at boot
time and at CPU hotadd time, and cleared at CPU hotremove time. This mapping is also
persistent.
3. The cpuid <-> apicid mapping is set up at boot time and at CPU hotadd time. cpuids are
allocated lower ids first, released at CPU hotremove time, and reused for other
hotadded CPUs. So this mapping is not persistent.
4. The cpuid <-> nodeid mapping is also set up at boot time and at CPU hotadd time, and
cleared at CPU hotremove time. As a result of 3, this mapping is not persistent.
So, in order to set up a persistent cpuid <-> nodeid mapping for all possible CPUs,
we should:
1. Set up the cpuid <-> apicid mapping for all possible CPUs, which has been done in steps 1 and 2.
2. Set up the cpuid <-> nodeid mapping for all possible CPUs. But before that, we should
obtain all apicids from the MADT.
All processors' apicids can be obtained by the _MAT method or from the MADT in ACPI.
The current code ignores disabled processors and returns -ENODEV.
After this patch, a new parameter is added to the MADT APIs so that the caller
is able to control whether disabled processors are ignored.
Signed-off-by: Gu Zheng <[email protected]>
Signed-off-by: Tang Chen <[email protected]>
---
drivers/acpi/acpi_processor.c | 5 +++-
drivers/acpi/processor_core.c | 57 +++++++++++++++++++++++++++----------------
2 files changed, 40 insertions(+), 22 deletions(-)
diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 6979186..d30111a 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -300,8 +300,11 @@ static int acpi_processor_get_info(struct acpi_device *device)
* Extra Processor objects may be enumerated on MP systems with
* less than the max # of CPUs. They should be ignored _iff
* they are physically not present.
+ *
+ * NOTE: Even if the processor has a cpuid, it may not be present because
+ * cpuid <-> apicid mapping is persistent now.
*/
- if (invalid_logical_cpuid(pr->id)) {
+ if (invalid_logical_cpuid(pr->id) || !cpu_present(pr->id)) {
int ret = acpi_processor_hotadd_init(pr);
if (ret)
return ret;
diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c
index 33a38d6..824b98b 100644
--- a/drivers/acpi/processor_core.c
+++ b/drivers/acpi/processor_core.c
@@ -32,12 +32,12 @@ static struct acpi_table_madt *get_madt_table(void)
}
static int map_lapic_id(struct acpi_subtable_header *entry,
- u32 acpi_id, phys_cpuid_t *apic_id)
+ u32 acpi_id, phys_cpuid_t *apic_id, bool ignore_disabled)
{
struct acpi_madt_local_apic *lapic =
container_of(entry, struct acpi_madt_local_apic, header);
- if (!(lapic->lapic_flags & ACPI_MADT_ENABLED))
+ if (ignore_disabled && !(lapic->lapic_flags & ACPI_MADT_ENABLED))
return -ENODEV;
if (lapic->processor_id != acpi_id)
@@ -48,12 +48,13 @@ static int map_lapic_id(struct acpi_subtable_header *entry,
}
static int map_x2apic_id(struct acpi_subtable_header *entry,
- int device_declaration, u32 acpi_id, phys_cpuid_t *apic_id)
+ int device_declaration, u32 acpi_id, phys_cpuid_t *apic_id,
+ bool ignore_disabled)
{
struct acpi_madt_local_x2apic *apic =
container_of(entry, struct acpi_madt_local_x2apic, header);
- if (!(apic->lapic_flags & ACPI_MADT_ENABLED))
+ if (ignore_disabled && !(apic->lapic_flags & ACPI_MADT_ENABLED))
return -ENODEV;
if (device_declaration && (apic->uid == acpi_id)) {
@@ -65,12 +66,13 @@ static int map_x2apic_id(struct acpi_subtable_header *entry,
}
static int map_lsapic_id(struct acpi_subtable_header *entry,
- int device_declaration, u32 acpi_id, phys_cpuid_t *apic_id)
+ int device_declaration, u32 acpi_id, phys_cpuid_t *apic_id,
+ bool ignore_disabled)
{
struct acpi_madt_local_sapic *lsapic =
container_of(entry, struct acpi_madt_local_sapic, header);
- if (!(lsapic->lapic_flags & ACPI_MADT_ENABLED))
+ if (ignore_disabled && !(lsapic->lapic_flags & ACPI_MADT_ENABLED))
return -ENODEV;
if (device_declaration) {
@@ -87,12 +89,13 @@ static int map_lsapic_id(struct acpi_subtable_header *entry,
* Retrieve the ARM CPU physical identifier (MPIDR)
*/
static int map_gicc_mpidr(struct acpi_subtable_header *entry,
- int device_declaration, u32 acpi_id, phys_cpuid_t *mpidr)
+ int device_declaration, u32 acpi_id, phys_cpuid_t *mpidr,
+ bool ignore_disabled)
{
struct acpi_madt_generic_interrupt *gicc =
container_of(entry, struct acpi_madt_generic_interrupt, header);
- if (!(gicc->flags & ACPI_MADT_ENABLED))
+ if (ignore_disabled && !(gicc->flags & ACPI_MADT_ENABLED))
return -ENODEV;
/* device_declaration means Device object in DSDT, in the
@@ -108,7 +111,7 @@ static int map_gicc_mpidr(struct acpi_subtable_header *entry,
return -EINVAL;
}
-static phys_cpuid_t map_madt_entry(int type, u32 acpi_id)
+static phys_cpuid_t map_madt_entry(int type, u32 acpi_id, bool ignore_disabled)
{
unsigned long madt_end, entry;
phys_cpuid_t phys_id = PHYS_CPUID_INVALID; /* CPU hardware ID */
@@ -128,16 +131,20 @@ static phys_cpuid_t map_madt_entry(int type, u32 acpi_id)
struct acpi_subtable_header *header =
(struct acpi_subtable_header *)entry;
if (header->type == ACPI_MADT_TYPE_LOCAL_APIC) {
- if (!map_lapic_id(header, acpi_id, &phys_id))
+ if (!map_lapic_id(header, acpi_id, &phys_id,
+ ignore_disabled))
break;
} else if (header->type == ACPI_MADT_TYPE_LOCAL_X2APIC) {
- if (!map_x2apic_id(header, type, acpi_id, &phys_id))
+ if (!map_x2apic_id(header, type, acpi_id, &phys_id,
+ ignore_disabled))
break;
} else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC) {
- if (!map_lsapic_id(header, type, acpi_id, &phys_id))
+ if (!map_lsapic_id(header, type, acpi_id, &phys_id,
+ ignore_disabled))
break;
} else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) {
- if (!map_gicc_mpidr(header, type, acpi_id, &phys_id))
+ if (!map_gicc_mpidr(header, type, acpi_id, &phys_id,
+ ignore_disabled))
break;
}
entry += header->length;
@@ -145,7 +152,8 @@ static phys_cpuid_t map_madt_entry(int type, u32 acpi_id)
return phys_id;
}
-static phys_cpuid_t map_mat_entry(acpi_handle handle, int type, u32 acpi_id)
+static phys_cpuid_t map_mat_entry(acpi_handle handle, int type, u32 acpi_id,
+ bool ignore_disabled)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
@@ -166,30 +174,37 @@ static phys_cpuid_t map_mat_entry(acpi_handle handle, int type, u32 acpi_id)
header = (struct acpi_subtable_header *)obj->buffer.pointer;
if (header->type == ACPI_MADT_TYPE_LOCAL_APIC)
- map_lapic_id(header, acpi_id, &phys_id);
+ map_lapic_id(header, acpi_id, &phys_id, ignore_disabled);
else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC)
- map_lsapic_id(header, type, acpi_id, &phys_id);
+ map_lsapic_id(header, type, acpi_id, &phys_id, ignore_disabled);
else if (header->type == ACPI_MADT_TYPE_LOCAL_X2APIC)
- map_x2apic_id(header, type, acpi_id, &phys_id);
+ map_x2apic_id(header, type, acpi_id, &phys_id, ignore_disabled);
else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT)
- map_gicc_mpidr(header, type, acpi_id, &phys_id);
+ map_gicc_mpidr(header, type, acpi_id, &phys_id,
+ ignore_disabled);
exit:
kfree(buffer.pointer);
return phys_id;
}
-phys_cpuid_t acpi_get_phys_id(acpi_handle handle, int type, u32 acpi_id)
+static phys_cpuid_t __acpi_get_phys_id(acpi_handle handle, int type,
+ u32 acpi_id, bool ignore_disabled)
{
phys_cpuid_t phys_id;
- phys_id = map_mat_entry(handle, type, acpi_id);
+ phys_id = map_mat_entry(handle, type, acpi_id, ignore_disabled);
if (invalid_phys_cpuid(phys_id))
- phys_id = map_madt_entry(type, acpi_id);
+ phys_id = map_madt_entry(type, acpi_id, ignore_disabled);
return phys_id;
}
+phys_cpuid_t acpi_get_phys_id(acpi_handle handle, int type, u32 acpi_id)
+{
+ return __acpi_get_phys_id(handle, type, acpi_id, true);
+}
+
int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id)
{
#ifdef CONFIG_SMP
--
1.8.3.1
From: Gu Zheng <[email protected]>
This patch finishes step 4.
This patch sets the persistent cpuid <-> nodeid mapping for all enabled/disabled
processors at boot time via an additional ACPI namespace walk for processors.
Signed-off-by: Gu Zheng <[email protected]>
Signed-off-by: Tang Chen <[email protected]>
---
arch/ia64/kernel/acpi.c | 2 +-
arch/x86/kernel/acpi/boot.c | 2 +-
drivers/acpi/bus.c | 3 ++
drivers/acpi/processor_core.c | 65 +++++++++++++++++++++++++++++++++++++++++++
include/linux/acpi.h | 2 ++
5 files changed, 72 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
index b1698bc..7db5563 100644
--- a/arch/ia64/kernel/acpi.c
+++ b/arch/ia64/kernel/acpi.c
@@ -796,7 +796,7 @@ int acpi_isa_irq_to_gsi(unsigned isa_irq, u32 *gsi)
* ACPI based hotplug CPU support
*/
#ifdef CONFIG_ACPI_HOTPLUG_CPU
-static int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
+int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
{
#ifdef CONFIG_ACPI_NUMA
/*
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 0ce06ee..7d45261 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -696,7 +696,7 @@ static void __init acpi_set_irq_model_ioapic(void)
#ifdef CONFIG_ACPI_HOTPLUG_CPU
#include <acpi/processor.h>
-static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
+void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
{
#ifdef CONFIG_ACPI_NUMA
int nid;
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index a212cef..d59e1cd 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -1094,6 +1094,9 @@ static int __init acpi_init(void)
acpi_debugfs_init();
acpi_sleep_proc_init();
acpi_wakeup_device_init();
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+ acpi_set_processor_mapping();
+#endif
return 0;
}
diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c
index 824b98b..45580ff 100644
--- a/drivers/acpi/processor_core.c
+++ b/drivers/acpi/processor_core.c
@@ -261,6 +261,71 @@ int acpi_get_cpuid(acpi_handle handle, int type, u32 acpi_id)
}
EXPORT_SYMBOL_GPL(acpi_get_cpuid);
+#ifdef CONFIG_ACPI_HOTPLUG_CPU
+static bool map_processor(acpi_handle handle, int *phys_id, int *cpuid)
+{
+ int type;
+ u32 acpi_id;
+ acpi_status status;
+ acpi_object_type acpi_type;
+ unsigned long long tmp;
+ union acpi_object object = { 0 };
+ struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
+
+ status = acpi_get_type(handle, &acpi_type);
+ if (ACPI_FAILURE(status))
+ return false;
+
+ switch (acpi_type) {
+ case ACPI_TYPE_PROCESSOR:
+ status = acpi_evaluate_object(handle, NULL, NULL, &buffer);
+ if (ACPI_FAILURE(status))
+ return false;
+ acpi_id = object.processor.proc_id;
+ break;
+ case ACPI_TYPE_DEVICE:
+ status = acpi_evaluate_integer(handle, "_UID", NULL, &tmp);
+ if (ACPI_FAILURE(status))
+ return false;
+ acpi_id = tmp;
+ break;
+ default:
+ return false;
+ }
+
+ type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
+
+ *phys_id = __acpi_get_phys_id(handle, type, acpi_id, false);
+ *cpuid = acpi_map_cpuid(*phys_id, acpi_id);
+ if (*cpuid == -1)
+ return false;
+
+ return true;
+}
+
+static acpi_status __init
+set_processor_node_mapping(acpi_handle handle, u32 lvl, void *context,
+ void **rv)
+{
+ u32 apic_id;
+ int cpu_id;
+
+ if (!map_processor(handle, &apic_id, &cpu_id))
+ return AE_ERROR;
+
+ acpi_map_cpu2node(handle, cpu_id, apic_id);
+ return AE_OK;
+}
+
+void __init acpi_set_processor_mapping(void)
+{
+ /* Set persistent cpu <-> node mapping for all processors. */
+ acpi_walk_namespace(ACPI_TYPE_PROCESSOR, ACPI_ROOT_OBJECT,
+ ACPI_UINT32_MAX, set_processor_node_mapping,
+ NULL, NULL, NULL);
+}
+#endif
+
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
static int get_ioapic_id(struct acpi_subtable_header *entry, u32 gsi_base,
u64 *phys_addr, int *ioapic_id)
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 0548339..c07f541 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -194,6 +194,8 @@ static inline bool invalid_phys_cpuid(phys_cpuid_t phys_id)
/* Arch dependent functions for cpu hotplug support */
int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, int *pcpu);
int acpi_unmap_cpu(int cpu);
+void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid);
+void __init acpi_set_processor_mapping(void);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC
--
1.8.3.1
Hi,
Sorry for the long delay with this patch set.
Unfortunately, it is still not fully tested for the memory-less
node case.
Please help to review it first. I will do the tests soon.
Thanks.
Hello,
On Thu, Nov 19, 2015 at 12:22:10PM +0800, Tang Chen wrote:
> [Solution]
>
> There are four mappings in the kernel:
> 1. nodeid (logical node id) <-> pxm
> 2. apicid (physical cpu id) <-> nodeid
> 3. cpuid (logical cpu id) <-> apicid
> 4. cpuid (logical cpu id) <-> nodeid
>
> 1. pxm (proximity domain) is provided by ACPI firmware in SRAT, and nodeid <-> pxm
> mapping is setup at boot time. This mapping is persistent, won't change.
>
> 2. apicid <-> nodeid mapping is setup using info in 1. The mapping is setup at boot
> time and CPU hotadd time, and cleared at CPU hotremove time. This mapping is also
> persistent.
>
> 3. cpuid <-> apicid mapping is setup at boot time and CPU hotadd time. cpuid is
> allocated, lower ids first, and released at CPU hotremove time, reused for other
> hotadded CPUs. So this mapping is not persistent.
>
> 4. cpuid <-> nodeid mapping is also setup at boot time and CPU hotadd time, and
> cleared at CPU hotremove time. As a result of 3, this mapping is not persistent.
>
> To fix this problem, we establish cpuid <-> nodeid mapping for all the possible
> cpus at boot time, and make it persistent. And according to init_cpu_to_node(),
> cpuid <-> nodeid mapping is based on apicid <-> nodeid mapping and cpuid <-> apicid
> mapping. So the key point is obtaining all cpus' apicid.
I don't know much about acpi so can't actually review the patches but
the overall approach looks good to me.
Thanks.
--
tejun
On 11/24/2015 06:04 AM, Tejun Heo wrote:
> Hello,
>
> On Thu, Nov 19, 2015 at 12:22:10PM +0800, Tang Chen wrote:
> I don't know much about acpi so can't actually review the patches but
> the overall approach looks good to me.
Thank you, TJ. Will test it soon.
>
> Thanks.
>
Hi Tang,
I applied your patches to linux-4.4.0-rc4 and tried to boot up
the system with the mem= boot option, but the system does not boot up.
Unfortunately, no boot messages were shown, so I cannot find out
the reason.
The reason for using the mem= boot option is to limit memory and
create a memoryless node on purpose, since your patches support
memoryless nodes.
Here is an example method to create a memoryless node on purpose.
My box has the following SRAT:
SRAT: Node 0 PXM 0 [mem 0x00000000-0x5fffffff]
SRAT: Node 0 PXM 0 [mem 0x100000000-0x109fffffff]
SRAT: Node 1 PXM 1 [mem 0x10a0000000-0x209fffffff]
SRAT: Node 2 PXM 2 [mem 0x20a0000000-0x309fffffff]
SRAT: Node 3 PXM 3 [mem 0x30a0000000-0x409fffffff]
So when booting up the system with mem=0x20a0000000, the memory of
Node 2 and Node 3 is ignored and those nodes become memoryless nodes.
Thanks,
Yasuaki Ishimatsu
Hi Ishimatsu,
On 12/10/2015 05:10 AM, Yasuaki Ishimatsu wrote:
> Hi Tang,
>
> I applied your patches into linux-4.4.0-rc4 and tried to boot up
> the system with mem= boot option, but system does not boot up.
> Unfortunately boot messages were not shown. So I cannot find out
> the reason.
Thank you for testing. And yes, it fails very early in boot.
I'm working on it.
>
> The reason of using the mem= boot option is to limit memory and
> create memoryless node on purpose since your patches support
> memoryless node.
>
> Here is an example method to create memoryless node on purpose.
>
> My box has the following SRAT:
>
> SRAT: Node 0 PXM 0 [mem 0x00000000-0x5fffffff]
> SRAT: Node 0 PXM 0 [mem 0x100000000-0x109fffffff]
> SRAT: Node 1 PXM 1 [mem 0x10a0000000-0x209fffffff]
> SRAT: Node 2 PXM 2 [mem 0x20a0000000-0x309fffffff]
> SRAT: Node 3 PXM 3 [mem 0x30a0000000-0x409fffffff]
>
> So when booting up the system with mem=0x20a0000000, Memory of
> Node 2 and 3 are ignored and the Nodes become memoryless node.
OK, I'm using an initrd overwrite, which can also fake a memory-less node.
Thanks.
>
> Thanks,
> Yasuaki Ishimatsu