[reposting because the malformed cc list confused my email client]
On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
> In register_mem_sect_under_node() the system_state value is checked to
> detect whether the call is made during boot time or during a hot-plug
> operation. Unfortunately, that check is wrong on some architectures, and
> may lead to sections being registered under multiple nodes if a node's
> memory ranges are interleaved.
Why is this check arch specific?
> This can be seen on PowerPC LPAR after multiple memory hot-plug and
> hot-unplug operations are done. At the next reboot, the nodes' memory
> ranges can be interleaved
What is the exact memory layout?
> and since the call to link_mem_sections() is made in
> topology_init() while the system is in the SYSTEM_SCHEDULING state, the
> node's id is not checked, and the sections are registered multiple times.
So a single memory section/memblock belongs to two numa nodes?
> In
> that case, the system is able to boot but later hot-plug operation may lead
> to this panic because the node's links are correctly broken:
Correctly broken? Could you provide more details on the inconsistency
please?
Which physical memory range are you trying to add here and what is the
node affinity?
> ------------[ cut here ]------------
> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> Oops: Exception in kernel mode, sig: 5 [#1]
> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
> REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
> MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
> CFAR: c000000000846d20 IRQMASK: 0
> GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
> GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
> GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
> GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
> GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
> GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
> GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
> GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
> NIP [c000000000403f34] add_memory_resource+0x244/0x340
> LR [c000000000403f2c] add_memory_resource+0x23c/0x340
> Call Trace:
> [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
> [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
> [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
> [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
> [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
> [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
> [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
> [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
> [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
> [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
> [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
> [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
> [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
> Instruction dump:
> 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
> 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
> ---[ end trace 562fd6c109cd0fb2 ]---
The BUG_ON on failure is absolutely horrendous. There must be a better
way to handle a failure like that. The failure means that
sysfs_create_link_nowarn has failed. Please describe why that is the
case.
> This patch addresses the root cause by not relying on the system_state
> value to detect whether the call is due to a hot-plug operation or not. An
> additional parameter is added to link_mem_sections() to tell the context of
> the call and this parameter is propagated to register_mem_sect_under_node()
> through the walk_memory_blocks() call.
This looks like a hack to me and it deserves a better explanation. The
existing code is a hack on its own and it is inconsistent with other
boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
places IIRC. Would it help to use the same here as well? Maybe we want to
wrap that inside a helper (early_memory_init()) and use it at all
places.
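Something along these lines would be a minimal sketch (the helper name is
just the suggestion above, nothing more):

static inline bool early_memory_init(void)
{
	/* covers both SYSTEM_BOOTING and SYSTEM_SCHEDULING */
	return system_state < SYSTEM_RUNNING;
}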
> Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
> Signed-off-by: Laurent Dufour <[email protected]>
> Cc: [email protected]
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Andrew Morton <[email protected]>
> ---
> drivers/base/node.c | 20 +++++++++++++++-----
> include/linux/node.h | 6 +++---
> mm/memory_hotplug.c | 3 ++-
> 3 files changed, 20 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 508b80f6329b..27f828eeb531 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
> }
>
> /* register memory section under specified node if it spans that node */
> +struct rmsun_args {
> + int nid;
> + bool hotadd;
> +};
> static int register_mem_sect_under_node(struct memory_block *mem_blk,
> - void *arg)
> + void *args)
> {
> unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
> unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
> unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
> - int ret, nid = *(int *)arg;
> + int ret, nid = ((struct rmsun_args *)args)->nid;
> unsigned long pfn;
> + bool hotadd = ((struct rmsun_args *)args)->hotadd;
>
> for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
> int page_nid;
> @@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
> * case, during hotplug we know that all pages in the memory
> * block belong to the same node.
> */
> - if (system_state == SYSTEM_BOOTING) {
> + if (!hotadd) {
> page_nid = get_nid_for_pfn(pfn);
> if (page_nid < 0)
> continue;
> @@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
> kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
> }
>
> -int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
> +int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
> + bool hotadd)
> {
> + struct rmsun_args args;
> +
> + args.nid = nid;
> + args.hotadd = hotadd;
> return walk_memory_blocks(PFN_PHYS(start_pfn),
> - PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
> + PFN_PHYS(end_pfn - start_pfn), (void *)&args,
> register_mem_sect_under_node);
> }
>
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 4866f32a02d8..6df9a4548650 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
>
> #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
> extern int link_mem_sections(int nid, unsigned long start_pfn,
> - unsigned long end_pfn);
> + unsigned long end_pfn, bool hotadd);
> #else
> static inline int link_mem_sections(int nid, unsigned long start_pfn,
> - unsigned long end_pfn)
> + unsigned long end_pfn, bool hotadd)
> {
> return 0;
> }
> @@ -128,7 +128,7 @@ static inline int register_one_node(int nid)
> if (error)
> return error;
> /* link memory sections under this node */
> - error = link_mem_sections(nid, start_pfn, end_pfn);
> + error = link_mem_sections(nid, start_pfn, end_pfn, false);
> }
>
> return error;
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e9d5ab5d3ca0..28028db8364a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1080,7 +1080,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
> }
>
> /* link memory sections under this node.*/
> - ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
> + ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
> + true);
> BUG_ON(ret);
>
> /* create new memmap entry */
> --
> 2.28.0
--
Michal Hocko
SUSE Labs
On 09/09/2020 at 09:40, Michal Hocko wrote:
> [reposting because the malformed cc list confused my email client]
>
> On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
>> In register_mem_sect_under_node() the system_state value is checked to
>> detect whether the call is made during boot time or during a hot-plug
>> operation. Unfortunately, that check is wrong on some architectures, and
>> may lead to sections being registered under multiple nodes if a node's
>> memory ranges are interleaved.
>
> Why is this check arch specific?
I was wrong, the check is not arch specific.
>> This can be seen on PowerPC LPAR after multiple memory hot-plug and
>> hot-unplug operations are done. At the next reboot, the nodes' memory
>> ranges can be interleaved
>
> What is the exact memory layout?
For instance:
[ 0.000000] Early memory node ranges
[ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff]
[ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff]
[ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff]
[ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff]
[ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
>
>> and since the call to link_mem_sections() is made in
>> topology_init() while the system is in the SYSTEM_SCHEDULING state, the
>> node's id is not checked, and the sections are registered multiple times.
>
> So a single memory section/memblock belongs to two numa nodes?
If the node id is not checked in register_mem_sect_under_node(), yes, that's the case.
>
>> In
>> that case, the system is able to boot but later hot-plug operation may lead
>> to this panic because the node's links are correctly broken:
>
> Correctly broken? Could you provide more details on the inconsistency
> please?
laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
total 0
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
-rw-r--r-- 1 root root 65536 Aug 24 05:27 online
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
-r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
drwxr-xr-x 2 root root 0 Aug 24 05:27 power
-r--r--r-- 1 root root 65536 Aug 24 05:27 removable
-rw-r--r-- 1 root root 65536 Aug 24 05:27 state
lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
-rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
-r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
>
> Which physical memory range are you trying to add here and what is the
> node affinity?
None is added; the root cause of the issue happens at boot time.
>
>> ------------[ cut here ]------------
>> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
>> Oops: Exception in kernel mode, sig: 5 [#1]
>> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
>> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
>> NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
>> REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
>> MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
>> CFAR: c000000000846d20 IRQMASK: 0
>> GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
>> GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
>> GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
>> GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
>> GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
>> GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
>> GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
>> GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
>> NIP [c000000000403f34] add_memory_resource+0x244/0x340
>> LR [c000000000403f2c] add_memory_resource+0x23c/0x340
>> Call Trace:
>> [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
>> [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
>> [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
>> [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
>> [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
>> [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
>> [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
>> [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
>> [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
>> [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
>> [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
>> [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
>> [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
>> Instruction dump:
>> 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
>> 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
>> ---[ end trace 562fd6c109cd0fb2 ]---
>
> The BUG_ON on failure is absolutely horrendous. There must be a better
> way to handle a failure like that. The failure means that
> sysfs_create_link_nowarn has failed. Please describe why that is the
> case.
>
>> This patch addresses the root cause by not relying on the system_state
>> value to detect whether the call is due to a hot-plug operation or not. An
>> additional parameter is added to link_mem_sections() to tell the context of
>> the call and this parameter is propagated to register_mem_sect_under_node()
>> through the walk_memory_blocks() call.
>
> This looks like a hack to me and it deserves a better explanation. The
> existing code is a hack on its own and it is inconsistent with other
> boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
> places IIRC. Would it help to use the same here as well? Maybe we want to
> wrap that inside a helper (early_memory_init()) and use it at all
> places.
I agree, checking the system_state value looks like a hack.
I'll follow David's proposal and introduce an enum detailing whether the node
id check has to be done or not.
The wrapper option seems good to me too, but it doesn't highlight why the
early processing differs from the hot-plug one. An enum explicitly saying
that the node id check is skipped seems better to me.
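As a rough sketch of what I have in mind (the names below are illustrative
only, not the final ones):

enum meminit_context {
	MEMINIT_EARLY,		/* boot time, node id must be checked per pfn */
	MEMINIT_HOTPLUG,	/* hot-plug, the whole block is in one node */
};

and link_mem_sections() would take such a context instead of a bool.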
>> Fixes: 4fbce633910e ("mm/memory_hotplug.c: make register_mem_sect_under_node() a callback of walk_memory_range()")
>> Signed-off-by: Laurent Dufour <[email protected]>
>> Cc: [email protected]
>> Cc: Greg Kroah-Hartman <[email protected]>
>> Cc: "Rafael J. Wysocki" <[email protected]>
>> Cc: Andrew Morton <[email protected]>
>> ---
>> drivers/base/node.c | 20 +++++++++++++++-----
>> include/linux/node.h | 6 +++---
>> mm/memory_hotplug.c | 3 ++-
>> 3 files changed, 20 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/base/node.c b/drivers/base/node.c
>> index 508b80f6329b..27f828eeb531 100644
>> --- a/drivers/base/node.c
>> +++ b/drivers/base/node.c
>> @@ -762,14 +762,19 @@ static int __ref get_nid_for_pfn(unsigned long pfn)
>> }
>>
>> /* register memory section under specified node if it spans that node */
>> +struct rmsun_args {
>> + int nid;
>> + bool hotadd;
>> +};
>> static int register_mem_sect_under_node(struct memory_block *mem_blk,
>> - void *arg)
>> + void *args)
>> {
>> unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
>> unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
>> unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
>> - int ret, nid = *(int *)arg;
>> + int ret, nid = ((struct rmsun_args *)args)->nid;
>> unsigned long pfn;
>> + bool hotadd = ((struct rmsun_args *)args)->hotadd;
>>
>> for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
>> int page_nid;
>> @@ -789,7 +794,7 @@ static int register_mem_sect_under_node(struct memory_block *mem_blk,
>> * case, during hotplug we know that all pages in the memory
>> * block belong to the same node.
>> */
>> - if (system_state == SYSTEM_BOOTING) {
>> + if (!hotadd) {
>> page_nid = get_nid_for_pfn(pfn);
>> if (page_nid < 0)
>> continue;
>> @@ -832,10 +837,15 @@ void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
>> kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
>> }
>>
>> -int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
>> +int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
>> + bool hotadd)
>> {
>> + struct rmsun_args args;
>> +
>> + args.nid = nid;
>> + args.hotadd = hotadd;
>> return walk_memory_blocks(PFN_PHYS(start_pfn),
>> - PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
>> + PFN_PHYS(end_pfn - start_pfn), (void *)&args,
>> register_mem_sect_under_node);
>> }
>>
>> diff --git a/include/linux/node.h b/include/linux/node.h
>> index 4866f32a02d8..6df9a4548650 100644
>> --- a/include/linux/node.h
>> +++ b/include/linux/node.h
>> @@ -100,10 +100,10 @@ typedef void (*node_registration_func_t)(struct node *);
>>
>> #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
>> extern int link_mem_sections(int nid, unsigned long start_pfn,
>> - unsigned long end_pfn);
>> + unsigned long end_pfn, bool hotadd);
>> #else
>> static inline int link_mem_sections(int nid, unsigned long start_pfn,
>> - unsigned long end_pfn)
>> + unsigned long end_pfn, bool hotadd)
>> {
>> return 0;
>> }
>> @@ -128,7 +128,7 @@ static inline int register_one_node(int nid)
>> if (error)
>> return error;
>> /* link memory sections under this node */
>> - error = link_mem_sections(nid, start_pfn, end_pfn);
>> + error = link_mem_sections(nid, start_pfn, end_pfn, false);
>> }
>>
>> return error;
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index e9d5ab5d3ca0..28028db8364a 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1080,7 +1080,8 @@ int __ref add_memory_resource(int nid, struct resource *res)
>> }
>>
>> /* link memory sections under this node.*/
>> - ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
>> + ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
>> + true);
>> BUG_ON(ret);
>>
>> /* create new memmap entry */
>> --
>> 2.28.0
On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
> On 09/09/2020 at 09:40, Michal Hocko wrote:
> > [reposting because the malformed cc list confused my email client]
> >
> > On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
> > > In register_mem_sect_under_node() the system_state value is checked to
> > > detect whether the call is made during boot time or during a hot-plug
> > > operation. Unfortunately, that check is wrong on some architectures, and
> > > may lead to sections being registered under multiple nodes if a node's
> > > memory ranges are interleaved.
> >
> > Why is this check arch specific?
>
> I was wrong, the check is not arch specific.
>
> > > This can be seen on PowerPC LPAR after multiple memory hot-plug and
> > > hot-unplug operations are done. At the next reboot, the nodes' memory
> > > ranges can be interleaved
> >
> > What is the exact memory layout?
>
> For instance:
> [ 0.000000] Early memory node ranges
> [ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff]
> [ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff]
> [ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff]
> [ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff]
> [ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
Include this in the changelog.
> > > and since the call to link_mem_sections() is made in
> > > topology_init() while the system is in the SYSTEM_SCHEDULING state, the
> > > node's id is not checked, and the sections are registered multiple times.
> >
> > So a single memory section/memblock belongs to two numa nodes?
>
> If the node id is not checked in register_mem_sect_under_node(), yes, that's the case.
I do not follow. register_mem_sect_under_node is about the user interface.
This is independent of the low level memory representation - aka memory
section. I do not think we can handle a section in multiple zones/nodes.
Memblock in multiple zones/nodes is a different story and an interleaving
physical memory layout can indeed lead to it. This is something that we
do not allow for runtime hotplug but have to somehow live with - at
least not crash.
> > > In
> > > that case, the system is able to boot but later hot-plug operation may lead
> > > to this panic because the node's links are correctly broken:
> >
> > Correctly broken? Could you provide more details on the inconsistency
> > please?
>
> laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
> total 0
> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
> -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
> drwxr-xr-x 2 root root 0 Aug 24 05:27 power
> -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
> -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
> lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
> -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
> -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
OK, so there are two nodes referenced here. Not terrible from the user
point of view. Such a memory block will refuse to offline or online
IIRC.
> > Which physical memory range are you trying to add here and what is the
> > node affinity?
>
> None is added; the root cause of the issue happens at boot time.
Let me clarify my question. The crash has clearly happened during the
hotplug add_memory_resource - which is clearly not a boot time path.
I was asking for more information about why this has failed. It is quite
clear that the sysfs machinery has failed and that led to BUG_ON but we are
missing information on why. What was the physical memory range to be
added and why did sysfs fail?
> > > ------------[ cut here ]------------
> > > kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> > > Oops: Exception in kernel mode, sig: 5 [#1]
> > > LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> > > Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
> > > CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> > > NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
> > > REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
> > > MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
> > > CFAR: c000000000846d20 IRQMASK: 0
> > > GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
> > > GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
> > > GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
> > > GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
> > > GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
> > > GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
> > > GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
> > > GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
> > > NIP [c000000000403f34] add_memory_resource+0x244/0x340
> > > LR [c000000000403f2c] add_memory_resource+0x23c/0x340
> > > Call Trace:
> > > [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
> > > [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
> > > [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
> > > [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
> > > [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
> > > [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
> > > [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
> > > [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
> > > [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
> > > [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
> > > [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
> > > [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
> > > [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
> > > Instruction dump:
> > > 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
> > > 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
> > > ---[ end trace 562fd6c109cd0fb2 ]---
> >
> > The BUG_ON on failure is absolutely horrendous. There must be a better
> > way to handle a failure like that. The failure means that
> > sysfs_create_link_nowarn has failed. Please describe why that is the
> > case.
> >
> > > This patch addresses the root cause by not relying on the system_state
> > > value to detect whether the call is due to a hot-plug operation or not. An
> > > additional parameter is added to link_mem_sections() to tell the context of
> > > the call and this parameter is propagated to register_mem_sect_under_node()
> > > through the walk_memory_blocks() call.
> >
> > This looks like a hack to me and it deserves a better explanation. The
> > existing code is a hack on its own and it is inconsistent with other
> > boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
> > places IIRC. Would it help to use the same here as well? Maybe we want to
> > wrap that inside a helper (early_memory_init()) and use it at all
> > places.
>
> I agree, checking the system_state value looks like a hack.
> I'll follow David's proposal and introduce an enum detailing whether the
> node id check has to be done or not.
I am not sure an enum is going to make the existing situation less
messy. Sure we somehow have to distinguish boot init and runtime hotplug
because they have different constraints. I am arguing that a) we should
have a consistent way to check for those and b) we shouldn't blow up
easily just because sysfs infrastructure has failed to initialize.
--
Michal Hocko
SUSE Labs
On 09/09/2020 at 11:09, Michal Hocko wrote:
> On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
>> On 09/09/2020 at 09:40, Michal Hocko wrote:
>>> [reposting because the malformed cc list confused my email client]
>>>
>>> On Tue 08-09-20 19:08:35, Laurent Dufour wrote:
>>>> In register_mem_sect_under_node() the system_state value is checked to
>>>> detect whether the call is made during boot time or during a hot-plug
>>>> operation. Unfortunately, that check is wrong on some architectures, and
>>>> may lead to sections being registered under multiple nodes if a node's
>>>> memory ranges are interleaved.
>>>
>>> Why is this check arch specific?
>>
>> I was wrong, the check is not arch specific.
>>
>>>> This can be seen on PowerPC LPAR after multiple memory hot-plug and
>>>> hot-unplug operations are done. At the next reboot, the nodes' memory
>>>> ranges can be interleaved
>>>
>>> What is the exact memory layout?
>>
>> For instance:
>> [ 0.000000] Early memory node ranges
>> [ 0.000000] node 1: [mem 0x0000000000000000-0x000000011fffffff]
>> [ 0.000000] node 2: [mem 0x0000000120000000-0x000000014fffffff]
>> [ 0.000000] node 1: [mem 0x0000000150000000-0x00000001ffffffff]
>> [ 0.000000] node 0: [mem 0x0000000200000000-0x000000048fffffff]
>> [ 0.000000] node 2: [mem 0x0000000490000000-0x00000007ffffffff]
>
> Include this into the changelog.
>
>>>> and since the call to link_mem_sections() is made in
>>>> topology_init() while the system is in the SYSTEM_SCHEDULING state, the
>>>> node's id is not checked, and the sections are registered multiple times.
>>>
>>> So a single memory section/memblock belongs to two numa nodes?
>>
>> If the node id is not checked in register_mem_sect_under_node(), yes, that's the case.
>
> I do not follow. register_mem_sect_under_node is about the user interface.
> This is independent of the low level memory representation - aka memory
> section. I do not think we can handle a section in multiple zones/nodes.
> Memblock in multiple zones/nodes is a different story and an interleaving
> physical memory layout can indeed lead to it. This is something that we
> do not allow for runtime hotplug but have to somehow live with - at
> least not crash.
register_mem_sect_under_node() is called at boot time and when memory is hot
added. In the latter case the assumption is made that all the pages of the
added block are in the same node, and that's a valid assumption. However at
boot time the call is made using the node's whole range, lowest address to
highest address for that node. When the ranges are interleaved, this means
the interleaved sections are registered for each node, which is not correct.
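To illustrate with the layout above (a simplified sketch of the
register_one_node() path, not the exact code), node 1 spans
[0x0..0x1ffffffff], so at boot time we roughly do:

	/* walk node 1's whole pfn span, which also covers node 2's
	 * [0x120000000..0x14fffffff] range */
	error = link_mem_sections(1, node_start_pfn(1), node_end_pfn(1));

and without the per-pfn get_nid_for_pfn() check the memory blocks of
that node 2 range get registered under node 1 too.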
>>>> In
>>>> that case, the system is able to boot but later hot-plug operation may lead
>>>> to this panic because the node's links are correctly broken:
>>>
>>> Correctly broken? Could you provide more details on the inconsistency
>>> please?
>>
>> laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
>> total 0
>> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
>> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
>> drwxr-xr-x 2 root root 0 Aug 24 05:27 power
>> -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
>> lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
>> -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
>> -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
>
> OK, so there are two nodes referenced here. Not terrible from the user
> point of view. Such a memory block will refuse to offline or online
> IIRC.
No, the memory block is still owned by one node, only the sysfs representation
is wrong. So the memory block can be hot unplugged, but only one node's link
will be cleaned, and a '/sys/devices/system/node#/memory21' link will remain;
that will be detected later when that memory block is hot plugged again.
>
>>> Which physical memory range are you trying to add here and what is the
>>> node affinity?
>>
>> None is added; the root cause of the issue happens at boot time.
>
> Let me clarify my question. The crash has clearly happened during the
> hotplug add_memory_resource - which is clearly not a boot time path.
> I was asking for more information about why this has failed. It is quite
> clear that the sysfs machinery has failed and that led to BUG_ON but we are
> missing information on why. What was the physical memory range to be
> added and why did sysfs fail?
The BUG_ON is detecting a bad state generated earlier, at boot time because
register_mem_sect_under_node() didn't check for the block's node id.
>
>>>> ------------[ cut here ]------------
>>>> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
>>>> Oops: Exception in kernel mode, sig: 5 [#1]
>>>> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>>>> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
>>>> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
>>>> NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
>>>> REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
>>>> MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
>>>> CFAR: c000000000846d20 IRQMASK: 0
>>>> GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
>>>> GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
>>>> GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
>>>> GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
>>>> GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
>>>> GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
>>>> GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
>>>> GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
>>>> NIP [c000000000403f34] add_memory_resource+0x244/0x340
>>>> LR [c000000000403f2c] add_memory_resource+0x23c/0x340
>>>> Call Trace:
>>>> [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
>>>> [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
>>>> [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
>>>> [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
>>>> [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
>>>> [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
>>>> [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
>>>> [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
>>>> [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
>>>> [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
>>>> [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
>>>> [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
>>>> [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
>>>> Instruction dump:
>>>> 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
>>>> 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
>>>> ---[ end trace 562fd6c109cd0fb2 ]---
>>>
>>> The BUG_ON on failure is absolutely horrendous. There must be a better
>>> way to handle a failure like that. The failure means that
>>> sysfs_create_link_nowarn has failed. Please describe why that is the
>>> case.
>>>
>>>> This patch addresses the root cause by not relying on the system_state
>>>> value to detect whether the call is due to a hot-plug operation or not. An
>>>> additional parameter is added to link_mem_sections() to tell the context of
>>>> the call and this parameter is propagated to register_mem_sect_under_node()
>>>> through the walk_memory_blocks() call.
>>>
>>> This looks like a hack to me and it deserves a better explanation. The
>>> existing code is a hack on its own and it is inconsistent with other
>>> boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
>>> places IIRC. Would it help to use the same here as well? Maybe we want to
>>> wrap that inside a helper (early_memory_init()) and use it at all
>>> places.
>>
>> I agree, checking the system_state value looks like a hack.
>> I'll follow David's proposal and introduce an enum detailing whether the
>> node id check has to be done or not.
>
> I am not sure an enum is going to make the existing situation less
> messy. Sure we somehow have to distinguish boot init and runtime hotplug
> because they have different constraints. I am arguing that a) we should
> have a consistent way to check for those and b) we shouldn't blow up
> easily just because sysfs infrastructure has failed to initialize.
For point a, using the enum makes it possible to know in
register_mem_sect_under_node() whether the link operation is due to a
hotplug operation or done at boot time.
For point b, one option would be to ignore the link error when the link
already exists, but that BUG_ON() had the benefit of highlighting the
root issue.
Cheers,
Laurent.
>> I am not sure an enum is going to make the existing situation less
>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>> because they have different constraints. I am arguing that a) we should
>> have a consistent way to check for those and b) we shouldn't blow up
>> easily just because sysfs infrastructure has failed to initialize.
>
> For point a, using the enum makes it possible to know in register_mem_sect_under_node()
> whether the link operation is due to a hotplug operation or done at boot time.
>
> For point b, one option would be to ignore the link error when the link
> already exists, but that BUG_ON() had the benefit of highlighting the root issue.
>
WARN_ON_ONCE() would be preferred - not crash the system but still
highlight the issue.
> Cheers,
> Laurent.
>
--
Thanks,
David / dhildenb
On 09/09/2020 at 11:24, David Hildenbrand wrote:
>>> I am not sure an enum is going to make the existing situation less
>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>>> because they have different constraints. I am arguing that a) we should
>>> have a consistent way to check for those and b) we shouldn't blow up
>>> easily just because sysfs infrastructure has failed to initialize.
>>
>> For point a, using the enum makes it possible to know in register_mem_sect_under_node()
>> whether the link operation is due to a hotplug operation or done at boot time.
>>
>> For point b, one option would be to ignore the link error when the link
>> already exists, but that BUG_ON() had the benefit of highlighting the root issue.
>>
>
> WARN_ON_ONCE() would be preferred - not crash the system but still
> highlight the issue.
Indeed, calling sysfs_create_link() instead of sysfs_create_link_nowarn() in
register_mem_sect_under_node() and ignoring the EEXIST return value should do
the job. I'll do that in a separate patch.
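Something along these lines (a sketch only, based on the current
register_mem_sect_under_node() call site):

	ret = sysfs_create_link(&node_devices[nid]->dev.kobj,
				&mem_blk->dev.kobj,
				kobject_name(&mem_blk->dev.kobj));
	if (ret && ret != -EEXIST)
		return ret;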
On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> On 09/09/2020 at 11:09, Michal Hocko wrote:
> > On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
> > > On 09/09/2020 at 09:40, Michal Hocko wrote:
[...]
> > > > > In
> > > > > that case, the system is able to boot but later hot-plug operation may lead
> > > > > to this panic because the node's links are correctly broken:
> > > >
> > > > Correctly broken? Could you provide more details on the inconsistency
> > > > please?
> > >
> > > laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
> > > total 0
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
> > > drwxr-xr-x 2 root root 0 Aug 24 05:27 power
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
> > > lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
> > > -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
> > > -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
> >
> > OK, so there are two nodes referenced here. Not terrible from the user
> > point of view. Such a memory block will refuse to offline or online
> > IIRC.
>
> No, the memory block is still owned by one node, only the sysfs
> representation is wrong. So the memory block can be hot unplugged, but only
> one node's link will be cleaned, and a '/sys/devices/system/node#/memory21'
> link will remain; that will be detected later when that memory block is
> hot plugged again.
OK, so you need to hotremove first and hotadd again to trigger the
problem. It is not like you would be hot adding something new. This is
useful information to have in the changelog.
> > > > Which physical memory range are you trying to add here and what is the
> > > > node affinity?
> > >
> > > None is added; the root cause of the issue happens at boot time.
> >
> > Let me clarify my question. The crash has clearly happened during the
> > hotplug add_memory_resource - which is clearly not a boot time path.
> > I was asking for more information about why this has failed. It is quite
> > clear that the sysfs machinery has failed and that led to BUG_ON but we are
> > missing information on why. What was the physical memory range to be
> > added and why did sysfs fail?
>
> The BUG_ON is detecting a bad state generated earlier, at boot time because
> register_mem_sect_under_node() didn't check for the block's node id.
>
> > > > > ------------[ cut here ]------------
> > > > > kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
> > > > > Oops: Exception in kernel mode, sig: 5 [#1]
> > > > > LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
> > > > > Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
> > > > > CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
> > > > > NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
> > > > > REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
> > > > > MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
> > > > > CFAR: c000000000846d20 IRQMASK: 0
> > > > > GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
> > > > > GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
> > > > > GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
> > > > > GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
> > > > > GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
> > > > > GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
> > > > > GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
> > > > > GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
> > > > > NIP [c000000000403f34] add_memory_resource+0x244/0x340
> > > > > LR [c000000000403f2c] add_memory_resource+0x23c/0x340
> > > > > Call Trace:
> > > > > [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
> > > > > [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
> > > > > [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
> > > > > [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
> > > > > [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
> > > > > [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
> > > > > [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
> > > > > [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
> > > > > [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
> > > > > [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
> > > > > [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
> > > > > [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
> > > > > [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
> > > > > Instruction dump:
> > > > > 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
> > > > > 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
> > > > > ---[ end trace 562fd6c109cd0fb2 ]---
> > > >
> > > > The BUG_ON on failure is absolutely horrendous. There must be a better
> > > > way to handle a failure like that. The failure means that
> > > > sysfs_create_link_nowarn has failed. Please describe why that is the
> > > > case.
> > > >
> > > > > This patch addresses the root cause by not relying on the system_state
> > > > > value to detect whether the call is due to a hot-plug operation or not. An
> > > > > additional parameter is added to link_mem_sections() to tell the context of
> > > > > the call and this parameter is propagated to register_mem_sect_under_node()
> > > > > through the walk_memory_blocks() call.
> > > >
> > > > This looks like a hack to me and it deserves a better explanation. The
> > > > existing code is a hack on its own and it is inconsistent with other
> > > > boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
> > > > places IIRC. Would it help to use the same here as well? Maybe we want to
> > > > wrap that inside a helper (early_memory_init()) and use it at all
> > > > places.
> > >
> > > I agree, checking the system_state value looks like a hack.
> > > I'll follow David's proposal and introduce an enum detailing whether the
> > > node id check has to be done or not.
> >
> > I am not sure an enum is going to make the existing situation less
> > messy. Sure we somehow have to distinguish boot init and runtime hotplug
> > because they have different constraints. I am arguing that a) we should
> > have a consistent way to check for those and b) we shouldn't blow up
> > easily just because sysfs infrastructure has failed to initialize.
>
> For point a, using the enum makes it possible to know in
> register_mem_sect_under_node() whether the link operation is due to a hotplug
> operation or done at boot time.
Yes, but let me repeat. We have a mess here and different paths check
for the very same condition in different ways. We need to unify those.
> For point b, one option would be to ignore the link error when the
> link already exists, but that BUG_ON() had the benefit of highlighting the
> root issue.
Yes, BUG_ON is obviously an over-reaction. The system is not in a state
where it has to die anytime soon.
--
Michal Hocko
SUSE Labs
On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
> >> I am not sure an enum is going to make the existing situation less
> >> messy. Sure we somehow have to distinguish boot init and runtime hotplug
> >> because they have different constraints. I am arguing that a) we should
> >> have a consistent way to check for those and b) we shouldn't blow up
> >> easily just because sysfs infrastructure has failed to initialize.
> >
> > For point a, using the enum makes it possible to know in register_mem_sect_under_node()
> > whether the link operation is due to a hotplug operation or done at boot time.
> >
> > For point b, one option would be to ignore the link error when the link
> > already exists, but that BUG_ON() had the benefit of highlighting the root issue.
> >
>
> WARN_ON_ONCE() would be preferred - not crash the system but still
> highlight the issue.
Many many systems now run with 'panic on warn' enabled, so that wouldn't
change much :(
If you can warn, you can properly just print an error message and
recover from the problem.
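Something like this instead of the BUG_ON() (just a sketch, the error
label being whatever add_memory_resource() already unwinds with):

	/* link memory sections under this node. */
	ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
	if (ret) {
		pr_err("failed to link memory sections for node %d: %d\n",
		       nid, ret);
		goto error;
	}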
thanks,
greg k-h
On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
>>>> I am not sure an enum is going to make the existing situation less
>>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>>>> because they have different constraints. I am arguing that a) we should
>>>> have a consistent way to check for those and b) we shouldn't blow up
>>>> easily just because sysfs infrastructure has failed to initialize.
>>>
>>> For point a, using the enum makes it possible to know in register_mem_sect_under_node()
>>> whether the link operation is due to a hotplug operation or done at boot time.
>>>
>>> For point b, one option would be to ignore the link error when the link
>>> already exists, but that BUG_ON() had the benefit of highlighting the root issue.
>>>
>>
>> WARN_ON_ONCE() would be preferred - not crash the system but still
>> highlight the issue.
>
> Many many systems now run with 'panic on warn' enabled, so that wouldn't
> change much :(
>
> If you can warn, you can properly just print an error message and
> recover from the problem.
Maybe VM_WARN_ON_ONCE() then to detect this during testing?
(we basically rendered WARN_ON_ONCE() useless with 'panic on warn' getting
used in production - it behaves like BUG_ON and BUG_ON is frowned upon)
--
Thanks,
David / dhildenb
On Wed, Sep 09, 2020 at 02:32:57PM +0200, David Hildenbrand wrote:
> On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> > On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
> >>>> I am not sure an enum is going to make the existing situation less
> >>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
> >>>> because they have different constraints. I am arguing that a) we should
> >>>> have a consistent way to check for those and b) we shouldn't blow up
> >>>> easily just because sysfs infrastructure has failed to initialize.
> >>>
> >>> For point a, using the enum makes it possible to know in register_mem_sect_under_node()
> >>> whether the link operation is due to a hotplug operation or done at boot time.
> >>>
> >>> For point b, one option would be to ignore the link error when the link
> >>> already exists, but that BUG_ON() had the benefit of highlighting the root issue.
> >>>
> >>
> >> WARN_ON_ONCE() would be preferred - not crash the system but still
> >> highlight the issue.
> >
> > Many many systems now run with 'panic on warn' enabled, so that wouldn't
> > change much :(
> >
> > If you can warn, you can properly just print an error message and
> > recover from the problem.
>
> Maybe VM_WARN_ON_ONCE() then to detect this during testing?
If you all use that, sure.
> (we basically rendered WARN_ON_ONCE() useless with 'panic on warn' getting
> used in production - it behaves like BUG_ON and BUG_ON is frowned upon)
Yes we have, but in the end it's good: those things should be fixed and
not accessible by anything a user can trigger.
thanks,
greg k-h
On 09/09/2020 at 12:59, Michal Hocko wrote:
> On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
>> On 09/09/2020 at 11:09, Michal Hocko wrote:
>>> On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
>>>> On 09/09/2020 at 09:40, Michal Hocko wrote:
> [...]
>>>>>> In
>>>>>> that case, the system is able to boot but later hot-plug operation may lead
>>>>>> to this panic because the node's links are correctly broken:
>>>>>
>>>>> Correctly broken? Could you provide more details on the inconsistency
>>>>> please?
>>>>
>>>> laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
>>>> total 0
>>>> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node1 -> ../../node/node1
>>>> lrwxrwxrwx 1 root root 0 Aug 24 05:27 node2 -> ../../node/node2
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
>>>> drwxr-xr-x 2 root root 0 Aug 24 05:27 power
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
>>>> lrwxrwxrwx 1 root root 0 Aug 24 05:25 subsystem -> ../../../../bus/memory
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
>>>
>>> OK, so there are two nodes referenced here. Not terrible from the user
>>> point of view. Such a memory block will refuse to offline or online
>>> IIRC.
>>
>> No, the memory block is still owned by one node, only the sysfs
>> representation is wrong. So the memory block can be hot unplugged, but only
>> one node's link will be cleaned, and a '/sys/devices/system/node#/memory21'
>> link will remain; that will be detected later when that memory block is
>> hot plugged again.
>
> OK, so you need to hotremove first and hotadd again to trigger the
> problem. It is not like you would be hot adding something new. This is
> useful information to have in the changelog.
>
>>>>> Which physical memory range are you trying to add here and what is the
>>>>> node affinity?
>>>>
>>>> None is added; the root cause of the issue happens at boot time.
>>>
>>> Let me clarify my question. The crash has clearly happened during the
>>> hotplug add_memory_resource - which is clearly not a boot time path.
>>> I was asking for more information about why this has failed. It is quite
>>> clear that the sysfs machinery has failed and that led to BUG_ON but we are
>>> missing information on why. What was the physical memory range to be
>>> added and why did sysfs fail?
>>
>> The BUG_ON is detecting a bad state generated earlier, at boot time because
>> register_mem_sect_under_node() didn't check for the block's node id.
>>
>>>>>> ------------[ cut here ]------------
>>>>>> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
>>>>>> Oops: Exception in kernel mode, sig: 5 [#1]
>>>>>> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>>>>>> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
>>>>>> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
>>>>>> NIP: c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
>>>>>> REGS: c0000004876e3660 TRAP: 0700 Not tainted (5.9.0-rc1+)
>>>>>> MSR: 800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24000448 XER: 20040000
>>>>>> CFAR: c000000000846d20 IRQMASK: 0
>>>>>> GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
>>>>>> GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
>>>>>> GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
>>>>>> GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
>>>>>> GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
>>>>>> GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
>>>>>> GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
>>>>>> GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
>>>>>> NIP [c000000000403f34] add_memory_resource+0x244/0x340
>>>>>> LR [c000000000403f2c] add_memory_resource+0x23c/0x340
>>>>>> Call Trace:
>>>>>> [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
>>>>>> [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
>>>>>> [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
>>>>>> [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
>>>>>> [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
>>>>>> [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
>>>>>> [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
>>>>>> [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
>>>>>> [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
>>>>>> [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
>>>>>> [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
>>>>>> [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
>>>>>> [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
>>>>>> Instruction dump:
>>>>>> 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
>>>>>> 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
>>>>>> ---[ end trace 562fd6c109cd0fb2 ]---
>>>>>
>>>>> The BUG_ON on failure is absolutely horrendous. There must be a better
>>>>> way to handle a failure like that. The failure means that
>>>>> sysfs_create_link_nowarn has failed. Please describe why that is the
>>>>> case.
>>>>>
>>>>>> This patch addresses the root cause by not relying on the system_state
>>>>>> value to detect whether the call is due to a hot-plug operation or not. An
>>>>>> additional parameter is added to link_mem_sections() to tell the context of
>>>>>> the call and this parameter is propagated to register_mem_sect_under_node()
>>>>>> through the walk_memory_blocks() call.
>>>>>
>>>>> This looks like a hack to me and it deserves a better explanation. The
>>>>> existing code is a hack on its own and it is inconsistent with other
>>>>> boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
>>>>> places IIRC. Would it help to use the same here as well? Maybe we want to
>>>>> wrap that inside a helper (early_memory_init()) and use it at all
>>>>> places.
>>>>
>>>> I agree, checking the system_state value looks like a hack.
>>>> I'll follow David's proposal and introduce an enum detailing whether the
>>>> node id check has to be done or not.
>>>
>>> I am not sure an enum is going to make the existing situation less
>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>>> because they have different constraints. I am arguing that a) we should
>>> have a consistent way to check for those and b) we shouldn't blow up
>>> easily just because sysfs infrastructure has failed to initialize.
>>
>> For point a, using the enum makes it possible to know in
>> register_mem_sect_under_node() whether the link operation is due to a hotplug
>> operation or done at boot time.
>
> Yes, but let me repeat. We have a mess here and different paths check
> for the very same condition in different ways. We need to unify those.
What do you suggest to unify these checks (an MP_* enum as suggested by
David, or something else)?
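For concreteness, here is a minimal sketch of how such a context could be
threaded from link_mem_sections() down to register_mem_sect_under_node()
through walk_memory_blocks()'s opaque argument. The struct node_walk_arg
helper and the memplug_context name are illustrative assumptions borrowed
from the discussion later in this thread, not the final patch:

/*
 * Illustrative sketch only: pass an explicit context (and the nid)
 * through walk_memory_blocks() instead of guessing from system_state.
 */
struct node_walk_arg {
	int nid;
	enum memplug_context context;	/* assumed name, discussed below */
};

int link_mem_sections(int nid, unsigned long start_pfn,
		      unsigned long end_pfn, enum memplug_context context)
{
	struct node_walk_arg nwa = { .nid = nid, .context = context };

	/* Forward the explicit context to the per-block callback. */
	return walk_memory_blocks(PFN_PHYS(start_pfn),
				  PFN_PHYS(end_pfn - start_pfn), &nwa,
				  register_mem_sect_under_node);
}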
>
>> For point b, one option would be to ignore the link error when the link
>> already exists, but the BUG_ON() had the benefit of highlighting the
>> root issue.
>
> Yes BUG_ON is obviously an over-reaction. The system is not in a state
> to die anytime soon.
>
On Wed 09-09-20 14:32:57, David Hildenbrand wrote:
> On 09.09.20 14:30, Greg Kroah-Hartman wrote:
> > On Wed, Sep 09, 2020 at 11:24:24AM +0200, David Hildenbrand wrote:
> >>>> I am not sure an enum is going to make the existing situation less
> >>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
> >>>> because they have different constraints. I am arguing that a) we should
> >>>> have a consistent way to check for those and b) we shouldn't blow up
> >>>> easily just because sysfs infrastructure has failed to initialize.
> >>>
> >>> For point a, using the enum lets register_mem_sect_under_node() know
> >>> whether the link operation is due to a hotplug operation or was done at boot time.
> >>>
> >>> For point b, one option would be to ignore the link error when the link
> >>> already exists, but the BUG_ON() had the benefit of highlighting the root issue.
> >>>
> >>
> >> WARN_ON_ONCE() would be preferred - not crash the system but still
> >> highlight the issue.
> >
> > Many many systems now run with 'panic on warn' enabled, so that wouldn't
> > change much :(
> >
> > If you can warn, you can properly just print an error message and
> > recover from the problem.
>
> Maybe VM_WARN_ON_ONCE() then to detect this during testing?
>
> (we basically rendered WARN_ON_ONCE() useless with 'panic on warn' getting
> used in production - behaves like BUG_ON and BUG_ON is frowned upon)
VM_WARN* is not that much different from panic on warn. Still, one can
argue that many workloads enable it just because. And I would disagree
that we should care much about those, because these are debugging
features and everybody has to accept the consequences.
On the other hand, the question is whether WARN buys us much. What is
the advantage over a simple pr_err? We will get a backtrace -
interesting, but not really that useful, because there are only a few
code paths this can trigger from. A register dump? Not really useful
here. A taint flag? Probably useful, because follow-up problems might
give us a hint that they are related. People tend to pay more attention
to a WARN splat than to a single-line error - well, not really a strong
reason, I would say.
So while I wouldn't argue against WARN* in general (just because
somebody might be setting the system to panic), I would also consider
how useful the splat actually is.
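To make that concrete, a hedged sketch of the print-and-recover variant,
assuming the failing call is the sysfs_create_link_nowarn() in
register_mem_sect_under_node() and that an already-existing link may be
tolerated as discussed above (the message text is illustrative):

	int ret;

	ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
				       &mem_blk->dev.kobj,
				       kobject_name(&mem_blk->dev.kobj));
	if (ret && ret != -EEXIST) {
		/* Report and propagate instead of BUG_ON()/WARN_ON(). */
		pr_err("node%d: failed to link memory block %s: %d\n",
		       nid, kobject_name(&mem_blk->dev.kobj), ret);
		return ret;
	}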
--
Michal Hocko
SUSE Labs
On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> On 09/09/2020 at 12:59, Michal Hocko wrote:
> > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
[...]
> > > For point a, using the enum lets register_mem_sect_under_node() know
> > > whether the link operation is due to a hotplug operation or was done
> > > at boot time.
> >
> > Yes, but let me repeat. We have a mess here and different paths check
> > for the very same condition in different ways. We need to unify those.
>
> What do you suggest to unify these checks (an MP_* enum as suggested
> by David, or something else)?
We do have system_state checks spread across different places. I would use
this one and wrap it behind a helper. Or have I missed any reason why
that wouldn't work for this case?
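Something like the following minimal helper, assuming the
(system_state < SYSTEM_RUNNING) test referenced earlier in the thread;
the name early_memory_init() is only illustrative:

/* Illustrative helper wrapping the scattered system_state checks. */
static inline bool early_memory_init(void)
{
	return system_state < SYSTEM_RUNNING;
}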
--
Michal Hocko
SUSE Labs
On 10/09/2020 at 09:23, Michal Hocko wrote:
> On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
>> On 09/09/2020 at 12:59, Michal Hocko wrote:
>>> On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> [...]
>>>> For point a, using the enum lets register_mem_sect_under_node() know
>>>> whether the link operation is due to a hotplug operation or was done
>>>> at boot time.
>>>
>>> Yes, but let me repeat. We have a mess here and different paths check
>>> for the very same condition in different ways. We need to unify those.
>>
>> What do you suggest to unify these checks (an MP_* enum as suggested
>> by David, or something else)?
>
> We do have system_state checks spread across different places. I would use
> this one and wrap it behind a helper. Or have I missed any reason why
> that wouldn't work for this case?
That would not work in that case because memory can be hot-added at the
SYSTEM_SCHEDULING system state, and the regular memory is also registered
at that system state. So the system state is not enough to discriminate
between the two.
I think I'll go with the option suggested by David, replacing the enum
memmap_context with a new enum memplug_context and passing that context to
register_mem_sect_under_node() so that the function will know whether the
node id should be checked or not.
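A hedged sketch of what that could look like; the enum values and the two
helpers, do_link_block_to_node() and block_has_pfn_on_node(), are
hypothetical stand-ins for the existing sysfs-link and pfn-to-nid logic,
not the final upstream code:

enum memplug_context {
	MEMPLUG_EARLY,		/* boot-time registration (topology_init()) */
	MEMPLUG_HOTPLUG,	/* runtime hot-add (e.g. DLPAR, ACPI) */
};

static int register_mem_sect_under_node(struct memory_block *mem_blk,
					void *arg)
{
	struct node_walk_arg *nwa = arg;	/* nid + context, see the earlier sketch */

	/*
	 * For hot-added memory the caller's nid is authoritative and
	 * the struct pages are not initialized yet, so link right away.
	 */
	if (nwa->context == MEMPLUG_HOTPLUG)
		return do_link_block_to_node(nwa->nid, mem_blk);

	/*
	 * At boot time the node's memory ranges may be interleaved, so
	 * only link the block if one of its pfns really belongs to this
	 * node.
	 */
	if (block_has_pfn_on_node(mem_blk, nwa->nid))
		return do_link_block_to_node(nwa->nid, mem_blk);

	return 0;
}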
Cheers,
Laurent.
On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
> On 10/09/2020 at 09:23, Michal Hocko wrote:
> > On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
> > > On 09/09/2020 at 12:59, Michal Hocko wrote:
> > > > On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
> > [...]
> > > > > For point a, using the enum lets register_mem_sect_under_node() know
> > > > > whether the link operation is due to a hotplug operation or was done
> > > > > at boot time.
> > > >
> > > > Yes, but let me repeat. We have a mess here and different paths check
> > > > for the very same condition in different ways. We need to unify those.
> > >
> > > What do you suggest to unify these checks (an MP_* enum as suggested
> > > by David, or something else)?
> >
> > We do have system_state checks spread across different places. I would use
> > this one and wrap it behind a helper. Or have I missed any reason why
> > that wouldn't work for this case?
>
> That would not work in that case because memory can be hot-added at the
> SYSTEM_SCHEDULING system state, and the regular memory is also registered
> at that system state. So the system state is not enough to discriminate
> between the two.
If that is really the case, all other places need a fix as well.
Btw. could you be more specific about memory hotplug during early boot?
How does that happen? I am only aware of https://lkml.kernel.org/r/[email protected]
and that doesn't happen as early as SYSTEM_SCHEDULING.
--
Michal Hocko
SUSE Labs
On 10/09/2020 at 13:12, Michal Hocko wrote:
> On Thu 10-09-20 09:51:39, Laurent Dufour wrote:
>> On 10/09/2020 at 09:23, Michal Hocko wrote:
>>> On Wed 09-09-20 18:07:15, Laurent Dufour wrote:
>>>> On 09/09/2020 at 12:59, Michal Hocko wrote:
>>>>> On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
>>> [...]
>>>>>> For point a, using the enum lets register_mem_sect_under_node() know
>>>>>> whether the link operation is due to a hotplug operation or was done
>>>>>> at boot time.
>>>>>
>>>>> Yes, but let me repeat. We have a mess here and different paths check
>>>>> for the very same condition in different ways. We need to unify those.
>>>>
>>>> What do you suggest to unify these checks (an MP_* enum as suggested
>>>> by David, or something else)?
>>>
>>> We do have system_state checks spread across different places. I would use
>>> this one and wrap it behind a helper. Or have I missed any reason why
>>> that wouldn't work for this case?
>>
>> That would not work in that case because memory can be hot-added at the
>> SYSTEM_SCHEDULING system state, and the regular memory is also registered
>> at that system state. So the system state is not enough to discriminate
>> between the two.
>
> If that is really the case, all other places need a fix as well.
> Btw. could you be more specific about memory hotplug during early boot?
> How does that happen? I am only aware of https://lkml.kernel.org/r/[email protected]
> and that doesn't happen as early as SYSTEM_SCHEDULING.
That point was raised by David; quoting him here:
> IIRC, ACPI can hotadd memory while SCHEDULING, this patch would break that.
>
> Ccing Oscar, I think he mentioned recently that this is the case with ACPI.
Oscar said that he needs to investigate that further.
On my side, I can't get these ACPI "early" hot-plug operations to happen, so
I can't check that.
If it is clear that ACPI memory hotplug doesn't happen at SYSTEM_SCHEDULING,
the patch I proposed at first is enough to fix the issue.