Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753827AbbDBTPG (ORCPT ); Thu, 2 Apr 2015 15:15:06 -0400
Received: from mail-qc0-f170.google.com ([209.85.216.170]:33147 "EHLO
	mail-qc0-f170.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753025AbbDBTPD (ORCPT );
	Thu, 2 Apr 2015 15:15:03 -0400
Message-ID: <551d9535.87628c0a.5324.7358@mx.google.com>
Date: Thu, 02 Apr 2015 12:15:01 -0700 (PDT)
From: Yasuaki Ishimatsu
To: Dave Young
Cc: Xishi Qiu, x86@kernel.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, bhe@redhat.com, mingo@redhat.com,
	hpa@zytor.com, akpm@linux-foundation.org
Subject: Re: [PATCH] x86/numa: kernel stack corruption fix
In-Reply-To: <20150401074120.GF8680@dhcp-128-53.nay.redhat.com>
References: <20150401045346.GA3461@dhcp-16-198.nay.redhat.com>
	<20150401051133.GC8680@dhcp-128-53.nay.redhat.com>
	<551B9DD7.5010603@huawei.com>
	<20150401074120.GF8680@dhcp-128-53.nay.redhat.com>
X-Mailer: Sylpheed 3.4.2 (GTK+ 2.10.14; i686-pc-mingw32)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 9469
Lines: 169

On Wed, 1 Apr 2015 15:41:20 +0800 Dave Young wrote:

> On 04/01/15 at 03:27pm, Xishi Qiu wrote:
> > On 2015/4/1 13:11, Dave Young wrote:
> > 
> > > Ccing Xishi Qiu who wrote the clear_kernel_node_hotplug code.
> > > 
> > > On 04/01/15 at 12:53pm, Dave Young wrote:
> > >> I got the below kernel panic during a kdump test on a Thinkpad T420 laptop:
> > >> 
> > >> [    0.000000] No NUMA configuration found
> > >> [    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000037ba4fff]
> > >> [    0.000000] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81d21910
> > >> [    0.000000] 
> > >> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.0.0-rc6+ #44
> > >> [    0.000000] Hardware name: LENOVO 4236NUC/4236NUC, BIOS 83ET76WW (1.46 ) 07/05/2013
> > >> [    0.000000]  0000000000000000 c70296ddd809e4f6 ffffffff81b67ce8 ffffffff817c2a26
> > >> [    0.000000]  0000000000000000 ffffffff81a61c90 ffffffff81b67d68 ffffffff817b8d2c
> > >> [    0.000000]  0000000000000010 ffffffff81b67d78 ffffffff81b67d18 c70296ddd809e4f6
> > >> [    0.000000] Call Trace:
> > >> [    0.000000]  [] dump_stack+0x45/0x57
> > >> [    0.000000]  [] panic+0xd0/0x204
> > >> [    0.000000]  [] ? numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [    0.000000]  [] __stack_chk_fail+0x1b/0x20
> > >> [    0.000000]  [] numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [    0.000000]  [] numa_init+0x1a5/0x520
> > >> [    0.000000]  [] x86_numa_init+0x19/0x3d
> > >> [    0.000000]  [] initmem_init+0x9/0xb
> > >> [    0.000000]  [] setup_arch+0x94f/0xc82
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] ? printk+0x55/0x6b
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] start_kernel+0xe8/0x4d6
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] x86_64_start_reservations+0x2a/0x2c
> > >> [    0.000000]  [] x86_64_start_kernel+0x161/0x184
> > >> [    0.000000] ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81d21910
> > >> [    0.000000] 
> > >> PANIC: early exception 0d rip 10:ffffffff8105d2a6 error 7eb cr2 ffff8800371dd00
> > >> [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.0.0-rc6+ #44
> > >> [    0.000000] Hardware name: LENOVO 4236NUC/4236NUC, BIOS 83ET76WW (1.46 ) 07/05/2013
> > >> [    0.000000]  0000000000000000 c70296ddd809e4f6 ffffffff81b67c60 ffffffff817c2a26
> > >> [    0.000000]  0000000000000096 ffffffff81a61c90 ffffffff81b67d68 fffffff000000084
> > >> [    0.000000]  0000000000000a0d 0000000000000a00
> > >> [    0.000000] Call Trace:
> > >> [    0.000000]  [] dump_stack+0x45/0x57
> > >> [    0.000000]  [] early_idt_handler+0x90/0xb7
> > >> [    0.000000]  [] ? native_irq_enable+0x6/0x10
> > >> [    0.000000]  [] ? panic+0x1c3/0x204
> > >> [    0.000000]  [] ? numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [    0.000000]  [] __stack_chk_fail+0x1b/0x20
> > >> [    0.000000]  [] numa_clear_kernel_node_hotplug+0xe6/0xf2
> > >> [    0.000000]  [] numa_init+0x1a5/0x520
> > >> [    0.000000]  [] x86_numa_init+0x19/0x3d
> > >> [    0.000000]  [] initmem_init+0x9/0xb
> > >> [    0.000000]  [] setup_arch+0x94f/0xc82
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] ? printk+0x55/0x6b
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] start_kernel+0xe8/0x4d6
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] ? early_idt_handlers+0x120/0x120
> > >> [    0.000000]  [] x86_64_start_reservations+0x2a/0x2c
> > >> [    0.000000]  [] x86_64_start_kernel+0x161/0x184
> > >> [    0.000000] RIP 0x46
> > >> 
> > >> This is caused by writing over the end of the numa mask bitmap.
> > >> 
> > >> numa_clear_kernel_node_hotplug() tries to set node ids in a mask bitmap;
> > >> it iterates over all reserved regions and assumes every region has a
> > >> valid nid. That is not true, because there is an exception for the
> > >> graphics memory quirks; see the function trim_snb_memory() in
> > >> arch/x86/kernel/setup.c.
> > >> 
> > >> The bug is easy to reproduce in a kdump kernel because the kdump kernel
> > >> uses pre-reserved memory instead of the whole memory, but kexec passes
> > >> the other reserved memory ranges to the 2nd kernel as well, like below
> > >> in my test:
> > >> kdump kernel ram: 0x2d000000 - 0x37bfffff
> > >> One of the reserved regions: 0x40000000 - 0x40100000
> > >> 
> > >> The above reserved region includes 0x40004000, a page excluded in
> > >> trim_snb_memory(). For this memblock reserved region the nid is never
> > >> set; it is still the default value MAX_NUMNODES. Later the node_set()
> > >> callback sets bit MAX_NUMNODES in the nodemask bitmap, and thus the
> > >> stack corruption happens.
> > >> 
> > 
> > Hi Dave,
> > 
> > Does this mean: the region 0x40000000 - 0x40100000 is reserved first,
> > then the kdump kernel boots, so this region is not included in
> > "numa_meminfo", and the nid of the memblock.reserved range (0x40004000)
> > is still MAX_NUMNODES from trim_snb_memory()?
> 
> Right. BTW, I booted the kdump kernel with numa=off to save memory.
> 
> I suspect it can also be reproduced with mem=XYZ on a normal kernel.

Does the issue occur on your system with mem=0x40000000?

I think the issue occurs when a reserved memory range is not included in
the system RAM reported by the e820 map or the SRAT table. On your system,
0x40004000 is reserved by trim_snb_memory(). But if you use mem=0x40000000,
the system RAM is limited to within 0x40000000. So the issue will occur.

Thanks,
Yasuaki Ishimatsu

> > 
> > numa_clear_kernel_node_hotplug()
> > {
> > ...
> > 	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > 		struct numa_memblk *mb = &numa_meminfo.blk[i];
> > 
> > 		memblock_set_node(mb->start, mb->end - mb->start,
> > 				  &memblock.reserved, mb->nid);  // this will not reset 0x40004000's node, right?
> > 	}
> > ...
> > }
> > 
> > Thanks
> > Xishi Qiu
> > 
> > >> Fix this by adding a check: do not call node_set() in case nid is MAX_NUMNODES.
> > >> 
> > >> Signed-off-by: Dave Young
> > >> ---
> > >>  arch/x86/mm/numa.c | 3 ++-
> > >>  1 file changed, 2 insertions(+), 1 deletion(-)
> > >> 
> > >> --- linux.orig/arch/x86/mm/numa.c
> > >> +++ linux/arch/x86/mm/numa.c
> > >> @@ -484,7 +484,8 @@ static void __init numa_clear_kernel_nod
> > >>  
> > >>  	/* Mark all kernel nodes. */
> > >>  	for_each_memblock(reserved, r)
> > >> -		node_set(r->nid, numa_kernel_nodes);
> > >> +		if (r->nid != MAX_NUMNODES)
> > >> +			node_set(r->nid, numa_kernel_nodes);
> > >>  
> > >>  	/* Clear MEMBLOCK_HOTPLUG flag for memory in kernel nodes. */
> > >>  	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > >> 
> > > 
> > > .
> > > 
> > 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/