From: Tang Chen <tangchen@cn.fujitsu.com>
To: tglx@linutronix.de, mingo@elte.hu, hpa@zytor.com, akpm@linux-foundation.org,
	tj@kernel.org, trenn@suse.de, yinghai@kernel.org, jiang.liu@huawei.com,
	wency@cn.fujitsu.com, laijs@cn.fujitsu.com, isimatu.yasuaki@jp.fujitsu.com,
	mgorman@suse.de, minchan@kernel.org, mina86@mina86.com,
	gong.chen@linux.intel.com, vasilis.liaskovitis@profitbricks.com,
	lwoodman@redhat.com, riel@redhat.com, jweiner@redhat.com, prarit@redhat.com
Cc: x86@kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [Part2 PATCH v4 12/15] x86, acpi, numa, mem-hotplug: Introduce MEMBLK_HOTPLUGGABLE to mark and reserve hotpluggable memory.
Date: Thu, 13 Jun 2013 21:03:36 +0800
Message-Id: <1371128619-8987-13-git-send-email-tangchen@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.10.1
In-Reply-To: <1371128619-8987-1-git-send-email-tangchen@cn.fujitsu.com>
References: <1371128619-8987-1-git-send-email-tangchen@cn.fujitsu.com>

We mark out the movable memory ranges and reserve them in memblock.reserved
with the MEMBLK_HOTPLUGGABLE flag. This has to be done after the memory
mapping is initialized, because the kernel now supports allocating pagetable
pages on the local node, and those are kernel pages that must not end up in
hotpluggable memory. The reserved hotpluggable memory will be freed to the
buddy allocator once memory initialization is done.

Also, ensure that all the nodes the kernel resides on are kept
non-hotpluggable.

This idea is from Wen Congyang <wency@cn.fujitsu.com> and
Jiang Liu <jiang.liu@huawei.com>.
Suggested-by: Jiang Liu <jiang.liu@huawei.com>
Suggested-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
---
 arch/x86/mm/numa.c       |   29 +++++++++++++++++++++++++++++
 include/linux/memblock.h |    3 +++
 mm/memblock.c            |   18 ++++++++++++++++++
 3 files changed, 50 insertions(+), 0 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 2b5057f..31595c5 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -771,6 +771,33 @@ static void __init early_x86_numa_init_mapping(void)
 }
 #endif
 
+#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+static void __init early_mem_hotplug_init()
+{
+	int i, nid;
+	phys_addr_t start, end;
+
+	if (!movablecore_enable_srat)
+		return;
+
+	for (i = 0; i < numa_meminfo.nr_blks; i++) {
+		nid = numa_meminfo.blk[i].nid;
+		start = numa_meminfo.blk[i].start;
+		end = numa_meminfo.blk[i].end;
+
+		if (!numa_meminfo.blk[i].hotpluggable ||
+		    memblock_is_kernel_node(nid))
+			continue;
+
+		memblock_reserve_hotpluggable(start, end - start, nid);
+	}
+}
+#else	/* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
+static inline void early_mem_hotplug_init()
+{
+}
+#endif	/* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
+
 void __init early_initmem_init(void)
 {
 	early_x86_numa_init();
@@ -790,6 +817,8 @@ void __init early_initmem_init(void)
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 
+	early_mem_hotplug_init();
+
 	early_memtest(0, max_pfn_mapped<<PAGE_SHIFT);
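
For reference, a minimal sketch of how the memblock side of this could look,
assuming memblock_reserve_hotpluggable() simply mirrors memblock_reserve()
and tags the reserved region with the new flag. The flag value and the
memblock_reserve_region() signature below are assumptions for illustration
only, not the actual include/linux/memblock.h and mm/memblock.c hunks of this
patch:

	/*
	 * Hypothetical sketch only: assumes memblock keeps per-region flags
	 * and that a flags-aware reserve helper exists.
	 */
	#define MEMBLK_FLAGS_DEFAULT	0x0
	#define MEMBLK_HOTPLUGGABLE	0x1	/* range may be hot-removed later */

	int __init_memblock memblock_reserve_hotpluggable(phys_addr_t base,
							  phys_addr_t size,
							  int nid)
	{
		/*
		 * Reserve the range as usual, but remember that it is
		 * hotpluggable so it can be freed back to the buddy
		 * allocator once memory initialization is done.
		 */
		return memblock_reserve_region(base, size, nid,
					       MEMBLK_HOTPLUGGABLE);
	}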