Message-ID: <55766068.9090809@cn.fujitsu.com>
Date: Tue, 9 Jun 2015 11:41:28 +0800
From: Zhu Guihua
To: Andrew Morton
Subject: Re: [PATCH] mm/memory hotplug: print the last vmemmap region at the end of hot add memory
References: <1433745881-7179-1-git-send-email-zhugh.fnst@cn.fujitsu.com>
 <20150608163053.c481d9a5057513130f760910@linux-foundation.org>
In-Reply-To: <20150608163053.c481d9a5057513130f760910@linux-foundation.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/09/2015 07:30 AM, Andrew Morton wrote:
> On Mon, 8 Jun 2015 14:44:41 +0800 Zhu Guihua wrote:
>
>> When two nodes are hot-added in succession, we found that the vmemmap
>> region info gets a bit messed up: the last region of node 2 is printed
>> only when node 3 is hot-added, like the following:
>>
>>   Initmem setup node 2 [mem 0x0000000000000000-0xffffffffffffffff]
>>   On node 2 totalpages: 0
>>   Built 2 zonelists in Node order, mobility grouping on.  Total pages: 16090539
>>   Policy zone: Normal
>>   init_memory_mapping: [mem 0x40000000000-0x407ffffffff]
>>    [mem 0x40000000000-0x407ffffffff] page 1G
>>   [ffffea1000000000-ffffea10001fffff] PMD -> [ffff8a077d800000-ffff8a077d9fffff] on node 2
>>   [ffffea1000200000-ffffea10003fffff] PMD -> [ffff8a077de00000-ffff8a077dffffff] on node 2
>>   ...
>>   [ffffea101f600000-ffffea101f9fffff] PMD -> [ffff8a074ac00000-ffff8a074affffff] on node 2
>>   [ffffea101fa00000-ffffea101fdfffff] PMD -> [ffff8a074a800000-ffff8a074abfffff] on node 2
>>   Initmem setup node 3 [mem 0x0000000000000000-0xffffffffffffffff]
>>   On node 3 totalpages: 0
>>   Built 3 zonelists in Node order, mobility grouping on.  Total pages: 16090539
>>   Policy zone: Normal
>>   init_memory_mapping: [mem 0x60000000000-0x607ffffffff]
>>    [mem 0x60000000000-0x607ffffffff] page 1G
>>   [ffffea101fe00000-ffffea101fffffff] PMD -> [ffff8a074a400000-ffff8a074a5fffff] on node 2 <=== node 2 ???
>>   [ffffea1800000000-ffffea18001fffff] PMD -> [ffff8a074a600000-ffff8a074a7fffff] on node 3
>>   [ffffea1800200000-ffffea18005fffff] PMD -> [ffff8a074a000000-ffff8a074a3fffff] on node 3
>>   [ffffea1800600000-ffffea18009fffff] PMD -> [ffff8a0749c00000-ffff8a0749ffffff] on node 3
>>   ...
>>
>> The cause is that the last region was not printed at the end of hot
>> add, and p_start, p_end and node_start were not reset. So when memory
>> is hot-added to a new node, the code treats the blocks as
>> non-contiguous and only then prints the stale region from the previous
>> node. Print the last vmemmap region at the end of hot add memory to
>> avoid the confusion.
>>
>> ...
>>
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -513,6 +513,7 @@ int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
>>  			break;
>>  		err = 0;
>>  	}
>> +	vmemmap_populate_print_last();
>>
>>  	return err;
>>  }
>
> vmemmap_populate_print_last() is only available on x86_64, when
> CONFIG_SPARSEMEM_VMEMMAP=y.  Are you sure this won't break builds?

I tried this on i386, and on x86_64 with CONFIG_SPARSEMEM_VMEMMAP=n;
both build OK.

Thanks,
Zhu