Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752544AbbFHXbQ (ORCPT ); Mon, 8 Jun 2015 19:31:16 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:59379
	"EHLO mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S932241AbbFHXaz (ORCPT );
	Mon, 8 Jun 2015 19:30:55 -0400
Date: Mon, 8 Jun 2015 16:30:53 -0700
From: Andrew Morton
To: Zhu Guihua
Cc: , , , , , , ,
Subject: Re: [PATCH] mm/memory hotplug: print the last vmemmap region at the end of hot add memory
Message-Id: <20150608163053.c481d9a5057513130f760910@linux-foundation.org>
In-Reply-To: <1433745881-7179-1-git-send-email-zhugh.fnst@cn.fujitsu.com>
References: <1433745881-7179-1-git-send-email-zhugh.fnst@cn.fujitsu.com>
X-Mailer: Sylpheed 3.4.1 (GTK+ 2.24.23; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 8 Jun 2015 14:44:41 +0800 Zhu Guihua wrote:

> When hot-adding two nodes in succession, we found the vmemmap region
> info is a bit messed up. The last region of node 2 is printed when node
> 3 is hot-added, like the following:
>
> Initmem setup node 2 [mem 0x0000000000000000-0xffffffffffffffff]
> On node 2 totalpages: 0
> Built 2 zonelists in Node order, mobility grouping on. Total pages: 16090539
> Policy zone: Normal
> init_memory_mapping: [mem 0x40000000000-0x407ffffffff]
>  [mem 0x40000000000-0x407ffffffff] page 1G
> [ffffea1000000000-ffffea10001fffff] PMD -> [ffff8a077d800000-ffff8a077d9fffff] on node 2
> [ffffea1000200000-ffffea10003fffff] PMD -> [ffff8a077de00000-ffff8a077dffffff] on node 2
> ...
> [ffffea101f600000-ffffea101f9fffff] PMD -> [ffff8a074ac00000-ffff8a074affffff] on node 2
> [ffffea101fa00000-ffffea101fdfffff] PMD -> [ffff8a074a800000-ffff8a074abfffff] on node 2
> Initmem setup node 3 [mem 0x0000000000000000-0xffffffffffffffff]
> On node 3 totalpages: 0
> Built 3 zonelists in Node order, mobility grouping on. Total pages: 16090539
> Policy zone: Normal
> init_memory_mapping: [mem 0x60000000000-0x607ffffffff]
>  [mem 0x60000000000-0x607ffffffff] page 1G
> [ffffea101fe00000-ffffea101fffffff] PMD -> [ffff8a074a400000-ffff8a074a5fffff] on node 2 <=== node 2 ???
> [ffffea1800000000-ffffea18001fffff] PMD -> [ffff8a074a600000-ffff8a074a7fffff] on node 3
> [ffffea1800200000-ffffea18005fffff] PMD -> [ffff8a074a000000-ffff8a074a3fffff] on node 3
> [ffffea1800600000-ffffea18009fffff] PMD -> [ffff8a0749c00000-ffff8a0749ffffff] on node 3
> ...
>
> The cause is that the last region was not printed at the end of hot add
> memory, and p_start, p_end and node_start were not reset. So when memory
> is later hot-added to a new node, the new blocks are seen as
> non-contiguous with the pending region, and the leftover region from the
> previous node gets printed then. Print the last vmemmap region at the
> end of hot add memory to avoid the confusion.
>
> ...
>
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -513,6 +513,7 @@ int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
>  			break;
>  		err = 0;
>  	}
> +	vmemmap_populate_print_last();
>
>  	return err;
>  }

vmemmap_populate_print_last() is only available on x86_64, when
CONFIG_SPARSEMEM_VMEMMAP=y.  Are you sure this won't break builds?