Date: Thu, 6 Dec 2018 09:28:06 +0100
From: Michal Hocko
To: Pingfan Liu
Cc: Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Mike Rapoport, Bjorn Helgaas, Jonathan Cameron
Subject: Re: [PATCH] mm/alloc: fallback to first node if the wanted node offline
Message-ID: <20181206082806.GB1286@dhcp22.suse.cz>
References: <1543892757-4323-1-git-send-email-kernelfans@gmail.com>
 <20181204072251.GT31738@dhcp22.suse.cz>
 <20181204085601.GC1286@dhcp22.suse.cz>
 <20181205092148.GA1286@dhcp22.suse.cz>
 <186b1804-3b1e-340e-f73b-f3c7e69649f5@suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu 06-12-18 11:07:33, Pingfan Liu wrote:
> On Wed, Dec 5, 2018 at 5:40 PM Vlastimil Babka wrote:
> >
> > On 12/5/18 10:29 AM, Pingfan Liu wrote:
> > >> [    0.007418] Early memory node ranges
> > >> [    0.007419]   node 1: [mem 0x0000000000001000-0x000000000008efff]
> > >> [    0.007420]   node 1: [mem 0x0000000000090000-0x000000000009ffff]
> > >> [    0.007422]   node 1: [mem 0x0000000000100000-0x000000005c3d6fff]
> > >> [    0.007422]   node 1: [mem 0x00000000643df000-0x0000000068ff7fff]
> > >> [    0.007423]   node 1: [mem 0x000000006c528000-0x000000006fffffff]
> > >> [    0.007424]   node 1: [mem 0x0000000100000000-0x000000047fffffff]
> > >> [    0.007425]   node 5: [mem 0x0000000480000000-0x000000087effffff]
> > >>
> > >> There is clearly no node2. Where did the driver get the node2 from?
> >
> > I don't understand these tables too much, but it seems the other nodes
> > exist without them:
> >
> > [    0.007393] SRAT: PXM 2 -> APIC 0x20 -> Node 2
> >
> > Maybe the nodes are hotpluggable or something?
> >
> I am also not sure about it; I just had a quick look at the ACPI spec. I
> will reply on another email and Cc some ACPI folks about it.
>
> > > Since using nr_cpus=4, node2 is not instanced by the x86 initializing code.
> >
> > Indeed, nr_cpus seems to restrict what nodes we allocate and populate
> > zonelists for.
>
> Yes: in init_cpu_to_node(), nr_cpus limits the possible cpus, which
> affects the loop for_each_possible_cpu(cpu) and skips node2 in this
> case.

Thanks for pointing this out. It made my life easier. So I think the bug
is that we call init_memory_less_node from this path. I suspect
numa_register_memblks is the right place to do this. I admit I am not
100% sure, but could you give this a try please?
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 1308f5408bf7..4575ae4d5449 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -527,6 +527,19 @@ static void __init numa_clear_kernel_node_hotplug(void)
 	}
 }
 
+static void __init init_memory_less_node(int nid)
+{
+	unsigned long zones_size[MAX_NR_ZONES] = {0};
+	unsigned long zholes_size[MAX_NR_ZONES] = {0};
+
+	free_area_init_node(nid, zones_size, 0, zholes_size);
+
+	/*
+	 * All zonelists will be built later in start_kernel() after per cpu
+	 * areas are initialized.
+	 */
+}
+
 static int __init numa_register_memblks(struct numa_meminfo *mi)
 {
 	unsigned long uninitialized_var(pfn_align);
@@ -592,6 +605,8 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 			continue;
 
 		alloc_node_data(nid);
+		if (!end)
+			init_memory_less_node(nid);
 	}
 
 	/* Dump memblock with node info and return. */
@@ -721,21 +736,6 @@ void __init x86_numa_init(void)
 	numa_init(dummy_numa_init);
 }
 
-static void __init init_memory_less_node(int nid)
-{
-	unsigned long zones_size[MAX_NR_ZONES] = {0};
-	unsigned long zholes_size[MAX_NR_ZONES] = {0};
-
-	/* Allocate and initialize node data. Memory-less node is now online.*/
-	alloc_node_data(nid);
-	free_area_init_node(nid, zones_size, 0, zholes_size);
-
-	/*
-	 * All zonelists will be built later in start_kernel() after per cpu
-	 * areas are initialized.
-	 */
-}
-
 /*
  * Setup early cpu_to_node.
  *
@@ -763,9 +763,6 @@ void __init init_cpu_to_node(void)
 		if (node == NUMA_NO_NODE)
 			continue;
 
-		if (!node_online(node))
-			init_memory_less_node(node);
-
 		numa_set_node(cpu, node);
 	}
 }
-- 
Michal Hocko
SUSE Labs