Date: Mon, 10 Dec 2018 13:37:38 +0100
From: Michal Hocko
To: Pingfan Liu
Cc: Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Mike Rapoport, Bjorn Helgaas, Jonathan Cameron
Subject: Re: [PATCH] mm/alloc: fallback to first node if the wanted node offline
Message-ID: <20181210123738.GN1286@dhcp22.suse.cz>
References: <20181206121152.GH1286@dhcp22.suse.cz> <20181207075322.GS1286@dhcp22.suse.cz> <20181207113044.GB1286@dhcp22.suse.cz> <20181207142240.GC1286@dhcp22.suse.cz> <20181207155627.GG1286@dhcp22.suse.cz>
In-Reply-To: <20181207155627.GG1286@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri 07-12-18 16:56:27, Michal Hocko
wrote:
> On Fri 07-12-18 22:27:13, Pingfan Liu wrote:
> [...]
> > diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> > index 1308f54..4dc497d 100644
> > --- a/arch/x86/mm/numa.c
> > +++ b/arch/x86/mm/numa.c
> > @@ -754,18 +754,23 @@ void __init init_cpu_to_node(void)
> >  {
> >  	int cpu;
> >  	u16 *cpu_to_apicid = early_per_cpu_ptr(x86_cpu_to_apicid);
> > +	int node, nr;
> >  
> >  	BUG_ON(cpu_to_apicid == NULL);
> > +	nr = cpumask_weight(cpu_possible_mask);
> > +
> > +	/* bring up all possible node, since dev->numa_node */
> > +	//should check acpi works for node possible,
> > +	for_each_node(node)
> > +		if (!node_online(node))
> > +			init_memory_less_node(node);
>
> I suspect there is no change if you replace for_each_node by
> for_each_node_mask(nid, node_possible_map)
> here. If that is the case then we are probably calling
> free_area_init_node too early. I do not see it yet though.

OK, so it is not about calling it too early or too late. It is just that
node_possible_map is a misnomer with a different semantic than I
expected: numa_nodemask_from_meminfo considers only nodes with some
memory. So my patch didn't really make any difference and the node
stayed uninitialized.

In other words, does the following work? I am sorry to wild-guess this
way, but I am not able to recreate your setup to play with this myself.
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 1308f5408bf7..d51643e10d00 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -216,8 +216,6 @@ static void __init alloc_node_data(int nid)
 
 	node_data[nid] = nd;
 	memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
-
-	node_set_online(nid);
 }
 
 /**
@@ -527,6 +525,19 @@ static void __init numa_clear_kernel_node_hotplug(void)
 	}
 }
 
+static void __init init_memory_less_node(int nid)
+{
+	unsigned long zones_size[MAX_NR_ZONES] = {0};
+	unsigned long zholes_size[MAX_NR_ZONES] = {0};
+
+	free_area_init_node(nid, zones_size, 0, zholes_size);
+
+	/*
+	 * All zonelists will be built later in start_kernel() after per cpu
+	 * areas are initialized.
+	 */
+}
+
 static int __init numa_register_memblks(struct numa_meminfo *mi)
 {
 	unsigned long uninitialized_var(pfn_align);
@@ -570,7 +581,7 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 		return -EINVAL;
 
 	/* Finally register nodes. */
-	for_each_node_mask(nid, node_possible_map) {
+	for_each_node(nid) {
 		u64 start = PFN_PHYS(max_pfn);
 		u64 end = 0;
 
@@ -592,6 +603,10 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 			continue;
 
 		alloc_node_data(nid);
+		if (!end)
+			init_memory_less_node(nid);
+		else
+			node_set_online(nid);
 	}
 
 	/* Dump memblock with node info and return. */
@@ -721,21 +736,6 @@ void __init x86_numa_init(void)
 	numa_init(dummy_numa_init);
 }
 
-static void __init init_memory_less_node(int nid)
-{
-	unsigned long zones_size[MAX_NR_ZONES] = {0};
-	unsigned long zholes_size[MAX_NR_ZONES] = {0};
-
-	/* Allocate and initialize node data. Memory-less node is now online.*/
-	alloc_node_data(nid);
-	free_area_init_node(nid, zones_size, 0, zholes_size);
-
-	/*
-	 * All zonelists will be built later in start_kernel() after per cpu
-	 * areas are initialized.
-	 */
-}
-
 /*
  * Setup early cpu_to_node.
 *
@@ -763,9 +763,6 @@ void __init init_cpu_to_node(void)
 		if (node == NUMA_NO_NODE)
 			continue;
 
-		if (!node_online(node))
-			init_memory_less_node(node);
-
 		numa_set_node(cpu, node);
 	}
 }
-- 
Michal Hocko
SUSE Labs