Date: Tue, 4 Dec 2018 09:56:01 +0100
From: Michal Hocko
To: Pingfan Liu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Vlastimil Babka, Mike Rapoport, Bjorn Helgaas, Jonathan Cameron
Subject: Re: [PATCH] mm/alloc: fallback to first node if the wanted node offline
Message-ID: <20181204085601.GC1286@dhcp22.suse.cz>
References: <1543892757-4323-1-git-send-email-kernelfans@gmail.com> <20181204072251.GT31738@dhcp22.suse.cz>
On Tue 04-12-18 16:20:32, Pingfan Liu wrote:
> On Tue, Dec 4, 2018 at 3:22 PM Michal Hocko wrote:
> >
> > On Tue 04-12-18 11:05:57, Pingfan Liu wrote:
> > > During my test on an AMD machine, with the kexec -l nr_cpus=x option,
> > > the kernel failed to boot up, because some node's data struct cannot
> > > be allocated, e.g. on x86 it is initialized by
> > > init_cpu_to_node()->init_memory_less_node(). But device->numa_node
> > > info is used as the preferred_nid param for __alloc_pages_nodemask(),
> > > which causes a NULL reference:
> > >   ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
> > > This patch tries to fix the issue by falling back to the first online
> > > node when encountering such a corner case.
> >
> > We have seen similar issues already and the bug was usually that the
> > zonelists were not initialized yet or the node is completely bogus.
> > Zonelists should be initialized by build_all_zonelists quite early, so
> > I am wondering whether the latter is the case. What is the actual node
> > number the device is associated with?
> >
> The device's node number is 2. And in my case, I used the nr_cpus param.
> Since init_cpu_to_node() initializes all the possible nodes, it is hard
> for me to figure out how, without this param, zonelists could be
> accessed before the page allocator works.

I believe we should focus on this. Why does the node have no zonelist
even though all zonelists should be initialized already? Maybe this is an
nr_cpus peculiarity and we do not initialize all the existing NUMA nodes.
Or maybe the device is associated with a non-existent node in that setup.
A full dmesg might help us here.

> > Your patch is not correct btw, because we want to fall back to nodes
> > in distance order rather than to the first online node.
>
> What about this:
> +extern int find_next_best_node(int node, nodemask_t *used_node_mask);
> +
>  /*
>   * We get the zone list from the current node and the gfp_mask.
>   * This zone list contains a maximum of MAXNODES*MAX_NR_ZONES zones.
> @@ -453,6 +455,11 @@ static inline int gfp_zonelist(gfp_t flags)
>   */
>  static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
>  {
> +	if (unlikely(!node_online(nid))) {
> +		nodemask_t used_mask;
> +		nodes_complement(used_mask, node_online_map);
> +		nid = find_next_best_node(nid, &used_mask);
> +	}
>  	return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
>  }
>
> I just finished compiling it; I have not tested it yet, since the
> machine is not on hand. It will take some time to get it again.

This is clearly a no-go. nodemask_t can be giant and you cannot put it on
the stack in allocation paths, which might already be running on a deep
stack. Also, this is called from the allocator hot paths, and each branch
counts.

-- 
Michal Hocko
SUSE Labs
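
For readers following along, here is a minimal illustrative sketch of the
constraints described above, assuming a 4.19-era kernel. This is not code
from the thread or from any merged fix: dev_sanitize_nid is a hypothetical
helper name, and the stack-size figure assumes CONFIG_NODES_SHIFT=10.

#include <linux/nodemask.h>
#include <linux/numa.h>

/*
 * Why the on-stack nodemask_t is a problem: nodemask.h defines
 *
 *	typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
 *
 * so with CONFIG_NODES_SHIFT=10 (common on distro kernels) MAX_NUMNODES
 * is 1024 and a local nodemask_t costs 128 bytes of stack -- inside a
 * helper that is inlined into every page-allocation call chain, which
 * may already be running deep in reclaim or filesystem code.
 *
 * A hypothetical alternative that respects both objections: validate the
 * node id once, at the point where a device's numa_node is consumed, and
 * keep node_zonelist() itself branch-free.
 */
static inline int dev_sanitize_nid(int nid)
{
	if (likely(nid == NUMA_NO_NODE || node_online(nid)))
		return nid;

	/*
	 * first_online_node needs no on-stack mask; as noted in the
	 * review, a real fallback should instead walk nodes in distance
	 * order (what find_next_best_node() does for zonelist building).
	 */
	return first_online_node;
}

This sketch only illustrates the stack and hot-path constraints any fix
must respect; it leaves open the question raised above of why the node
lacks an initialized zonelist under nr_cpus in the first place.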