Date: Mon, 11 Jun 2018 08:43:03 -0500
From: Bjorn Helgaas
To: Xie XiuQi
Cc: Michal Hocko, Hanjun Guo, tnowicki@caviumnetworks.com,
    linux-pci@vger.kernel.org, Catalin Marinas, "Rafael J. Wysocki",
    Will Deacon, Linux Kernel Mailing List, Jarkko Sakkinen,
    linux-mm@kvack.org, wanghuiqiang@huawei.com, Greg Kroah-Hartman,
    Bjorn Helgaas, Andrew Morton, zhongjiang, linux-arm
Subject: Re: [PATCH 1/2] arm64: avoid alloc memory on offline node
Message-ID: <20180611134303.GC75679@bhelgaas-glaptop.roam.corp.google.com>
References: <1527768879-88161-1-git-send-email-xiexiuqi@huawei.com>
 <1527768879-88161-2-git-send-email-xiexiuqi@huawei.com>
 <20180606154516.GL6631@arm.com>
 <20180607105514.GA13139@dhcp22.suse.cz>
 <5ed798a0-6c9c-086e-e5e8-906f593ca33e@huawei.com>
 <20180607122152.GP32433@dhcp22.suse.cz>
 <20180611085237.GI13364@dhcp22.suse.cz>
 <16c4db2f-bc70-d0f2-fb38-341d9117ff66@huawei.com>
In-Reply-To: <16c4db2f-bc70-d0f2-fb38-341d9117ff66@huawei.com>

On Mon, Jun 11, 2018 at 08:32:10PM +0800, Xie XiuQi wrote:
> Hi Michal,
> 
> On 2018/6/11 16:52, Michal Hocko wrote:
> > On Mon 11-06-18 11:23:18, Xie XiuQi wrote:
> >> Hi Michal,
> >> 
> >> On 2018/6/7 20:21, Michal Hocko wrote:
> >>> On Thu 07-06-18 19:55:53, Hanjun Guo wrote:
> >>>> On 2018/6/7 18:55, Michal Hocko wrote:
> >>> [...]
> >>>>> I am not sure I have the full context, but pci_acpi_scan_root calls
> >>>>> kzalloc_node(sizeof(*info), GFP_KERNEL, node)
> >>>>> and that should fall back to whatever node is online. An offline
> >>>>> node shouldn't keep any pages behind. So there must be something
> >>>>> else going on here, and the patch is not the right way to handle it.
> >>>>> What does faddr2line __alloc_pages_nodemask+0xf0 tell us on this
> >>>>> kernel?
> >>>>
> >>>> The whole context is:
> >>>>
> >>>> The system is booted with a NUMA node that has no memory attached to
> >>>> it (a memory-less NUMA node), and with NR_CPUS less than the number
> >>>> of CPUs presented in the MADT, so the CPUs on this memory-less node
> >>>> are not brought up and the node never comes online (although the
> >>>> SRAT presents it);
> >>>>
> >>>> Devices attached to this NUMA node, such as the PCI host bridge,
> >>>> still return the valid NUMA node via _PXM, but that valid node is
> >>>> not online, which leads to this issue.
> >>>
> >>> But we should have the other NUMA nodes on the zonelists, so the
> >>> allocator should fall back to another node. If the zonelist is not
> >>> initialized properly, though, then this can indeed show up as a
> >>> problem. Knowing which exact place has blown up would help get a
> >>> better picture...
> >>
> >> I specified a non-existent node when allocating memory with
> >> kzalloc_node, and got the following error message.
> >>
> >> I also found that there is just a VM_WARN there; it does not prevent
> >> the allocation from continuing.
> >>
> >> The nid is then used to access NODE_DATA(nid), so if the nid is
> >> invalid, it causes an oops there.
> >>
> >> /*
> >>  * Allocate pages, preferring the node given as nid. The node must be valid and
> >>  * online. For more general interface, see alloc_pages_node().
> >>  */
> >> static inline struct page *
> >> __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
> >> {
> >> 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
> >> 	VM_WARN_ON(!node_online(nid));
> >>
> >> 	return __alloc_pages(gfp_mask, order, nid);
> >> }
> >>
> >> (I wrote a test module (.ko) that allocates memory on a non-existent
> >> node using kzalloc_node().)
> >
> > OK, so this is artificially broken code, right? You shouldn't get a
> > non-existent node via the standard APIs, AFAICS. The original report
> > was about an existing node which is offline, AFAIU. That would be a
> > different case. If I am missing something and there are legitimate
> > users that try to allocate from non-existent nodes, then we should
> > handle that in node_zonelist.
> 
> I think Hanjun's comments may help to understand this question:
> 
> - A NUMA node is built if the CPUs and (or) memory on that node are
>   valid;
> 
> - But if we boot the system with a memory-less node and with
>   CONFIG_NR_CPUS less than the number of CPUs in the SRAT -- for
>   example, 64 CPUs total across 4 NUMA nodes, 16 CPUs per node, booted
>   with CONFIG_NR_CPUS=48 -- then no NUMA node is built for node 3. With
>   devices on that node, allocating memory panics because node 3 is not
>   a valid node.
> 
> I triggered this BUG on an arm64 platform, and I found that a similar
> bug had been fixed on x86, so I sent a similar patch for this bug.
> 
> Or could we consider fixing it in the mm subsystem?

The patch below (b755de8dfdfe) seems like totally the wrong direction.
I don't think we want every caller of kzalloc_node() to have to check
for node_online().

Why would memory on an offline node even be in the allocation pool? I
wouldn't expect that memory to be put in the pool until the node comes
online and the memory is accessible, so this sounds like some kind of
setup issue. But I'm definitely not an mm person.
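For illustration only: if this were handled centrally on the mm side,
along the lines of Michal's node_zonelist suggestion above, the change
might take roughly the shape below. This is an untested sketch against
the __alloc_pages_node() helper quoted earlier, not a patch from this
thread; first_online_node is the existing helper from
<linux/nodemask.h>.

	static inline struct page *
	__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
	{
		VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

		/* Untested sketch: instead of only warning and letting
		 * the caller oops on NODE_DATA(nid), fall back to a
		 * node that is known to be online. */
		if (!node_online(nid))
			nid = first_online_node;

		return __alloc_pages(gfp_mask, order, nid);
	}

That would keep the node_online() policy in one place rather than in
every kzalloc_node() caller, which is the objection raised above.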
> From b755de8dfdfef97effaa91379ffafcb81f4d62a1 Mon Sep 17 00:00:00 2001
> From: Yinghai Lu
> Date: Wed, 20 Feb 2008 12:41:52 -0800
> Subject: [PATCH] x86: make dev_to_node return online node
> 
> A NUMA system (with multiple HT chains) may return a node that has no
> RAM, i.e. one that is not online. Try to get an online node; otherwise
> return -1.
> 
> Signed-off-by: Yinghai Lu
> Signed-off-by: Ingo Molnar
> Signed-off-by: Thomas Gleixner
> ---
>  arch/x86/pci/acpi.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
> index d95de2f..ea8685f 100644
> --- a/arch/x86/pci/acpi.c
> +++ b/arch/x86/pci/acpi.c
> @@ -172,6 +172,9 @@ struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_device *device, int do
>  		set_mp_bus_to_node(busnum, node);
>  	else
>  		node = get_mp_bus_to_node(busnum);
> +
> +	if (node != -1 && !node_online(node))
> +		node = -1;
>  #endif
> 
>  	/* Allocate per-root-bus (not per bus) arch-specific data.
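For comparison, an arm64 analogue of the quoted x86 guard would
presumably live in pci_acpi_scan_root() in arch/arm64/kernel/pci.c.
The sketch below is untested and is not the patch actually posted in
this series; it assumes the mainline shape of that function
(acpi_get_node() plus a per-root acpi_pci_generic_root_info
allocation) and uses NUMA_NO_NODE as the modern spelling of -1.

	struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
	{
		struct acpi_pci_generic_root_info *ri;
		int node = acpi_get_node(root->device->handle);

		/* Guard under discussion: treat a node that the SRAT
		 * describes but that was never brought online as "no
		 * node", so kzalloc_node() falls back to a valid one. */
		if (node != NUMA_NO_NODE && !node_online(node))
			node = NUMA_NO_NODE;

		ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
		if (!ri)
			return NULL;

		/* ... remainder elided; bus scan continues as in
		 * mainline ... */
	}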