Date: Thu, 24 May 2018 10:00:11 +0200
From: Michal Hocko
To: Anshuman Khandual
Cc: Andrew Morton, Oscar Salvador, Vlastimil Babka, Pavel Tatashin,
	Reza Arbab, Igor Mammedov, Vitaly Kuznetsov, LKML, linux-mm@kvack.org
Subject: Re: [PATCH 2/2] mm: do not warn on offline nodes unless the specific node is explicitly requested
Message-ID: <20180524080011.GV20441@dhcp22.suse.cz>
References: <20180523125555.30039-1-mhocko@kernel.org>
	<20180523125555.30039-3-mhocko@kernel.org>
	<11e26a4e-552e-b1dc-316e-ce3e92973556@linux.vnet.ibm.com>
	<20180523140601.GQ20441@dhcp22.suse.cz>
	<094afec3-5682-f99d-81bb-230319c78d5d@linux.vnet.ibm.com>
In-Reply-To: <094afec3-5682-f99d-81bb-230319c78d5d@linux.vnet.ibm.com>

On Thu 24-05-18 08:52:14, Anshuman Khandual wrote:
> On 05/23/2018 07:36 PM, Michal Hocko wrote:
> > On Wed 23-05-18 19:15:51, Anshuman Khandual wrote:
> >> On 05/23/2018 06:25 PM, Michal Hocko wrote:
> >>> when adding memory to a node that is currently offline.
> >>>
> >>> The VM_WARN_ON is just too loud without a good reason. In this
> >>> particular case we are doing
> >>> 	alloc_pages_node(node, GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN, order)
> >>> so we do not insist on allocating from the given node (it is more of
> >>> a hint), we can fall back to any other populated node, and moreover
> >>> we explicitly ask not to warn about the allocation failure.
> >>>
> >>> Soften the warning so that it fires only when somebody asks for the
> >>> given node explicitly via __GFP_THISNODE.
> >>
> >> The node hint passed here eventually goes into __alloc_pages_nodemask(),
> >> which then picks up the applicable zonelist irrespective of the GFP
> >> flag __GFP_THISNODE.
> >
> > __GFP_THISNODE should enforce the given node without any fallbacks,
> > unless something has changed recently.
> 
> Right. I was just saying that it still makes sense to require the
> preferred node, whose zonelist (and hence the allocation zone fallback
> order) gets picked up during allocation, to be online, and to warn
> when it is not.

Why? We have a fallback and that is expected to be used. How does an
offline node differ from a depleted node from a semantic point of view?

> We should only hide the warning if the allocation request has
> __GFP_NOWARN.
> 
> >> Though we can go into zones of other nodes if the present node (whose
> >> zonelist got picked up) does not have any memory in its zones, so the
> >> warning here might not be without reason.
> >
> > I am not sure I follow. Are you suggesting a different VM_WARN_ON?
> 
> I am just suggesting this instead:
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 036846fc00a6..7f860ea29ec6 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -464,7 +464,7 @@ static inline struct page *
>  __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
>  {
>  	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
> -	VM_WARN_ON(!node_online(nid));
> +	VM_WARN_ON(!(gfp_mask & __GFP_NOWARN) && !node_online(nid));
> 
>  	return __alloc_pages(gfp_mask, order, nid);
>  }

I have considered that, but I fail to see why we should warn about
regular GFP_KERNEL allocations, as mentioned above. Just consider an
allocation for the preferred node: do you want to warn just because
that node went offline?
-- 
Michal Hocko
SUSE Labs
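
[To make the two positions in this thread concrete, here is a minimal
userspace sketch contrasting the two warning conditions discussed
above. The flag values, the node_online() stub, and the check_*()
helper names are invented for illustration only; they are not the
kernel's definitions.]

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;

/* Stub flag values, chosen arbitrarily for this model. */
#define __GFP_THISNODE	(1u << 0)	/* placement must not fall back */
#define __GFP_NOWARN	(1u << 1)	/* suppress allocation warnings */

/* Stub: pretend only node 0 is online. */
static bool node_online(int nid)
{
	return nid == 0;
}

/* Patch 2/2 variant: only an explicit-node request trips the warning. */
static void check_thisnode(int nid, gfp_t gfp_mask)
{
	if ((gfp_mask & __GFP_THISNODE) && !node_online(nid))
		fprintf(stderr, "WARN: __GFP_THISNODE on offline node %d\n", nid);
}

/* Counter-proposal variant: any request for an offline node warns
 * unless the caller passed __GFP_NOWARN. */
static void check_nowarn(int nid, gfp_t gfp_mask)
{
	if (!(gfp_mask & __GFP_NOWARN) && !node_online(nid))
		fprintf(stderr, "WARN: allocation hinted at offline node %d\n", nid);
}

int main(void)
{
	/* A plain GFP_KERNEL-style hint at offline node 1: the patch 2/2
	 * check stays silent (the fallback is expected to be used); the
	 * counter-proposal warns. */
	check_thisnode(1, 0);
	check_nowarn(1, 0);

	/* An explicit __GFP_THISNODE request for offline node 1: both
	 * checks warn, since __GFP_NOWARN is not set. */
	check_thisnode(1, __GFP_THISNODE);
	check_nowarn(1, __GFP_THISNODE);
	return 0;
}

[Running this prints one warning for the first pair of calls and two
for the second, which is the disagreement in a nutshell: whether a mere
placement hint at an offline node deserves a warning when a fallback
node will serve the allocation anyway.]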