Date: Fri, 13 Jan 2023 14:06:11 +0100
From: Michal Hocko
To: Mel Gorman
Cc: Andrew Morton, NeilBrown, Thierry Reding, Matthew Wilcox, Vlastimil Babka, Linux-MM, LKML
Subject: Re: [PATCH 5/6] mm/page_alloc: Explicitly define how __GFP_HIGH non-blocking allocations accesses reserves
References: <20230113111217.14134-1-mgorman@techsingularity.net> <20230113111217.14134-6-mgorman@techsingularity.net>
In-Reply-To: <20230113111217.14134-6-mgorman@techsingularity.net>

On Fri 13-01-23 11:12:16, Mel Gorman wrote:
> GFP_ATOMIC allocations get flagged ALLOC_HARDER which is a vague
> description. In preparation for the removal of GFP_ATOMIC redefine
> __GFP_ATOMIC to simply mean non-blocking and renaming ALLOC_HARDER to
> ALLOC_NON_BLOCK accordingly. __GFP_HIGH is required for access to reserves
> but non-blocking is granted more access. For example, GFP_NOWAIT is
> non-blocking but has no special access to reserves. A __GFP_NOFAIL
> blocking allocation is granted access similar to __GFP_HIGH if the
> only alternative is an OOM kill.
>
> Signed-off-by: Mel Gorman

Acked-by: Michal Hocko

Thanks!

> ---
>  mm/internal.h   |  7 +++++--
>  mm/page_alloc.c | 44 ++++++++++++++++++++++++--------------------
>  2 files changed, 29 insertions(+), 22 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 8706d46863df..23a37588073a 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -735,7 +735,10 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  #define ALLOC_OOM		ALLOC_NO_WATERMARKS
>  #endif
>
> -#define ALLOC_HARDER		 0x10 /* try to alloc harder */
> +#define ALLOC_NON_BLOCK		 0x10 /* Caller cannot block. Allow access
> +				       * to 25% of the min watermark or
> +				       * 62.5% if __GFP_HIGH is set.
> +				       */
>  #define ALLOC_MIN_RESERVE	 0x20 /* __GFP_HIGH set. Allow access to 50%
>  				       * of the min watermark.
>  				       */
> @@ -750,7 +753,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
>
>  /* Flags that allow allocations below the min watermark. */
> -#define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
> +#define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
>
>  enum ttu_flags;
>  struct tlbflush_unmap_batch;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6f41b84a97ac..b9ae0ba0a2ab 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3989,18 +3989,19 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>  		 * __GFP_HIGH allows access to 50% of the min reserve as well
>  		 * as OOM.
>  		 */
> -		if (alloc_flags & ALLOC_MIN_RESERVE)
> +		if (alloc_flags & ALLOC_MIN_RESERVE) {
>  			min -= min / 2;
>
> -		/*
> -		 * Non-blocking allocations can access some of the reserve
> -		 * with more access if also __GFP_HIGH. The reasoning is that
> -		 * a non-blocking caller may incur a more severe penalty
> -		 * if it cannot get memory quickly, particularly if it's
> -		 * also __GFP_HIGH.
> -		 */
> -		if (alloc_flags & ALLOC_HARDER)
> -			min -= min / 4;
> +			/*
> +			 * Non-blocking allocations (e.g. GFP_ATOMIC) can
> +			 * access more reserves than just __GFP_HIGH. Other
> +			 * non-blocking allocations requests such as GFP_NOWAIT
> +			 * or (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) do not get
> +			 * access to the min reserve.
> +			 */
> +			if (alloc_flags & ALLOC_NON_BLOCK)
> +				min -= min / 4;
> +		}
>
>  		/*
>  		 * OOM victims can try even harder than the normal reserve
> @@ -4851,28 +4852,30 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
>  	 * The caller may dip into page reserves a bit more if the caller
>  	 * cannot run direct reclaim, or if the caller has realtime scheduling
>  	 * policy or is asking for __GFP_HIGH memory. GFP_ATOMIC requests will
> -	 * set both ALLOC_HARDER (__GFP_ATOMIC) and ALLOC_MIN_RESERVE(__GFP_HIGH).
> +	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
>  	 */
>  	alloc_flags |= (__force int)
>  		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
>
> -	if (gfp_mask & __GFP_ATOMIC) {
> +	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
>  		/*
>  		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
>  		 * if it can't schedule.
>  		 */
>  		if (!(gfp_mask & __GFP_NOMEMALLOC)) {
> -			alloc_flags |= ALLOC_HARDER;
> +			alloc_flags |= ALLOC_NON_BLOCK;
>
>  			if (order > 0)
>  				alloc_flags |= ALLOC_HIGHATOMIC;
>  		}
>
>  		/*
> -		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
> -		 * comment for __cpuset_node_allowed().
> +		 * Ignore cpuset mems for non-blocking __GFP_HIGH (probably
> +		 * GFP_ATOMIC) rather than fail, see the comment for
> +		 * __cpuset_node_allowed().
>  		 */
> -		alloc_flags &= ~ALLOC_CPUSET;
> +		if (alloc_flags & ALLOC_MIN_RESERVE)
> +			alloc_flags &= ~ALLOC_CPUSET;
>  	} else if (unlikely(rt_task(current)) && in_task())
>  		alloc_flags |= ALLOC_MIN_RESERVE;
>
> @@ -5303,12 +5306,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		WARN_ON_ONCE_GFP(costly_order, gfp_mask);
>
>  		/*
> -		 * Help non-failing allocations by giving them access to memory
> -		 * reserves but do not use ALLOC_NO_WATERMARKS because this
> +		 * Help non-failing allocations by giving some access to memory
> +		 * reserves normally used for high priority non-blocking
> +		 * allocations but do not use ALLOC_NO_WATERMARKS because this
>  		 * could deplete whole memory reserves which would just make
> -		 * the situation worse
> +		 * the situation worse.
>  		 */
> -		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_HARDER, ac);
> +		page = __alloc_pages_cpuset_fallback(gfp_mask, order, ALLOC_MIN_RESERVE, ac);
>  		if (page)
>  			goto got_pg;
>
> --
> 2.35.3

--
Michal Hocko
SUSE Labs
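
As a rough illustration of the watermark arithmetic in the __zone_watermark_ok() hunk quoted above, the standalone sketch below applies the same adjustments to a hypothetical zone min watermark of 1024 pages. The flag values mirror mm/internal.h, but the program is not kernel code and the watermark figure is made up purely so the percentages are easy to see.

/*
 * Standalone sketch of the reserve-access policy described by the patch.
 * Not kernel code; the 1024-page min watermark is hypothetical.
 */
#include <stdio.h>

#define ALLOC_NON_BLOCK		0x10	/* caller cannot block */
#define ALLOC_MIN_RESERVE	0x20	/* __GFP_HIGH set */

/* Mirrors the new ALLOC_MIN_RESERVE/ALLOC_NON_BLOCK branch above. */
static long adjusted_min(long min, unsigned int alloc_flags)
{
	if (alloc_flags & ALLOC_MIN_RESERVE) {
		min -= min / 2;		/* __GFP_HIGH: 50% of the reserve */
		if (alloc_flags & ALLOC_NON_BLOCK)
			min -= min / 4;	/* also non-blocking: 62.5% in total */
	}
	return min;
}

int main(void)
{
	long min = 1024;	/* hypothetical zone min watermark, in pages */

	/* GFP_NOWAIT-style: non-blocking only, no reserve access -> 1024 */
	printf("non-blocking only:       %ld\n", adjusted_min(min, ALLOC_NON_BLOCK));
	/* __GFP_HIGH only: watermark halved -> 512 */
	printf("__GFP_HIGH only:         %ld\n", adjusted_min(min, ALLOC_MIN_RESERVE));
	/* GFP_ATOMIC-style: __GFP_HIGH and non-blocking -> 384 (62.5% access) */
	printf("__GFP_HIGH + non-block:  %ld\n",
	       adjusted_min(min, ALLOC_MIN_RESERVE | ALLOC_NON_BLOCK));
	return 0;
}

Mapping back to gfp_to_alloc_flags() after this patch: GFP_ATOMIC ends up with both ALLOC_MIN_RESERVE (from __GFP_HIGH) and ALLOC_NON_BLOCK, while GFP_NOWAIT gets only ALLOC_NON_BLOCK, which is why the first case above sees no reduction.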