Message-ID: <915a5034-53e6-9464-3fc7-4d1b5a0aa26d@suse.cz>
Date: Thu, 8 Dec 2022 17:51:11 +0100
Subject: Re: [PATCH 3/6] mm/page_alloc: Explicitly record high-order atomic allocations in alloc_flags
To: Mel Gorman, Linux-MM
Cc: Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding, Matthew Wilcox, LKML
References: <20221129151701.23261-1-mgorman@techsingularity.net>
 <20221129151701.23261-4-mgorman@techsingularity.net>
From: Vlastimil Babka
In-Reply-To: <20221129151701.23261-4-mgorman@techsingularity.net>

On 11/29/22 16:16, Mel Gorman wrote:
> A high-order ALLOC_HARDER allocation is assumed to be atomic. While that
> is accurate, it changes later in the series. In preparation, explicitly
> record high-order atomic allocations in gfp_to_alloc_flags().
>
> Signed-off-by: Mel Gorman
> ---
>  mm/internal.h   |  1 +
>  mm/page_alloc.c | 19 +++++++++++++------
>  2 files changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index d503e57a57a1..9a9d9b5ee87f 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -754,6 +754,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  #else
>  #define ALLOC_NOFRAGMENT        0x0
>  #endif
> +#define ALLOC_HIGHATOMIC        0x200 /* Allows access to MIGRATE_HIGHATOMIC */
>  #define ALLOC_KSWAPD            0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
>
>  enum ttu_flags;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index da746e9eb2cf..e2b65767dda0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3710,7 +3710,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>                  * reserved for high-order atomic allocation, so order-0
>                  * request should skip it.
>                  */
> -               if (order > 0 && alloc_flags & ALLOC_HARDER)
> +               if (alloc_flags & ALLOC_HIGHATOMIC)
>                         page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
>                 if (!page) {
>                         page = __rmqueue(zone, order, migratetype, alloc_flags);
> @@ -4028,8 +4028,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
>                         return true;
>                 }
>  #endif
> -               if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC))
> +               if ((alloc_flags & ALLOC_HIGHATOMIC) &&
> +                   !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
>                         return true;

alloc_harder is defined as (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
AFAICS this means we no longer allow ALLOC_OOM to use the highatomic
reserve. Isn't that a risk? (A small standalone sketch contrasting the
old and new check follows below, after the quoted patch.)

> +               }
>         }
>         return false;
>  }
> @@ -4291,7 +4293,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>                  * If this is a high-order atomic allocation then check
>                  * if the pageblock should be reserved for the future
>                  */
> -               if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
> +               if (unlikely(alloc_flags & ALLOC_HIGHATOMIC))
>                         reserve_highatomic_pageblock(page, zone, order);
>
>                 return page;
> @@ -4818,7 +4820,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>  }
>
>  static inline unsigned int
> -gfp_to_alloc_flags(gfp_t gfp_mask)
> +gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
>  {
>         unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
>
> @@ -4844,8 +4846,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
>                  * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
>                  * if it can't schedule.
>                  */
> -               if (!(gfp_mask & __GFP_NOMEMALLOC))
> +               if (!(gfp_mask & __GFP_NOMEMALLOC)) {
>                         alloc_flags |= ALLOC_HARDER;
> +
> +                       if (order > 0)
> +                               alloc_flags |= ALLOC_HIGHATOMIC;
> +               }
> +
>                 /*
>                  * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
>                  * comment for __cpuset_node_allowed().
> @@ -5053,7 +5060,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>          * kswapd needs to be woken up, and to avoid the cost of setting up
>          * alloc_flags precisely. So we do that now.
>          */
> -       alloc_flags = gfp_to_alloc_flags(gfp_mask);
> +       alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
>
>         /*
>          * We need to recalculate the starting point for the zonelist iterator
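
To make the ALLOC_OOM concern above concrete, here is a minimal standalone
C model of the highatomic part of the watermark check before and after the
patch. This is not kernel code: ALLOC_HIGHATOMIC (0x200) matches the patch,
while the ALLOC_HARDER and ALLOC_OOM values and the helper names are assumed
purely for illustration.

/* build with: cc -o highatomic_model highatomic_model.c */
#include <stdbool.h>
#include <stdio.h>

#define ALLOC_OOM               0x08    /* assumed value, for illustration */
#define ALLOC_HARDER            0x10    /* assumed value, for illustration */
#define ALLOC_HIGHATOMIC        0x200   /* as defined by the patch */

/* Old check: any "harder" request (ALLOC_HARDER or ALLOC_OOM) may consider
 * the MIGRATE_HIGHATOMIC free list in __zone_watermark_ok(). */
static bool highatomic_usable_old(unsigned int alloc_flags)
{
        return alloc_flags & (ALLOC_HARDER | ALLOC_OOM);
}

/* New check from the patch: only requests explicitly flagged
 * ALLOC_HIGHATOMIC may consider the reserve. */
static bool highatomic_usable_new(unsigned int alloc_flags)
{
        return alloc_flags & ALLOC_HIGHATOMIC;
}

int main(void)
{
        /* A request from an OOM victim can carry ALLOC_OOM but is not
         * marked ALLOC_HIGHATOMIC. */
        unsigned int oom_retry = ALLOC_OOM;

        printf("old check, ALLOC_OOM request: %s\n",
               highatomic_usable_old(oom_retry) ? "may use reserve" : "skips reserve");
        printf("new check, ALLOC_OOM request: %s\n",
               highatomic_usable_new(oom_retry) ? "may use reserve" : "skips reserve");
        return 0;
}

If keeping ALLOC_OOM access were desired, the new test could check
alloc_flags & (ALLOC_HIGHATOMIC | ALLOC_OOM) instead; whether that is the
intended semantics is exactly the question above.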