From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka,
	Michal Hocko, Hugh Dickins, LKML, Linux-MM, Mel Gorman
Subject: [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper
Date: Mon, 13 Jun 2022 13:56:18 +0100
Message-Id: <20220613125622.18628-4-mgorman@techsingularity.net>
In-Reply-To: <20220613125622.18628-1-mgorman@techsingularity.net>
References: <20220613125622.18628-1-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a preparation patch to allow the buddy removal code to be reused
in a later patch.

No functional change.

Signed-off-by: Mel Gorman
Tested-by: Minchan Kim
Acked-by: Minchan Kim
Reviewed-by: Nicolas Saenz Julienne
Acked-by: Vlastimil Babka
---
 mm/page_alloc.c | 81 ++++++++++++++++++++++++++++---------------------
 1 file changed, 47 insertions(+), 34 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index febd97f4a2fc..44d198af4b35 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3637,6 +3637,43 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
 #endif
 }
 
+static __always_inline
+struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
+			   unsigned int order, unsigned int alloc_flags,
+			   int migratetype)
+{
+	struct page *page;
+	unsigned long flags;
+
+	do {
+		page = NULL;
+		spin_lock_irqsave(&zone->lock, flags);
+		/*
+		 * order-0 request can reach here when the pcplist is skipped
+		 * due to non-CMA allocation context. HIGHATOMIC area is
+		 * reserved for high-order atomic allocation, so order-0
+		 * request should skip it.
+		 */
+		if (order > 0 && alloc_flags & ALLOC_HARDER)
+			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+		if (!page) {
+			page = __rmqueue(zone, order, migratetype, alloc_flags);
+			if (!page) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return NULL;
+			}
+		}
+		__mod_zone_freepage_state(zone, -(1 << order),
+					  get_pcppage_migratetype(page));
+		spin_unlock_irqrestore(&zone->lock, flags);
+	} while (check_new_pages(page, order));
+
+	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
+	zone_statistics(preferred_zone, zone, 1);
+
+	return page;
+}
+
 /* Remove page from the per-cpu list, caller must protect the list */
 static inline
 struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -3717,9 +3754,14 @@ struct page *rmqueue(struct zone *preferred_zone,
 			gfp_t gfp_flags, unsigned int alloc_flags,
 			int migratetype)
 {
-	unsigned long flags;
 	struct page *page;
 
+	/*
+	 * We most definitely don't want callers attempting to
+	 * allocate greater than order-1 page units with __GFP_NOFAIL.
+	 */
+	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
+
 	if (likely(pcp_allowed_order(order))) {
 		/*
 		 * MIGRATE_MOVABLE pcplist could have the pages on CMA area and
@@ -3733,35 +3775,10 @@ struct page *rmqueue(struct zone *preferred_zone,
 		}
 	}
 
-	/*
-	 * We most definitely don't want callers attempting to
-	 * allocate greater than order-1 page units with __GFP_NOFAIL.
-	 */
-	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-
-	do {
-		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
-		/*
-		 * order-0 request can reach here when the pcplist is skipped
-		 * due to non-CMA allocation context. HIGHATOMIC area is
-		 * reserved for high-order atomic allocation, so order-0
-		 * request should skip it.
-		 */
-		if (order > 0 && alloc_flags & ALLOC_HARDER)
-			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-		if (!page) {
-			page = __rmqueue(zone, order, migratetype, alloc_flags);
-			if (!page)
-				goto failed;
-		}
-		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pcppage_migratetype(page));
-		spin_unlock_irqrestore(&zone->lock, flags);
-	} while (check_new_pages(page, order));
-
-	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
-	zone_statistics(preferred_zone, zone, 1);
+	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
+							migratetype);
+	if (unlikely(!page))
+		return NULL;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
@@ -3772,10 +3789,6 @@ struct page *rmqueue(struct zone *preferred_zone,
 	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
 
 	return page;
-
-failed:
-	spin_unlock_irqrestore(&zone->lock, flags);
-	return NULL;
 }
 
 #ifdef CONFIG_FAIL_PAGE_ALLOC
-- 
2.35.3
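
For reference, the shape of rmqueue() once this patch is applied is roughly
as follows. This is a condensed sketch assembled from the hunks above, with
the pcplist fast path and the final bookkeeping elided into comments; it is
not the literal post-patch source:

	/* Condensed sketch of rmqueue() after the split. */
	struct page *rmqueue(struct zone *preferred_zone, struct zone *zone,
			     unsigned int order, gfp_t gfp_flags,
			     unsigned int alloc_flags, int migratetype)
	{
		struct page *page;

		/* __GFP_NOFAIL is not honoured for requests above order-1. */
		WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));

		if (likely(pcp_allowed_order(order))) {
			/* ... per-cpu list fast path, unchanged by this patch ... */
		}

		/* Slow path: all zone->lock handling now lives in the helper. */
		page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
				     migratetype);
		if (unlikely(!page))
			return NULL;

		/* ... ZONE_BOOSTED_WATERMARK test+clear and VM_BUG_ON_PAGE ... */
		return page;
	}

Note that because rmqueue_buddy() drops zone->lock and returns NULL itself
on allocation failure, the old failed: unlock label disappears and rmqueue()
no longer touches the zone lock or IRQ flags at all.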