From: Mel Gorman
To: Linux-MM
Cc: Dave Hansen, Matthew Wilcox, Vlastimil Babka, Michal Hocko, Nicholas Piggin, LKML, Mel Gorman
Subject: [PATCH 5/6] mm/page_alloc: Limit the number of pages on PCP lists when reclaim is active
Date: Fri, 21 May 2021 11:28:25 +0100
Message-Id: <20210521102826.28552-6-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210521102826.28552-1-mgorman@techsingularity.net>
References: <20210521102826.28552-1-mgorman@techsingularity.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

When kswapd is active, direct reclaim is potentially active.
In either case, it is possible that a zone would be balanced if pages
were not trapped on PCP lists. Instead of draining remote pages, simply
limit the size of the PCP lists while kswapd is active.

Signed-off-by: Mel Gorman
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 19 ++++++++++++++++++-
 mm/vmscan.c            | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 92182e0299b2..a0606239a167 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -647,6 +647,7 @@ enum zone_flags {
 	ZONE_BOOSTED_WATERMARK,		/* zone recently boosted watermarks.
 					 * Cleared when kswapd is woken.
 					 */
+	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
 };
 
 static inline unsigned long zone_managed_pages(struct zone *zone)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c3da6401f138..d8f8044781c4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3286,6 +3286,23 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
 	return batch;
 }
 
+static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
+{
+	int high = READ_ONCE(pcp->high);
+
+	if (unlikely(!high))
+		return 0;
+
+	if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+		return high;
+
+	/*
+	 * If reclaim is active, limit the number of pages that can be
+	 * stored on pcp lists
+	 */
+	return READ_ONCE(pcp->batch) << 2;
+}
+
 static void free_unref_page_commit(struct page *page, unsigned long pfn,
 				   int migratetype)
 {
@@ -3297,7 +3314,7 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
-	high = READ_ONCE(pcp->high);
+	high = nr_pcp_high(pcp, zone);
 	if (pcp->count >= high) {
 		int batch = READ_ONCE(pcp->batch);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5199b9696bab..c3c2100a80b8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3722,6 +3722,38 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
 	return sc->nr_scanned >= sc->nr_to_reclaim;
 }
 
+/* Page allocator PCP high watermark is lowered if reclaim is active. */
+static inline void
+update_reclaim_active(pg_data_t *pgdat, int highest_zoneidx, bool active)
+{
+	int i;
+	struct zone *zone;
+
+	for (i = 0; i <= highest_zoneidx; i++) {
+		zone = pgdat->node_zones + i;
+
+		if (!managed_zone(zone))
+			continue;
+
+		if (active)
+			set_bit(ZONE_RECLAIM_ACTIVE, &zone->flags);
+		else
+			clear_bit(ZONE_RECLAIM_ACTIVE, &zone->flags);
+	}
+}
+
+static inline void
+set_reclaim_active(pg_data_t *pgdat, int highest_zoneidx)
+{
+	update_reclaim_active(pgdat, highest_zoneidx, true);
+}
+
+static inline void
+clear_reclaim_active(pg_data_t *pgdat, int highest_zoneidx)
+{
+	update_reclaim_active(pgdat, highest_zoneidx, false);
+}
+
 /*
  * For kswapd, balance_pgdat() will reclaim pages across a node from zones
  * that are eligible for use by the caller until at least one zone is
@@ -3774,6 +3806,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 	boosted = nr_boost_reclaim;
 
 restart:
+	set_reclaim_active(pgdat, highest_zoneidx);
 	sc.priority = DEF_PRIORITY;
 	do {
 		unsigned long nr_reclaimed = sc.nr_reclaimed;
@@ -3907,6 +3940,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 		pgdat->kswapd_failures++;
 
 out:
+	clear_reclaim_active(pgdat, highest_zoneidx);
+
 	/* If reclaim was boosted, account for the reclaim done in this pass */
 	if (boosted) {
 		unsigned long flags;
-- 
2.26.2