From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Hillf Danton, Dave Hansen, Vlastimil Babka, Michal Hocko, LKML,
    Linux-MM, Mel Gorman
Subject: [PATCH 2/6] mm/page_alloc: Disassociate the pcp->high from pcp->batch
Date: Tue, 25 May 2021 09:01:15 +0100
Message-Id: <20210525080119.5455-3-mgorman@techsingularity.net>
In-Reply-To: <20210525080119.5455-1-mgorman@techsingularity.net>
References: <20210525080119.5455-1-mgorman@techsingularity.net>

The pcp high watermark is based on the batch size, but there is no
relationship between the two beyond the batch size being convenient to
use early in boot.

This patch takes the first step and bases pcp->high on the zone low
watermark, split across the number of CPUs local to a zone, while the
batch size remains the same to avoid increasing allocation latencies.
The intent behind the default pcp->high is "set the number of PCP pages
such that, if they are all full, background reclaim is not started
prematurely".

Note that in this patch the pcp->high values are adjusted after memory
hotplug events, min_free_kbytes adjustments and watermark scale factor
adjustments, but not after CPU hotplug events, which are handled later
in the series.
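As an illustration (not part of the patch), the new sizing logic reduces
to the standalone userspace sketch below. The low watermark and local
CPU count are hypothetical values, chosen here so the output matches the
"After" figures shown further down:

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical inputs: 2596-page low watermark, 4 local CPUs */
		int low_wmark_pages = 2596;
		int nr_local_cpus = 4;
		int batch = 63;		/* batch size is unchanged by this patch */

		/* Split the zone low watermark across the CPUs local to the zone */
		int high = low_wmark_pages / nr_local_cpus;

		/* Keep at least the historical high:batch ratio of 4 */
		if (high < (batch << 2))
			high = batch << 2;

		printf("high: %d\nbatch: %d\n", high, batch);	/* high: 649 */
		return 0;
	}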
On a test KVM instance:

Before
  grep -E "high:|batch" /proc/zoneinfo | tail -2
              high:  378
              batch: 63

After
  grep -E "high:|batch" /proc/zoneinfo | tail -2
              high:  649
              batch: 63

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 60 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 42 insertions(+), 18 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a48f305f0381..c0536e5d088a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2163,14 +2163,6 @@ void __init page_alloc_init_late(void)
 
 	/* Block until all are initialised */
 	wait_for_completion(&pgdat_init_all_done_comp);
 
-	/*
-	 * The number of managed pages has changed due to the initialisation
-	 * so the pcpu batch and high limits needs to be updated or the limits
-	 * will be artificially small.
-	 */
-	for_each_populated_zone(zone)
-		zone_pcp_update(zone);
-
 	/*
 	 * We initialized the rest of the deferred pages. Permanently disable
 	 * on-demand struct page initialization.
@@ -6594,13 +6586,12 @@ static int zone_batchsize(struct zone *zone)
 	int batch;
 
 	/*
-	 * The per-cpu-pages pools are set to around 1000th of the
-	 * size of the zone.
+	 * The number of pages to batch allocate is either ~0.1%
+	 * of the zone or 1MB, whichever is smaller. The batch
+	 * size strikes a balance between allocation latency
+	 * and zone lock contention.
 	 */
-	batch = zone_managed_pages(zone) / 1024;
-	/* But no more than a meg. */
-	if (batch * PAGE_SIZE > 1024 * 1024)
-		batch = (1024 * 1024) / PAGE_SIZE;
+	batch = min(zone_managed_pages(zone) >> 10, (1024 * 1024) / PAGE_SIZE);
 	batch /= 4;		/* We effectively *= 4 below */
 	if (batch < 1)
 		batch = 1;
@@ -6637,6 +6628,34 @@ static int zone_batchsize(struct zone *zone)
 #endif
 }
 
+static int zone_highsize(struct zone *zone, int batch)
+{
+#ifdef CONFIG_MMU
+	int high;
+	int nr_local_cpus;
+
+	/*
+	 * The high value of the pcp is based on the zone low watermark
+	 * so that if they are full then background reclaim will not be
+	 * started prematurely. The value is split across all online CPUs
+	 * local to the zone. Note that early in boot CPUs may not be
+	 * online yet.
+	 */
+	nr_local_cpus = max(1U, cpumask_weight(cpumask_of_node(zone_to_nid(zone))));
+	high = low_wmark_pages(zone) / nr_local_cpus;
+
+	/*
+	 * Ensure high is at least batch*4. The multiple is based on the
+	 * historical relationship between high and batch.
+	 */
+	high = max(high, batch << 2);
+
+	return high;
+#else
+	return 0;
+#endif
+}
+
 /*
  * pcp->high and pcp->batch values are related and generally batch is lower
  * than high. They are also related to pcp->count such that count is lower
@@ -6698,11 +6717,10 @@ static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long h
  */
 static void zone_set_pageset_high_and_batch(struct zone *zone)
 {
-	unsigned long new_high, new_batch;
+	int new_high, new_batch;
 
-	new_batch = zone_batchsize(zone);
-	new_high = 6 * new_batch;
-	new_batch = max(1UL, 1 * new_batch);
+	new_batch = max(1, zone_batchsize(zone));
+	new_high = zone_highsize(zone, new_batch);
 
 	if (zone->pageset_high == new_high &&
 	    zone->pageset_batch == new_batch)
@@ -8170,6 +8188,12 @@ static void __setup_per_zone_wmarks(void)
 		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
 		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
 
+		/*
+		 * The watermark sizes have changed so update the pcpu batch
+		 * and high limits or the limits may be inappropriate.
+		 */
+		zone_set_pageset_high_and_batch(zone);
+
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 
-- 
2.26.2
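As a companion illustration (again not part of the patch), the batch
calculation in the zone_batchsize() hunk above can be sketched the same
way in userspace. The page size and zone size are hypothetical, and the
rounding that the remainder of zone_batchsize() applies after this hunk
(which accounts for the "batch: 63" in the zoneinfo output) is not
shown:

	#include <stdio.h>

	int main(void)
	{
		long page_size = 4096;		/* hypothetical 4KiB pages */
		long managed_pages = 1L << 20;	/* hypothetical 4GiB zone */
		long cap = (1024 * 1024) / page_size;

		/* ~0.1% of the zone or 1MB worth of pages, whichever is smaller */
		long batch = managed_pages >> 10;
		if (batch > cap)
			batch = cap;

		batch /= 4;	/* zone_batchsize() effectively multiplies by 4 later */
		if (batch < 1)
			batch = 1;

		printf("batch: %ld\n", batch);	/* 64, before the later rounding */
		return 0;
	}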