From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM
Cc: Dave Hansen, Matthew Wilcox, Vlastimil Babka, Michal Hocko,
	Nicholas Piggin, LKML, Mel Gorman
Subject: [PATCH 4/6] mm/page_alloc: Scale the number of pages that are batch freed
Date: Fri, 21 May 2021 11:28:24 +0100
Message-Id: <20210521102826.28552-5-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210521102826.28552-1-mgorman@techsingularity.net>
References: <20210521102826.28552-1-mgorman@techsingularity.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a task is freeing a large number of order-0
pages, it may acquire the zone->lock multiple times, freeing pages in
batches. This can unnecessarily contend on the zone lock when freeing a
very large number of pages. This patch adapts the size of the batch
based on the recent pattern of frees, scaling up the batch size for
subsequent frees.

As the machines I used to test this are not large enough to illustrate
a problem, a debugging patch shows patterns like the following
(slightly edited for clarity):

Baseline vanilla kernel
  time-unmap-14426 [...] free_pcppages_bulk: free  63 count 378 high 378
  time-unmap-14426 [...] free_pcppages_bulk: free  63 count 378 high 378
  time-unmap-14426 [...] free_pcppages_bulk: free  63 count 378 high 378
  time-unmap-14426 [...] free_pcppages_bulk: free  63 count 378 high 378
  time-unmap-14426 [...] free_pcppages_bulk: free  63 count 378 high 378

With patches
  time-unmap-7724  [...] free_pcppages_bulk: free 126 count 814 high 814
  time-unmap-7724  [...] free_pcppages_bulk: free 252 count 814 high 814
  time-unmap-7724  [...] free_pcppages_bulk: free 504 count 814 high 814
  time-unmap-7724  [...] free_pcppages_bulk: free 751 count 814 high 814
  time-unmap-7724  [...] free_pcppages_bulk: free 751 count 814 high 814

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/mmzone.h |  3 ++-
 mm/page_alloc.c        | 30 ++++++++++++++++++++++++++++--
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b449151745d7..92182e0299b2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -343,8 +343,9 @@ struct per_cpu_pages {
 	int count;		/* number of pages in the list */
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
+	short free_factor;	/* batch scaling factor during free */
 #ifdef CONFIG_NUMA
-	int expire;		/* When 0, remote pagesets are drained */
+	short expire;		/* When 0, remote pagesets are drained */
 #endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2761b03b3a44..c3da6401f138 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3267,18 +3267,42 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn)
 	return true;
 }
 
+static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
+{
+	int min_nr_free, max_nr_free;
+
+	/* Check for PCP disabled or boot pageset */
+	if (unlikely(high < batch))
+		return 1;
+
+	min_nr_free = batch;
+	max_nr_free = high - batch;
+
+	batch <<= pcp->free_factor;
+	if (batch < max_nr_free)
+		pcp->free_factor++;
+	batch = clamp(batch, min_nr_free, max_nr_free);
+
+	return batch;
+}
+
 static void free_unref_page_commit(struct page *page, unsigned long pfn,
 				   int migratetype)
 {
 	struct zone *zone = page_zone(page);
 	struct per_cpu_pages *pcp;
+	int high;
 
 	__count_vm_event(PGFREE);
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
-	if (pcp->count >= READ_ONCE(pcp->high))
-		free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
+	high = READ_ONCE(pcp->high);
+	if (pcp->count >= high) {
+		int batch = READ_ONCE(pcp->batch);
+
+		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
+	}
 }
 
 /*
@@ -3531,6 +3555,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 
 	local_lock_irqsave(&pagesets.lock, flags);
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
+	pcp->free_factor >>= 1;
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	local_unlock_irqrestore(&pagesets.lock, flags);
@@ -6690,6 +6715,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
 	 */
 	pcp->high = BOOT_PAGESET_HIGH;
 	pcp->batch = BOOT_PAGESET_BATCH;
+	pcp->free_factor = 0;
 }
 
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
-- 
2.26.2
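
[Not part of the patch] For anyone who wants to see the scaling behaviour
in isolation, below is a minimal userspace sketch of the idea the patch
implements in nr_pcp_free() plus the free_factor decay on the allocation
path. The struct pcp_model, the clamp_int() helper and the high/batch
values (814/63, loosely taken from the trace above) are illustrative
assumptions, not kernel code.

/*
 * Illustrative userspace model only -- not kernel code. clamp() is
 * open-coded and the pageset values are made up for demonstration.
 */
#include <stdio.h>

struct pcp_model {
	int high;		/* high watermark before a bulk free */
	int batch;		/* base chunk size */
	short free_factor;	/* batch scaling factor during free */
};

static int clamp_int(int val, int lo, int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

/* Mirrors nr_pcp_free(): double the batch for each consecutive free */
static int model_nr_pcp_free(struct pcp_model *pcp)
{
	int batch = pcp->batch;
	int min_nr_free, max_nr_free;

	/* PCP disabled or boot pageset */
	if (pcp->high < batch)
		return 1;

	min_nr_free = batch;
	max_nr_free = pcp->high - batch;

	batch <<= pcp->free_factor;
	if (batch < max_nr_free)
		pcp->free_factor++;

	return clamp_int(batch, min_nr_free, max_nr_free);
}

int main(void)
{
	struct pcp_model pcp = { .high = 814, .batch = 63, .free_factor = 0 };
	int i, nr;

	/* Five bulk frees in a row with no allocations in between */
	for (i = 0; i < 5; i++) {
		nr = model_nr_pcp_free(&pcp);
		printf("free %3d count %3d high %3d\n", nr, pcp.high, pcp.high);
	}

	/* An allocation from the pcp list halves the scaling factor */
	pcp.free_factor >>= 1;
	printf("after an allocation, free_factor=%d\n", pcp.free_factor);

	return 0;
}

Built with a plain cc, the loop prints a doubling sequence (63, 126, 252,
504, 751) clamped at high - batch, which is the pattern the changelog
trace shows once free_factor has ramped up, and the final line shows the
factor decaying again once the allocation path runs.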