Date: Mon, 24 May 2021 10:12:20 +0100
From: Mel Gorman
To: Dave Hansen
Cc: Linux-MM, Dave Hansen, Matthew Wilcox, Vlastimil Babka,
	Michal Hocko, Nicholas Piggin, LKML
Subject: Re: [PATCH 4/6] mm/page_alloc: Scale the number of pages that are batch freed
Message-ID: <20210524091220.GC30378@techsingularity.net>
References: <20210521102826.28552-1-mgorman@techsingularity.net>
	<20210521102826.28552-5-mgorman@techsingularity.net>
	<8646d3ad-345f-7ec7-fe4a-ada2680487a3@intel.com>
In-Reply-To: <8646d3ad-345f-7ec7-fe4a-ada2680487a3@intel.com>
On Fri, May 21, 2021 at 03:36:05PM -0700, Dave Hansen wrote:
> ...
> > +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
> > +{
> > +	int min_nr_free, max_nr_free;
> > +
> > +	/* Check for PCP disabled or boot pageset */
> > +	if (unlikely(high < batch))
> > +		return 1;
> > +
> > +	min_nr_free = batch;
> > +	max_nr_free = high - batch;
>
> I puzzled over this for a minute. I *think* it means to say: "Leave at
> least one batch worth of pages in the pcp at all times so that the next
> allocation can still be satisfied from this pcp."
>

Yes, I added a comment.

> > +	batch <<= pcp->free_factor;
> > +	if (batch < max_nr_free)
> > +		pcp->free_factor++;
> > +	batch = clamp(batch, min_nr_free, max_nr_free);
> > +
> > +	return batch;
> > +}
> > +
> >  static void free_unref_page_commit(struct page *page, unsigned long pfn,
> >  				   int migratetype)
> >  {
> >  	struct zone *zone = page_zone(page);
> >  	struct per_cpu_pages *pcp;
> > +	int high;
> >
> >  	__count_vm_event(PGFREE);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> >  	list_add(&page->lru, &pcp->lists[migratetype]);
> >  	pcp->count++;
> > -	if (pcp->count >= READ_ONCE(pcp->high))
> > -		free_pcppages_bulk(zone, READ_ONCE(pcp->batch), pcp);
> > +	high = READ_ONCE(pcp->high);
> > +	if (pcp->count >= high) {
> > +		int batch = READ_ONCE(pcp->batch);
> > +
> > +		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
> > +	}
> >  }
> >
> >  /*
> > @@ -3531,6 +3555,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
> >
> >  	local_lock_irqsave(&pagesets.lock, flags);
> >  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
> > +	pcp->free_factor >>= 1;
> >  	list = &pcp->lists[migratetype];
> >  	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
> >  	local_unlock_irqrestore(&pagesets.lock, flags);
>
> A high-level description of the algorithm in the changelog would also be
> nice. I *think* it's basically:
>
> After hitting the high pcp mark, free one pcp->batch at a time. But, as
> subsequent pcp free operations occur, keep doubling the size of the
> freed batches. Cap them so that they always leave at least one
> pcp->batch worth of pages. Scale the size back down by half whenever an
> allocation that consumes a page from the pcp occurs.
>
> While I'd appreciate another comment or two, I do think this is worth
> doing, and the approach seems sound:
>
> Acked-by: Dave Hansen

Thanks, I added a few additional comments.

-- 
Mel Gorman
SUSE Labs
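
The scaling behaviour Dave describes above can be modelled outside the
kernel with a short standalone program. This is only a sketch, not the
mm/page_alloc.c code: the struct, the helper names and the high/batch
values of 512 and 64 pages are illustrative assumptions.

/*
 * Userspace model of the adaptive PCP batch-free scaling discussed in
 * this thread. Illustrative only: pcp_model, model_nr_free() and the
 * PCP_HIGH/PCP_BATCH values are simplified stand-ins, not kernel code.
 */
#include <stdio.h>

#define PCP_HIGH	512	/* assumed high watermark, in pages */
#define PCP_BATCH	64	/* assumed base batch size, in pages */

struct pcp_model {
	int free_factor;	/* doubles the flush size on successive frees */
};

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Mirrors the nr_pcp_free() logic quoted in the hunk above */
static int model_nr_free(struct pcp_model *pcp, int high, int batch)
{
	int min_nr_free, max_nr_free;

	/* PCP effectively disabled: fall back to one page at a time */
	if (high < batch)
		return 1;

	/* Always leave at least one batch worth of pages on the list */
	min_nr_free = batch;
	max_nr_free = high - batch;

	batch <<= pcp->free_factor;
	if (batch < max_nr_free)
		pcp->free_factor++;	/* next flush frees twice as much */

	return clamp_int(batch, min_nr_free, max_nr_free);
}

int main(void)
{
	struct pcp_model pcp = { .free_factor = 0 };
	int i;

	/* Successive flushes past the high mark keep doubling the batch,
	 * capped so the pcp list is never completely drained. */
	for (i = 0; i < 5; i++) {
		int nr = model_nr_free(&pcp, PCP_HIGH, PCP_BATCH);

		printf("flush %d frees %3d pages (free_factor now %d)\n",
		       i, nr, pcp.free_factor);
	}

	/* An allocation from the pcp scales the factor back down by half */
	pcp.free_factor >>= 1;
	printf("after an allocation, free_factor is %d\n", pcp.free_factor);
	return 0;
}

With those assumed values the five flushes free 64, 128, 256, 448 and
448 pages: the batch doubles on each flush until it is capped at
high - batch, so one batch worth of pages always remains, and the
factor is halved again as soon as an allocation comes out of the pcp.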