Subject: Re: [RFC PATCH v2 4/5] mm: Split release_pages work into 3 passes
To: Alexander Duyck
Cc: yang.shi@linux.alibaba.com, lkp@intel.com, rong.a.chen@intel.com,
    khlebnikov@yandex-team.ru, kirill@shutemov.name, hughd@google.com,
    linux-kernel@vger.kernel.org, daniel.m.jordan@oracle.com,
    linux-mm@kvack.org, shakeelb@google.com, willy@infradead.org,
    hannes@cmpxchg.org, tj@kernel.org, cgroups@vger.kernel.org,
    akpm@linux-foundation.org, richard.weiyang@gmail.com,
    mgorman@techsingularity.net, iamjoonsoo.kim@lge.com
References: <20200819041852.23414.95939.stgit@localhost.localdomain>
 <20200819042730.23414.41309.stgit@localhost.localdomain>
From: Alex Shi
Message-ID: <15edf807-ce03-83f7-407d-5929341b2b4e@linux.alibaba.com>
Date: Wed, 19 Aug 2020 15:53:15 +0800
In-Reply-To: <20200819042730.23414.41309.stgit@localhost.localdomain>

On 2020/8/19 12:27 PM, Alexander Duyck wrote:
> From: Alexander Duyck
>
> The release_pages function has a number of paths that end up with the
> LRU lock having to be released and reacquired. One such example is the
> freeing of THP pages, which requires releasing the LRU lock so that it
> can potentially be reacquired by __put_compound_page.
>
> To avoid that we can split the work into 3 passes. The first pass goes
> through without the LRU lock and sorts out the pages that are not on
> the LRU, so they can be freed immediately, from those that cannot. The
> second pass then removes the pages from the LRU in batches as large as
> a pagevec can hold before releasing the LRU lock. Once the pages have
> been removed from the LRU we can proceed to free the remaining pages
> without needing to worry about whether they are on the LRU.
>
> The general idea is to avoid bouncing the LRU lock between pages and
> to hopefully hold the lock for up to a full pagevec worth of pages.
>
> Signed-off-by: Alexander Duyck
> ---
>  mm/swap.c |  109 +++++++++++++++++++++++++++++++++++++------------------------
>  1 file changed, 67 insertions(+), 42 deletions(-)
>
> diff --git a/mm/swap.c b/mm/swap.c
> index fe53449fa1b8..b405f81b2c60 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -795,6 +795,54 @@ void lru_add_drain_all(void)
>  }
>  #endif
>
> +static void __release_page(struct page *page, struct list_head *pages_to_free)
> +{
> +	if (PageCompound(page)) {
> +		__put_compound_page(page);
> +	} else {
> +		/* Clear Active bit in case of parallel mark_page_accessed */
> +		__ClearPageActive(page);
> +		__ClearPageWaiters(page);
> +
> +		list_add(&page->lru, pages_to_free);
> +	}
> +}
> +
> +static void __release_lru_pages(struct pagevec *pvec,
> +				struct list_head *pages_to_free)
> +{
> +	struct lruvec *lruvec = NULL;
> +	unsigned long flags = 0;
> +	int i;
> +
> +	/*
> +	 * The pagevec at this point should contain a set of pages with
> +	 * their reference count at 0 and the LRU flag set. We will now
> +	 * need to pull the pages from their LRU lists.
> +	 *
> +	 * We walk the list backwards here since that way we are starting at
> +	 * the pages that should be warmest in the cache.
> +	 */
> +	for (i = pagevec_count(pvec); i--;) {
> +		struct page *page = pvec->pages[i];
> +
> +		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);

The lock bouncing is better with this patch. Would you like to go
further and use something like add_lruvecs to reduce the bouncing even
more?

Thanks
Alex

> +		VM_BUG_ON_PAGE(!PageLRU(page), page);
> +		__ClearPageLRU(page);
> +		del_page_from_lru_list(page, lruvec, page_off_lru(page));
> +	}
> +
> +	unlock_page_lruvec_irqrestore(lruvec, flags);
> +
> +	/*
> +	 * A batch of pages are no longer on the LRU list. Go through and
> +	 * start the final process of returning the deferred pages to their
> +	 * appropriate freelists.
> +	 */
> +	for (i = pagevec_count(pvec); i--;)
> +		__release_page(pvec->pages[i], pages_to_free);
> +}
> +
>  /**
>   * release_pages - batched put_page()
>   * @pages: array of pages to release
> @@ -806,32 +854,24 @@ void lru_add_drain_all(void)
>  void release_pages(struct page **pages, int nr)
>  {
>  	int i;
> +	struct pagevec pvec;
>  	LIST_HEAD(pages_to_free);
> -	struct lruvec *lruvec = NULL;
> -	unsigned long flags;
> -	unsigned int lock_batch;
>
> +	pagevec_init(&pvec);
> +
> +	/*
> +	 * We need to first walk through the list cleaning up the low hanging
> +	 * fruit and clearing those pages that either cannot be freed or that
> +	 * are non-LRU. We will store the LRU pages in a pagevec so that we
> +	 * can get to them in the next pass.
> +	 */
>  	for (i = 0; i < nr; i++) {
>  		struct page *page = pages[i];
>
> -		/*
> -		 * Make sure the IRQ-safe lock-holding time does not get
> -		 * excessive with a continuous string of pages from the
> -		 * same lruvec. The lock is held only if lruvec != NULL.
> -		 */
> -		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
> -			unlock_page_lruvec_irqrestore(lruvec, flags);
> -			lruvec = NULL;
> -		}
> -
>  		if (is_huge_zero_page(page))
>  			continue;
>
>  		if (is_zone_device_page(page)) {
> -			if (lruvec) {
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -				lruvec = NULL;
> -			}
>  			/*
>  			 * ZONE_DEVICE pages that return 'false' from
>  			 * put_devmap_managed_page() do not require special
> @@ -848,36 +888,21 @@ void release_pages(struct page **pages, int nr)
>  		if (!put_page_testzero(page))
>  			continue;
>
> -		if (PageCompound(page)) {
> -			if (lruvec) {
> -				unlock_page_lruvec_irqrestore(lruvec, flags);
> -				lruvec = NULL;
> -			}
> -			__put_compound_page(page);
> +		if (!PageLRU(page)) {
> +			__release_page(page, &pages_to_free);
>  			continue;
>  		}
>
> -		if (PageLRU(page)) {
> -			struct lruvec *prev_lruvec = lruvec;
> -
> -			lruvec = relock_page_lruvec_irqsave(page, lruvec,
> -								&flags);
> -			if (prev_lruvec != lruvec)
> -				lock_batch = 0;
> -
> -			VM_BUG_ON_PAGE(!PageLRU(page), page);
> -			__ClearPageLRU(page);
> -			del_page_from_lru_list(page, lruvec, page_off_lru(page));
> +		/* record page so we can get it in the next pass */
> +		if (!pagevec_add(&pvec, page)) {
> +			__release_lru_pages(&pvec, &pages_to_free);
> +			pagevec_reinit(&pvec);
>  		}
> -
> -		/* Clear Active bit in case of parallel mark_page_accessed */
> -		__ClearPageActive(page);
> -		__ClearPageWaiters(page);
> -
> -		list_add(&page->lru, &pages_to_free);
>  	}
> -	if (lruvec)
> -		unlock_page_lruvec_irqrestore(lruvec, flags);
> +
> +	/* flush any remaining LRU pages that need to be processed */
> +	if (pagevec_count(&pvec))
> +		__release_lru_pages(&pvec, &pages_to_free);
>
>  	mem_cgroup_uncharge_list(&pages_to_free);
>  	free_unref_page_list(&pages_to_free);
>