Date: Sat, 23 Oct 2021 02:13:43 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Anthony Yznaga
Cc: Andrew Morton, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: Optimise put_pages_list()
References: <20211007192138.561673-1-willy@infradead.org>

On Fri, Oct 22, 2021 at 04:26:59PM -0700, Anthony Yznaga wrote:
> On 10/7/21 12:21 PM, Matthew Wilcox (Oracle) wrote:
> > Instead of calling put_page() one page at a time, pop pages off
> > the list if their refcount was too high and pass the remainder to
> > put_unref_page_list().  This should be a speed improvement, but I have
> > no measurements to support that.  Current callers do not care about
> > performance, but I hope to add some which do.
> >
> > Signed-off-by: Matthew Wilcox (Oracle)
> > ---
> > v2:
> >  - Handle compound pages (Mel)
> >  - Comment why we don't need to handle PageLRU
> >  - Added call to __ClearPageWaiters(), matching that in release_pages()
> >
> >  mm/swap.c | 23 ++++++++++++++++-------
> >  1 file changed, 16 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/swap.c b/mm/swap.c
> > index af3cad4e5378..9f334d503fd2 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -134,18 +134,27 @@ EXPORT_SYMBOL(__put_page);
> >   * put_pages_list() - release a list of pages
> >   * @pages: list of pages threaded on page->lru
> >   *
> > - * Release a list of pages which are strung together on page.lru.  Currently
> > - * used by read_cache_pages() and related error recovery code.
> > + * Release a list of pages which are strung together on page.lru.
> >   */
> >  void put_pages_list(struct list_head *pages)
> >  {
> > -	while (!list_empty(pages)) {
> > -		struct page *victim;
> > +	struct page *page, *next;
> >
> > -		victim = lru_to_page(pages);
> > -		list_del(&victim->lru);
> > -		put_page(victim);
> > +	list_for_each_entry_safe(page, next, pages, lru) {
> > +		if (!put_page_testzero(page)) {
> > +			list_del(&page->lru);
> > +			continue;
> > +		}
>
> I know that compound pages are not currently passed to put_pages_list(),
> but I assume the put_page_testzero() should only be done on the head
> page similar to release_pages()?

Fun fact about pages: You can't put a tail page on an LRU list.  Why?

struct page {
...
	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
...
		struct {	/* Tail pages of compound page */
			unsigned long compound_head;	/* Bit zero is set */

so if you try to put a tail page on the LRU list, it becomes no longer
a tail page.
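[Editor's sketch] The union overlap described above is easy to demonstrate outside the kernel. The following is a minimal, hypothetical userspace sketch, not kernel code: fake_page and fake_list_head are simplified stand-ins for struct page and struct list_head, and the real struct page nests these fields inside larger structs within the union. It shows that linking a "tail page" onto a list writes lru.next, which aliases compound_head and clears the bit-zero tail marker.

#include <stdio.h>

struct fake_list_head {
	struct fake_list_head *next, *prev;
};

struct fake_page {
	unsigned long flags;
	union {
		struct fake_list_head lru;	/* page cache / anonymous pages */
		unsigned long compound_head;	/* tail pages: head pointer, bit 0 set */
	};
};

int main(void)
{
	static struct fake_page head, tail;
	static struct fake_list_head lru_list = { &lru_list, &lru_list };

	/* Mark 'tail' as a tail page: pointer to its head page with bit 0 set. */
	tail.compound_head = (unsigned long)&head | 1UL;
	printf("before list add: compound_head=%#lx, tail bit=%lu\n",
	       tail.compound_head, tail.compound_head & 1);

	/* "Add the tail page to an LRU list": writing lru.next overwrites
	 * compound_head, since both occupy the same union slot. */
	tail.lru.next = lru_list.next;
	tail.lru.prev = &lru_list;
	lru_list.next = &tail.lru;

	printf("after  list add: compound_head=%#lx, tail bit=%lu\n",
	       tail.compound_head, tail.compound_head & 1);
	return 0;
}

Because list_head pointers are always even (pointer-aligned), the bit-zero tail marker comes out clear after the list add, which is presumably why pages threaded on page->lru, as put_pages_list() expects, can be assumed not to be tail pages.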