From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim,
	Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v13 27/72] page cache: Convert find_get_pages_range_tag to XArray
Date: Mon, 11 Jun 2018 07:05:54 -0700
Message-Id: <20180611140639.17215-28-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180611140639.17215-1-willy@infradead.org>
References: <20180611140639.17215-1-willy@infradead.org>

From: Matthew Wilcox <willy@infradead.org>

The 'end' parameter of the xas_for_each iterator avoids a useless
iteration at the end of the range.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 include/linux/pagemap.h |  4 +--
 mm/filemap.c            | 68 ++++++++++++++++-------------------------
 2 files changed, 28 insertions(+), 44 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2f5d2d3ebaac..a6d635fefb01 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -363,10 +363,10 @@ static inline unsigned find_get_pages(struct address_space *mapping,
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
			       unsigned int nr_pages, struct page **pages);
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, int tag, unsigned int nr_pages,
+			pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
			struct page **pages);
 static inline unsigned find_get_pages_tag(struct address_space *mapping,
-			pgoff_t *index, int tag, unsigned int nr_pages,
+			pgoff_t *index, xa_tag_t tag, unsigned int nr_pages,
			struct page **pages)
 {
	return find_get_pages_range_tag(mapping, index, (pgoff_t)-1, tag,
diff --git a/mm/filemap.c b/mm/filemap.c
index 8a69613fcdf3..83328635edaa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1789,74 +1789,58 @@ EXPORT_SYMBOL(find_get_pages_contig);
  * @tag. We update @index to index the next page for the traversal.
  */
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, int tag, unsigned int nr_pages,
+			pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
			struct page **pages)
 {
-	struct radix_tree_iter iter;
-	void **slot;
+	XA_STATE(xas, &mapping->i_pages, *index);
+	struct page *page;
	unsigned ret = 0;
 
	if (unlikely(!nr_pages))
		return 0;
 
	rcu_read_lock();
-	radix_tree_for_each_tagged(slot, &mapping->i_pages, &iter, *index, tag) {
-		struct page *head, *page;
-
-		if (iter.index > end)
-			break;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
+	xas_for_each_tagged(&xas, page, end, tag) {
+		struct page *head;
+		if (xas_retry(&xas, page))
			continue;
-
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-			/*
-			 * A shadow entry of a recently evicted page.
-			 *
-			 * Those entries should never be tagged, but
-			 * this tree walk is lockless and the tags are
-			 * looked up in bulk, one radix tree node at a
-			 * time, so there is a sizable window for page
-			 * reclaim to evict a page we saw tagged.
-			 *
-			 * Skip over it.
-			 */
+		/*
+		 * Shadow entries should never be tagged, but this iteration
+		 * is lockless so there is a window for page reclaim to evict
+		 * a page we saw tagged. Skip over it.
+		 */
+		if (xa_is_value(page))
			continue;
-		}
 
		head = compound_head(page);
		if (!page_cache_get_speculative(head))
-			goto repeat;
+			goto retry;
 
		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
+		if (compound_head(page) != head)
+			goto put_page;
 
		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
+		if (unlikely(page != xas_reload(&xas)))
+			goto put_page;
 
		pages[ret] = page;
		if (++ret == nr_pages) {
-			*index = pages[ret - 1]->index + 1;
+			*index = page->index + 1;
			goto out;
		}
+		continue;
+put_page:
+		put_page(head);
+retry:
+		xas_reset(&xas);
	}
 
	/*
-	 * We come here when we got at @end. We take care to not overflow the
+	 * We come here when we got to @end. We take care to not overflow the
	 * index @index as it confuses some of the callers. This breaks the
-	 * iteration when there is page at index -1 but that is already broken
-	 * anyway.
+	 * iteration when there is a page at index -1 but that is already
+	 * broken anyway.
	 */
	if (end == (pgoff_t)-1)
		*index = (pgoff_t)-1;
-- 
2.17.1