From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v14 111/138] mm/filemap: Convert find_get_entry to return a folio
Date: Thu, 15 Jul 2021 04:36:37 +0100
Message-Id: <20210715033704.692967-112-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Convert callers to cope. Saves 580 bytes of kernel text; all five
callers are reduced in size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 129 +++++++++++++++++++++++++--------------------------
 1 file changed, 64 insertions(+), 65 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 4a81eaff363e..c4190c0a6d86 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1907,37 +1907,36 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 }
 EXPORT_SYMBOL(__filemap_get_folio);
 
-static inline struct page *find_get_entry(struct xa_state *xas, pgoff_t max,
+static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
 		xa_mark_t mark)
 {
-	struct page *page;
+	struct folio *folio;
 
 retry:
 	if (mark == XA_PRESENT)
-		page = xas_find(xas, max);
+		folio = xas_find(xas, max);
 	else
-		page = xas_find_marked(xas, max, mark);
+		folio = xas_find_marked(xas, max, mark);
 
-	if (xas_retry(xas, page))
+	if (xas_retry(xas, folio))
 		goto retry;
 	/*
 	 * A shadow entry of a recently evicted page, a swap
 	 * entry from shmem/tmpfs or a DAX entry. Return it
 	 * without attempting to raise page count.
 	 */
-	if (!page || xa_is_value(page))
-		return page;
+	if (!folio || xa_is_value(folio))
+		return folio;
 
-	if (!page_cache_get_speculative(page))
+	if (!folio_try_get_rcu(folio))
 		goto reset;
 
-	/* Has the page moved or been split? */
-	if (unlikely(page != xas_reload(xas))) {
-		put_page(page);
+	if (unlikely(folio != xas_reload(xas))) {
+		folio_put(folio);
 		goto reset;
 	}
-	return page;
+	return folio;
 
 reset:
 	xas_reset(xas);
 	goto retry;
@@ -1978,7 +1977,7 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t start,
 	unsigned nr_entries = PAGEVEC_SIZE;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
+	while ((page = &find_get_entry(&xas, end, XA_PRESENT)->page)) {
 		/*
 		 * Terminate early on finding a THP, to allow the caller to
 		 * handle it all at once; but continue if this is hugetlbfs.
@@ -2025,38 +2024,38 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t start,
 		pgoff_t end, struct pagevec *pvec, pgoff_t *indices)
 {
 	XA_STATE(xas, &mapping->i_pages, start);
-	struct page *page;
+	struct folio *folio;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
-		if (!xa_is_value(page)) {
-			if (page->index < start)
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
+		if (!xa_is_value(folio)) {
+			if (folio->index < start)
 				goto put;
-			VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
-			if (page->index + thp_nr_pages(page) - 1 > end)
+			VM_BUG_ON_FOLIO(folio->index != xas.xa_index,
+					folio);
+			if (folio->index + folio_nr_pages(folio) - 1 > end)
 				goto put;
-			if (!trylock_page(page))
+			if (!folio_trylock(folio))
 				goto put;
-			if (page->mapping != mapping || PageWriteback(page))
+			if (folio->mapping != mapping ||
+			    folio_test_writeback(folio))
 				goto unlock;
-			VM_BUG_ON_PAGE(!thp_contains(page, xas.xa_index),
-					page);
+			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
+					folio);
 		}
 		indices[pvec->nr] = xas.xa_index;
-		if (!pagevec_add(pvec, page))
+		if (!pagevec_add(pvec, &folio->page))
 			break;
 		goto next;
 unlock:
-		unlock_page(page);
+		folio_unlock(folio);
 put:
-		put_page(page);
+		folio_put(folio);
 next:
-		if (!xa_is_value(page) && PageTransHuge(page)) {
-			unsigned int nr_pages = thp_nr_pages(page);
-
-			/* Final THP may cross MAX_LFS_FILESIZE on 32-bit */
-			xas_set(&xas, page->index + nr_pages);
-			if (xas.xa_index < nr_pages)
+		if (!xa_is_value(folio) && folio_multi(folio)) {
+			xas_set(&xas, folio->index + folio_nr_pages(folio));
+			/* Did we wrap on 32-bit? */
+			if (!xas.xa_index)
 				break;
 		}
 	}
@@ -2091,19 +2090,19 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			      struct page **pages)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
-	struct page *page;
+	struct folio *folio;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, XA_PRESENT))) {
+	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
 		/* Skip over shadow, swap and DAX entries */
-		if (xa_is_value(page))
+		if (xa_is_value(folio))
 			continue;
 
-		pages[ret] = find_subpage(page, xas.xa_index);
+		pages[ret] = folio_file_page(folio, xas.xa_index);
 		if (++ret == nr_pages) {
 			*start = xas.xa_index + 1;
 			goto out;
@@ -2200,25 +2199,25 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
 			struct page **pages)
 {
 	XA_STATE(xas, &mapping->i_pages, *index);
-	struct page *page;
+	struct folio *folio;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, end, tag))) {
+	while ((folio = find_get_entry(&xas, end, tag))) {
 		/*
 		 * Shadow entries should never be tagged, but this iteration
 		 * is lockless so there is a window for page reclaim to evict
 		 * a page we saw tagged. Skip over it.
 		 */
-		if (xa_is_value(page))
+		if (xa_is_value(folio))
 			continue;
 
-		pages[ret] = page;
+		pages[ret] = &folio->page;
 		if (++ret == nr_pages) {
-			*index = page->index + thp_nr_pages(page);
+			*index = folio->index + folio_nr_pages(folio);
 			goto out;
 		}
 	}
@@ -2697,44 +2696,44 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 }
 EXPORT_SYMBOL(generic_file_read_iter);
 
-static inline loff_t page_seek_hole_data(struct xa_state *xas,
-		struct address_space *mapping, struct page *page,
+static inline loff_t folio_seek_hole_data(struct xa_state *xas,
+		struct address_space *mapping, struct folio *folio,
 		loff_t start, loff_t end, bool seek_data)
 {
 	const struct address_space_operations *ops = mapping->a_ops;
 	size_t offset, bsz = i_blocksize(mapping->host);
 
-	if (xa_is_value(page) || PageUptodate(page))
+	if (xa_is_value(folio) || folio_test_uptodate(folio))
 		return seek_data ? start : end;
 	if (!ops->is_partially_uptodate)
 		return seek_data ? end : start;
 
 	xas_pause(xas);
 	rcu_read_unlock();
-	lock_page(page);
-	if (unlikely(page->mapping != mapping))
+	folio_lock(folio);
+	if (unlikely(folio->mapping != mapping))
 		goto unlock;
 
-	offset = offset_in_thp(page, start) & ~(bsz - 1);
+	offset = offset_in_folio(folio, start) & ~(bsz - 1);
 
 	do {
-		if (ops->is_partially_uptodate(page, offset, bsz) == seek_data)
+		if (ops->is_partially_uptodate(&folio->page, offset, bsz) ==
+							seek_data)
 			break;
 		start = (start + bsz) & ~(bsz - 1);
 		offset += bsz;
-	} while (offset < thp_size(page));
+	} while (offset < folio_size(folio));
 unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 	rcu_read_lock();
 	return start;
 }
 
-static inline
-unsigned int seek_page_size(struct xa_state *xas, struct page *page)
+static inline size_t seek_folio_size(struct xa_state *xas, struct folio *folio)
 {
-	if (xa_is_value(page))
+	if (xa_is_value(folio))
 		return PAGE_SIZE << xa_get_order(xas->xa, xas->xa_index);
-	return thp_size(page);
+	return folio_size(folio);
 }
 
 /**
@@ -2761,15 +2760,15 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 	XA_STATE(xas, &mapping->i_pages, start >> PAGE_SHIFT);
 	pgoff_t max = (end - 1) >> PAGE_SHIFT;
 	bool seek_data = (whence == SEEK_DATA);
-	struct page *page;
+	struct folio *folio;
 
 	if (end <= start)
 		return -ENXIO;
 
 	rcu_read_lock();
-	while ((page = find_get_entry(&xas, max, XA_PRESENT))) {
-		loff_t pos = (u64)xas.xa_index << PAGE_SHIFT;
-		unsigned int seek_size;
+	while ((folio = find_get_entry(&xas, max, XA_PRESENT))) {
+		loff_t pos = xas.xa_index * PAGE_SIZE;
+		size_t seek_size;
 
 		if (start < pos) {
 			if (!seek_data)
@@ -2777,9 +2776,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 			start = pos;
 		}
 
-		seek_size = seek_page_size(&xas, page);
-		pos = round_up(pos + 1, seek_size);
-		start = page_seek_hole_data(&xas, mapping, page, start, pos,
+		seek_size = seek_folio_size(&xas, folio);
+		pos = round_up((u64)pos + 1, seek_size);
+		start = folio_seek_hole_data(&xas, mapping, folio, start, pos,
 				seek_data);
 		if (start < pos)
 			goto unlock;
@@ -2787,15 +2786,15 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 			break;
 		if (seek_size > PAGE_SIZE)
 			xas_set(&xas, pos >> PAGE_SHIFT);
-		if (!xa_is_value(page))
-			put_page(page);
+		if (!xa_is_value(folio))
+			folio_put(folio);
 	}
 	if (seek_data)
 		start = -ENXIO;
 unlock:
 	rcu_read_unlock();
-	if (page && !xa_is_value(page))
-		put_page(page);
+	if (folio && !xa_is_value(folio))
+		folio_put(folio);
 	if (start > end)
 		return end;
 	return start;
-- 
2.30.2
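
For anyone following the conversion pattern rather than the diff itself,
here is a minimal sketch (not from the patch; example_lookup() is a
made-up name) of how a caller copes with the new return type. It
assumes it sits inside mm/filemap.c, where the static find_get_entry()
is visible. The point is that callers now work with a folio throughout
and only convert back to a struct page at the boundary, via
folio_file_page() or &folio->page, exactly as the five converted
callers above do:

	/* Hypothetical caller, for illustration only. */
	static struct page *example_lookup(struct address_space *mapping,
			pgoff_t index, pgoff_t max)
	{
		XA_STATE(xas, &mapping->i_pages, index);
		struct folio *folio;

		rcu_read_lock();
		folio = find_get_entry(&xas, max, XA_PRESENT);
		rcu_read_unlock();

		/* Shadow/swap/DAX value entries have no struct page. */
		if (!folio || xa_is_value(folio))
			return NULL;

		/*
		 * find_get_entry() took a reference on the folio; it
		 * travels with the page we hand back, so the caller
		 * still releases it with put_page() as before.
		 */
		return folio_file_page(folio, xas.xa_index);
	}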