From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-usb@vger.kernel.org, Bjorn Andersson, Stefano Stabellini,
	iommu@lists.linux-foundation.org, linux-remoteproc@vger.kernel.org,
	linux-s390@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	cgroups@vger.kernel.org, linux-sh@vger.kernel.org, David Howells
Subject: [PATCH v6 24/99] page cache: Add and replace pages using the XArray
Date: Wed, 17 Jan 2018 12:20:48 -0800
Message-Id: <20180117202203.19756-25-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>

From: Matthew Wilcox

Use the XArray APIs to add and replace pages in the page cache.  This
removes two uses of the radix tree preload API and makes the code
significantly shorter.

Signed-off-by: Matthew Wilcox
---
 include/linux/swap.h |   8 ++-
 mm/filemap.c         | 143 ++++++++++++++++++++++-----------------------
 2 files changed, 67 insertions(+), 84 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index c2b8128799c1..394957963c4b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -299,8 +299,12 @@ void *workingset_eviction(struct address_space *mapping, struct page *page);
 bool workingset_refault(void *shadow);
 void workingset_activation(struct page *page);
 
-/* Do not use directly, use workingset_lookup_update */
-void workingset_update_node(struct radix_tree_node *node);
+/* Only track the nodes of mappings with shadow entries */
+void workingset_update_node(struct xa_node *node);
+#define mapping_set_update(xas, mapping) do {				\
+	if (!dax_mapping(mapping) && !shmem_mapping(mapping))		\
+		xas_set_update(xas, workingset_update_node);		\
+} while (0)
 
 /* Returns workingset_update_node() if the mapping has shadow entries. */
 #define workingset_lookup_update(mapping)				\
diff --git a/mm/filemap.c b/mm/filemap.c
index f1b4480723dd..e6371b551de1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,35 +112,6 @@
  *    ->tasklist_lock            (memory_failure, collect_procs_ao)
  */
 
-static int page_cache_tree_insert(struct address_space *mapping,
-				  struct page *page, void **shadowp)
-{
-	struct radix_tree_node *node;
-	void **slot;
-	int error;
-
-	error = __radix_tree_create(&mapping->pages, page->index, 0,
-				    &node, &slot);
-	if (error)
-		return error;
-	if (*slot) {
-		void *p;
-
-		p = radix_tree_deref_slot_protected(slot,
-						    &mapping->pages.xa_lock);
-		if (!xa_is_value(p))
-			return -EEXIST;
-
-		mapping->nrexceptional--;
-		if (shadowp)
-			*shadowp = p;
-	}
-	__radix_tree_replace(&mapping->pages, node, slot, page,
-			     workingset_lookup_update(mapping));
-	mapping->nrpages++;
-	return 0;
-}
-
 static void page_cache_tree_delete(struct address_space *mapping,
 				   struct page *page, void *shadow)
 {
@@ -776,51 +747,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  * locked.  This function does not add the new page to the LRU, the
  * caller must do that.
  *
- * The remove + add is atomic.  The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic.  This function cannot fail.
  */
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
-	int error;
+	struct address_space *mapping = old->mapping;
+	void (*freepage)(struct page *) = mapping->a_ops->freepage;
+	pgoff_t offset = old->index;
+	XA_STATE(xas, &mapping->pages, offset);
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(old), old);
 	VM_BUG_ON_PAGE(!PageLocked(new), new);
 	VM_BUG_ON_PAGE(new->mapping, new);
 
-	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
-	if (!error) {
-		struct address_space *mapping = old->mapping;
-		void (*freepage)(struct page *);
-		unsigned long flags;
-
-		pgoff_t offset = old->index;
-		freepage = mapping->a_ops->freepage;
-
-		get_page(new);
-		new->mapping = mapping;
-		new->index = offset;
+	get_page(new);
+	new->mapping = mapping;
+	new->index = offset;
 
-		xa_lock_irqsave(&mapping->pages, flags);
-		__delete_from_page_cache(old, NULL);
-		error = page_cache_tree_insert(mapping, new, NULL);
-		BUG_ON(error);
+	xas_lock_irqsave(&xas, flags);
+	xas_store(&xas, new);
 
-		/*
-		 * hugetlb pages do not participate in page cache accounting.
-		 */
-		if (!PageHuge(new))
-			__inc_node_page_state(new, NR_FILE_PAGES);
-		if (PageSwapBacked(new))
-			__inc_node_page_state(new, NR_SHMEM);
-		xa_unlock_irqrestore(&mapping->pages, flags);
-		mem_cgroup_migrate(old, new);
-		radix_tree_preload_end();
-		if (freepage)
-			freepage(old);
-		put_page(old);
-	}
+	old->mapping = NULL;
+	/* hugetlb pages do not participate in page cache accounting. */
+	if (!PageHuge(old))
+		__dec_node_page_state(new, NR_FILE_PAGES);
+	if (!PageHuge(new))
+		__inc_node_page_state(new, NR_FILE_PAGES);
+	if (PageSwapBacked(old))
+		__dec_node_page_state(new, NR_SHMEM);
+	if (PageSwapBacked(new))
+		__inc_node_page_state(new, NR_SHMEM);
+	xas_unlock_irqrestore(&xas, flags);
+	mem_cgroup_migrate(old, new);
+	if (freepage)
+		freepage(old);
+	put_page(old);
 
-	return error;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
@@ -829,12 +793,15 @@ static int __add_to_page_cache_locked(struct page *page,
 				      pgoff_t offset, gfp_t gfp_mask,
 				      void **shadowp)
 {
+	XA_STATE(xas, &mapping->pages, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
 		error = mem_cgroup_try_charge(page, current->mm,
@@ -843,39 +810,51 @@ static int __add_to_page_cache_locked(struct page *page,
 			return error;
 	}
 
-	error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
-	if (error) {
-		if (!huge)
-			mem_cgroup_cancel_charge(page, memcg, false);
-		return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
-	xa_lock_irq(&mapping->pages);
-	error = page_cache_tree_insert(mapping, page, shadowp);
-	radix_tree_preload_end();
-	if (unlikely(error))
-		goto err_insert;
+	do {
+		xas_lock_irq(&xas);
+		old = xas_create(&xas);
+		if (xas_error(&xas))
+			goto unlock;
+		if (xa_is_value(old)) {
+			mapping->nrexceptional--;
+			if (shadowp)
+				*shadowp = old;
+		} else if (old) {
+			xas_set_err(&xas, -EEXIST);
+			goto unlock;
+		}
+
+		xas_store(&xas, page);
+		mapping->nrpages++;
+
+		/*
+		 * hugetlb pages do not participate in
+		 * page cache accounting.
+		 */
+		if (!huge)
+			__inc_node_page_state(page, NR_FILE_PAGES);
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp_mask & ~__GFP_HIGHMEM));
+
+	if (xas_error(&xas))
+		goto error;
 
-	/* hugetlb pages do not participate in page cache accounting. */
-	if (!huge)
-		__inc_node_page_state(page, NR_FILE_PAGES);
-	xa_unlock_irq(&mapping->pages);
 	if (!huge)
 		mem_cgroup_commit_charge(page, memcg, false, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
-err_insert:
+error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	xa_unlock_irq(&mapping->pages);
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
 	put_page(page);
-	return error;
+	return xas_error(&xas);
 }
 
 /**
-- 
2.15.1
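
For readers new to the XArray, the insertion path above follows the
xas_nomem() idiom: take the xa_lock, attempt the store, record any
-ENOMEM in the xa_state, drop the lock, and let xas_nomem() allocate a
node outside the lock before retrying.  A minimal sketch of that idiom
follows; the cache_add() helper is purely illustrative and not part of
this patch, but it uses only the xas_* calls that the patch itself
uses, in the same roles.

#include <linux/xarray.h>
#include <linux/gfp.h>

/*
 * Illustrative sketch only, not part of the patch: store @entry at
 * @index in @xa, failing with -EEXIST if the slot is already occupied.
 * Node allocation happens in xas_nomem() outside the lock and the
 * store is then retried, as in __add_to_page_cache_locked() above.
 */
static int cache_add(struct xarray *xa, unsigned long index, void *entry,
		     gfp_t gfp)
{
	XA_STATE(xas, xa, index);
	void *curr;

	do {
		xas_lock_irq(&xas);
		curr = xas_create(&xas);	/* may record -ENOMEM in xas */
		if (!xas_error(&xas)) {
			if (curr)
				xas_set_err(&xas, -EEXIST);
			else
				xas_store(&xas, entry);
		}
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, gfp));

	return xas_error(&xas);
}

The key point is that the xa_lock is never held across an allocation:
xas_nomem() returns true only when it has allocated memory to satisfy a
previously recorded -ENOMEM, so the loop ends either with a successful
store or with a real error left in xas_error().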