From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 34/62] page cache: Use xarray for adding pages
Date: Wed, 22 Nov 2017 13:07:11 -0800
Message-Id: <20171122210739.29916-35-willy@infradead.org>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20171122210739.29916-1-willy@infradead.org>
References: <20171122210739.29916-1-willy@infradead.org>

From: Matthew Wilcox

Use the XArray APIs to add and replace pages in the page cache.  This
removes two uses of the radix tree preload API and is significantly
shorter code.

Signed-off-by: Matthew Wilcox
---
 include/linux/xarray.h |  13 +++++
 mm/filemap.c           | 131 +++++++++++++++++++++----------------------
 2 files changed, 69 insertions(+), 75 deletions(-)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 8d7cd70beb8e..1648eda4a20d 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -542,6 +542,19 @@ static inline void xas_set_order(struct xa_state *xas, unsigned long index,
 	xas->xa_node = XAS_RESTART;
 }
 
+/**
+ * xas_set_update() - Set up XArray operation state for a callback.
+ * @xas: XArray operation state.
+ * @update: Function to call when updating a node.
+ *
+ * The XArray can notify a caller after it has updated an xa_node.
+ * This is advanced functionality and is only needed by the page cache.
+ */
+static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
+{
+	xas->xa_update = update;
+}
+
 /* Skip over any of these entries when iterating */
 static inline bool xa_iter_skip(void *entry)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index accc350f9544..1d520748789b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,34 +112,6 @@
  *    ->tasklist_lock		(memory_failure, collect_procs_ao)
  */
 
-static int page_cache_tree_insert(struct address_space *mapping,
-				  struct page *page, void **shadowp)
-{
-	struct radix_tree_node *node;
-	void **slot;
-	int error;
-
-	error = __radix_tree_create(&mapping->pages, page->index, 0,
-				    &node, &slot);
-	if (error)
-		return error;
-	if (*slot) {
-		void *p;
-
-		p = radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
-		if (!xa_is_value(p))
-			return -EEXIST;
-
-		mapping->nrexceptional--;
-		if (shadowp)
-			*shadowp = p;
-	}
-	__radix_tree_replace(&mapping->pages, node, slot, page,
-			     workingset_lookup_update(mapping));
-	mapping->nrpages++;
-	return 0;
-}
-
 static void page_cache_tree_delete(struct address_space *mapping,
 				   struct page *page, void *shadow)
 {
@@ -775,51 +747,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  * locked.  This function does not add the new page to the LRU, the
  * caller must do that.
  *
- * The remove + add is atomic.  The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic.  This function cannot fail.
  */
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
-	int error;
+	XA_STATE(xas, old->index);
+	struct address_space *mapping = old->mapping;
+	void (*freepage)(struct page *) = mapping->a_ops->freepage;
+	pgoff_t offset = old->index;
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(old), old);
 	VM_BUG_ON_PAGE(!PageLocked(new), new);
 	VM_BUG_ON_PAGE(new->mapping, new);
 
-	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
-	if (!error) {
-		struct address_space *mapping = old->mapping;
-		void (*freepage)(struct page *);
-		unsigned long flags;
-
-		pgoff_t offset = old->index;
-		freepage = mapping->a_ops->freepage;
-
-		get_page(new);
-		new->mapping = mapping;
-		new->index = offset;
+	get_page(new);
+	new->mapping = mapping;
+	new->index = offset;
 
-		xa_lock_irqsave(&mapping->pages, flags);
-		__delete_from_page_cache(old, NULL);
-		error = page_cache_tree_insert(mapping, new, NULL);
-		BUG_ON(error);
+	xa_lock_irqsave(&mapping->pages, flags);
+	xas_store(&mapping->pages, &xas, new);
 
-		/*
-		 * hugetlb pages do not participate in page cache accounting.
-		 */
-		if (!PageHuge(new))
-			__inc_node_page_state(new, NR_FILE_PAGES);
-		if (PageSwapBacked(new))
-			__inc_node_page_state(new, NR_SHMEM);
-		xa_unlock_irqrestore(&mapping->pages, flags);
-		mem_cgroup_migrate(old, new);
-		radix_tree_preload_end();
-		if (freepage)
-			freepage(old);
-		put_page(old);
-	}
+	old->mapping = NULL;
+	/* hugetlb pages do not participate in page cache accounting. */
+	if (!PageHuge(old))
+		__dec_node_page_state(new, NR_FILE_PAGES);
+	if (!PageHuge(new))
+		__inc_node_page_state(new, NR_FILE_PAGES);
+	if (PageSwapBacked(old))
+		__dec_node_page_state(new, NR_SHMEM);
+	if (PageSwapBacked(new))
+		__inc_node_page_state(new, NR_SHMEM);
+	xa_unlock_irqrestore(&mapping->pages, flags);
+	mem_cgroup_migrate(old, new);
+	if (freepage)
+		freepage(old);
+	put_page(old);
 
-	return error;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
@@ -828,12 +793,15 @@ static int __add_to_page_cache_locked(struct page *page,
 				      pgoff_t offset, gfp_t gfp_mask,
 				      void **shadowp)
 {
+	XA_STATE(xas, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	xas_set_update(&xas, workingset_lookup_update(mapping));
 
 	if (!huge) {
 		error = mem_cgroup_try_charge(page, current->mm,
@@ -842,35 +810,48 @@ static int __add_to_page_cache_locked(struct page *page,
 			return error;
 	}
 
-	error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
-	if (error) {
-		if (!huge)
-			mem_cgroup_cancel_charge(page, memcg, false);
-		return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
+retry:
 	xa_lock_irq(&mapping->pages);
-	error = page_cache_tree_insert(mapping, page, shadowp);
-	radix_tree_preload_end();
-	if (unlikely(error))
-		goto err_insert;
+	old = xas_create(&mapping->pages, &xas);
+	error = xas_error(&xas);
+	if (error) {
+		xa_unlock_irq(&mapping->pages);
+		if (xas_nomem(&xas, gfp_mask & ~__GFP_HIGHMEM))
+			goto retry;
+		goto error;
+	}
+
+	if (xa_is_value(old)) {
+		mapping->nrexceptional--;
+		if (shadowp)
+			*shadowp = old;
+	} else if (old) {
+		goto exist;
+	}
+
+	xas_store(&mapping->pages, &xas, page);
+	mapping->nrpages++;
 
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!huge)
 		__inc_node_page_state(page, NR_FILE_PAGES);
 	xa_unlock_irq(&mapping->pages);
+	xas_destroy(&xas);
 	if (!huge)
 		mem_cgroup_commit_charge(page, memcg, false, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
-err_insert:
+exist:
+	xa_unlock_irq(&mapping->pages);
+	error = -EEXIST;
+error:
+	xas_destroy(&xas);
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	xa_unlock_irq(&mapping->pages);
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
 	put_page(page);
-- 
2.15.0