From: Matthew Wilcox <[email protected]>
This patchset is, I believe, appropriate for merging in 4.17.
It contains the XArray implementation, which will eventually replace the
radix tree, and converts the page cache to use it.
Improvements the XArray has over the radix tree (a brief usage sketch
follows the list):
- The radix tree provides the operations other trees do: 'insert' and
  'delete'. But what most users really want is an automatically resizing
  array, so it makes more sense to give them an API that is like an
  array -- 'load' and 'store'. We still have an 'insert' operation for
  users who really want that semantic.
- Locking is part of the API. This simplifies a lot of users who
formerly had to manage their own locking just for the radix tree.
It also improves code generation as we can now tell RCU that we're
holding a lock and it doesn't need to generate as much fencing code.
The other advantage is that tree nodes can be moved (not yet
implemented).
- GFP flags are now parameters to calls which may need to allocate
memory. The radix tree forced users to decide what the allocation
flags would be at creation time. It's much clearer to specify them
at allocation time.
- Memory is not preloaded; we don't tie up dozens of pages on the
off chance that the slab allocator fails. Instead, we drop the lock,
allocate a new node and retry the operation.
- The XArray provides a cmpxchg operation. The radix tree forces users
to roll their own (and at least four have).
- Iterators take a 'max' parameter. That simplifies many users and
will reduce the amount of iteration done.
- Iteration can proceed backwards. We only have one user for this, but
since it's called as part of the pagefault readahead algorithm, that
seemed worth mentioning.
- RCU-protected pointers are not exposed as part of the API. There are
  some fun bugs in the current codebase where the page cache forgets to
  use rcu_dereference().
- Value entries gain an extra bit compared to radix tree exceptional
entries. That gives us the extra bit we need to put huge page swap
entries in the page cache.
- Some iterators now take a 'filter' argument instead of having
separate iterators for tagged/untagged iterations.
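
A minimal sketch of the normal API, to make the shape of the above
concrete. It is purely illustrative -- struct foo, foo_array and the
helper names are invented here and are not part of the patchset:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/xarray.h>

struct foo;				/* hypothetical object type */

static DEFINE_XARRAY(foo_array);	/* NULL at every index */

/* Store an object; GFP flags are supplied at store time, not init time. */
static int foo_add(unsigned long id, struct foo *foo)
{
	return xa_err(xa_store(&foo_array, id, foo, GFP_KERNEL));
}

/* Look up an object; xa_load() takes the RCU read lock internally. */
static struct foo *foo_get(unsigned long id)
{
	return xa_load(&foo_array, id);
}

/* Replace 'old' with 'new' only if 'old' is still present at 'id'. */
static int foo_replace(unsigned long id, struct foo *old, struct foo *new)
{
	void *curr = xa_cmpxchg(&foo_array, id, old, new, GFP_KERNEL);

	if (xa_is_err(curr))
		return xa_err(curr);
	return curr == old ? 0 : -EBUSY;
}

/* Find the next present object at or after *id, up to and including max. */
static struct foo *foo_find(unsigned long *id, unsigned long max)
{
	return xa_find(&foo_array, id, max, XA_PRESENT);
}
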
This conversion keeps the radix tree and XArray data structures in sync
at all times. That allows us to convert the page cache one function at
a time and should allow for easier bisection.
This conversion improves the page cache in several ways:
- Shorter, easier to read code
- More efficient iterations
- Reduction in size of struct address_space
- Fewer walks from the top of the data structure; the XArray API
encourages staying at the leaf node and conducting operations there.
Matthew Wilcox (61):
radix tree test suite: Check reclaim bit
radix tree: Use bottom four bits of gfp_t for flags
arm64: Turn flush_dcache_mmap_lock into a no-op
unicore32: Turn flush_dcache_mmap_lock into a no-op
Export __set_page_dirty
xfs: Rename xa_ elements to ail_
fscache: Use appropriate radix tree accessors
xarray: Add the xa_lock to the radix_tree_root
page cache: Use xa_lock
xarray: Replace exceptional entries
xarray: Change definition of sibling entries
xarray: Add definition of struct xarray
xarray: Define struct xa_node
xarray: Add documentation
xarray: Add xa_load
xarray: Add xa_get_tag, xa_set_tag and xa_clear_tag
xarray: Add xa_store
xarray: Add xa_cmpxchg and xa_insert
xarray: Add xa_for_each
xarray: Add xa_extract
xarray: Add xa_destroy
xarray: Add xas_next and xas_prev
xarray: Add xas_create_range
xarray: Add MAINTAINERS entry
page cache: Convert hole search to XArray
page cache: Add page_cache_range_empty function
page cache: Add and replace pages using the XArray
page cache: Convert page deletion to XArray
page cache: Convert page cache lookups to XArray
page cache: Convert delete_batch to XArray
page cache: Remove stray radix comment
page cache: Convert filemap_range_has_page to XArray
mm: Convert page-writeback to XArray
mm: Convert workingset to XArray
mm: Convert truncate to XArray
mm: Convert add_to_swap_cache to XArray
mm: Convert delete_from_swap_cache to XArray
mm: Convert __do_page_cache_readahead to XArray
mm: Convert page migration to XArray
mm: Convert huge_memory to XArray
mm: Convert collapse_shmem to XArray
mm: Convert khugepaged_scan_shmem to XArray
pagevec: Use xa_tag_t
shmem: Convert replace to XArray
shmem: Convert shmem_confirm_swap to XArray
shmem: Convert find_swap_entry to XArray
shmem: Convert shmem_tag_pins to XArray
shmem: Convert shmem_wait_for_pins to XArray
shmem: Convert shmem_add_to_page_cache to XArray
shmem: Convert shmem_alloc_hugepage to XArray
shmem: Convert shmem_free_swap to XArray
shmem: Convert shmem_partial_swap_usage to XArray
shmem: Comment fixups
btrfs: Convert page cache to XArray
fs: Convert buffer to XArray
fs: Convert writeback to XArray
nilfs2: Convert to XArray
f2fs: Convert to XArray
lustre: Convert to XArray
dax: Convert to XArray
page cache: Finish XArray conversion
Documentation/cgroup-v1/memory.txt | 2 +-
Documentation/core-api/index.rst | 1 +
Documentation/core-api/xarray.rst | 361 +++++
Documentation/vm/page_migration | 14 +-
MAINTAINERS | 12 +
arch/arm/include/asm/cacheflush.h | 6 +-
arch/arm64/include/asm/cacheflush.h | 6 +-
arch/nios2/include/asm/cacheflush.h | 6 +-
arch/parisc/include/asm/cacheflush.h | 6 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +-
arch/powerpc/include/asm/nohash/64/pgtable.h | 4 +-
arch/unicore32/include/asm/cacheflush.h | 6 +-
drivers/gpu/drm/i915/i915_gem.c | 17 +-
drivers/staging/lustre/lustre/llite/glimpse.c | 12 +-
drivers/staging/lustre/lustre/mdc/mdc_request.c | 16 +-
fs/afs/write.c | 9 +-
fs/btrfs/btrfs_inode.h | 7 +-
fs/btrfs/compression.c | 6 +-
fs/btrfs/extent_io.c | 24 +-
fs/btrfs/inode.c | 70 -
fs/buffer.c | 28 +-
fs/cifs/file.c | 9 +-
fs/dax.c | 458 +++----
fs/ext4/inode.c | 2 +-
fs/f2fs/data.c | 9 +-
fs/f2fs/dir.c | 5 +-
fs/f2fs/gc.c | 2 +-
fs/f2fs/inline.c | 6 +-
fs/f2fs/node.c | 10 +-
fs/fs-writeback.c | 37 +-
fs/fscache/cookie.c | 2 +-
fs/fscache/object.c | 2 +-
fs/gfs2/aops.c | 2 +-
fs/inode.c | 11 +-
fs/nfs/blocklayout/blocklayout.c | 2 +-
fs/nilfs2/btnode.c | 41 +-
fs/nilfs2/page.c | 78 +-
fs/proc/task_mmu.c | 2 +-
fs/xfs/xfs_aops.c | 15 +-
fs/xfs/xfs_buf_item.c | 10 +-
fs/xfs/xfs_dquot.c | 4 +-
fs/xfs/xfs_dquot_item.c | 11 +-
fs/xfs/xfs_inode_item.c | 22 +-
fs/xfs/xfs_log.c | 6 +-
fs/xfs/xfs_log_recover.c | 80 +-
fs/xfs/xfs_trans.c | 18 +-
fs/xfs/xfs_trans_ail.c | 152 +--
fs/xfs/xfs_trans_buf.c | 4 +-
fs/xfs/xfs_trans_priv.h | 42 +-
include/linux/backing-dev.h | 12 +-
include/linux/fs.h | 68 +-
include/linux/idr.h | 22 +-
include/linux/mm.h | 3 +-
include/linux/pagemap.h | 16 +-
include/linux/pagevec.h | 8 +-
include/linux/radix-tree.h | 96 +-
include/linux/swap.h | 22 +-
include/linux/swapops.h | 19 +-
include/linux/xarray.h | 1015 ++++++++++++++
kernel/pid.c | 2 +-
lib/Makefile | 2 +-
lib/idr.c | 67 +-
lib/radix-tree.c | 234 ++--
lib/xarray.c | 1667 +++++++++++++++++++++++
mm/filemap.c | 766 ++++-------
mm/huge_memory.c | 23 +-
mm/khugepaged.c | 182 +--
mm/madvise.c | 2 +-
mm/memcontrol.c | 6 +-
mm/migrate.c | 41 +-
mm/mincore.c | 2 +-
mm/page-writeback.c | 78 +-
mm/readahead.c | 10 +-
mm/rmap.c | 4 +-
mm/shmem.c | 312 ++---
mm/swap.c | 6 +-
mm/swap_state.c | 124 +-
mm/truncate.c | 45 +-
mm/vmscan.c | 14 +-
mm/workingset.c | 89 +-
tools/include/linux/spinlock.h | 12 +-
tools/testing/radix-tree/.gitignore | 2 +
tools/testing/radix-tree/Makefile | 15 +-
tools/testing/radix-tree/idr-test.c | 6 +-
tools/testing/radix-tree/linux.c | 2 +-
tools/testing/radix-tree/linux/bug.h | 1 +
tools/testing/radix-tree/linux/gfp.h | 1 +
tools/testing/radix-tree/linux/kconfig.h | 1 +
tools/testing/radix-tree/linux/kernel.h | 5 +
tools/testing/radix-tree/linux/lockdep.h | 11 +
tools/testing/radix-tree/linux/rcupdate.h | 2 +
tools/testing/radix-tree/linux/xarray.h | 3 +
tools/testing/radix-tree/multiorder.c | 83 +-
tools/testing/radix-tree/regression1.c | 68 +-
tools/testing/radix-tree/test.c | 53 +-
tools/testing/radix-tree/test.h | 6 +
tools/testing/radix-tree/xarray-test.c | 556 ++++++++
97 files changed, 5211 insertions(+), 2232 deletions(-)
create mode 100644 Documentation/core-api/xarray.rst
create mode 100644 include/linux/xarray.h
create mode 100644 lib/xarray.c
create mode 100644 tools/testing/radix-tree/linux/kconfig.h
create mode 100644 tools/testing/radix-tree/linux/lockdep.h
create mode 100644 tools/testing/radix-tree/linux/xarray.h
create mode 100644 tools/testing/radix-tree/xarray-test.c
--
2.16.1
From: Matthew Wilcox <[email protected]>
ARM64 doesn't walk the VMA tree in its flush_dcache_page()
implementation, so has no need to take the tree_lock.
Signed-off-by: Matthew Wilcox <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
---
arch/arm64/include/asm/cacheflush.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index bef9f418f089..550b0abea953 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -137,10 +137,8 @@ static inline void __flush_icache_all(void)
dsb(ish);
}
-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) do { } while (0)
+#define flush_dcache_mmap_unlock(mapping) do { } while (0)
/*
* We don't appear to need to do anything here. In fact, if we did, we'd
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a perfect use for xa_cmpxchg(). Note the use of 0 for GFP
flags; we won't be allocating memory.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 50f068f59c82..a7931660897d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -635,16 +635,13 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
}
/*
- * Remove swap entry from radix tree, free the swap and its page cache.
+ * Remove swap entry from page cache, free the swap and its page cache.
*/
static int shmem_free_swap(struct address_space *mapping,
pgoff_t index, void *radswap)
{
- void *old;
+ void *old = xa_cmpxchg(&mapping->pages, index, radswap, NULL, 0);
- xa_lock_irq(&mapping->pages);
- old = radix_tree_delete_item(&mapping->pages, index, radswap);
- xa_unlock_irq(&mapping->pages);
if (old != radswap)
return -ENOENT;
free_swap_and_cache(radix_to_swp_entry(radswap));
--
2.16.1
From: Matthew Wilcox <[email protected]>
This removes the last caller of radix_tree_maybe_preload_order().
Simpler code, except when we run out of memory for new xa_nodes partway
through inserting entries into the XArray. Hopefully we can support
multi-index entries in the page cache soon, at which point all this
awful code can go away.
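
For reference, the replacement follows the usual xa_state pattern: attempt
the store under the lock and, if the xa_state reports ENOMEM, allocate a
node outside the lock with xas_nomem() and retry. A minimal sketch of that
pattern, with illustrative names not taken from this patch:

#include <linux/xarray.h>

static int example_store(struct xarray *xa, unsigned long index,
			 void *entry, gfp_t gfp)
{
	XA_STATE(xas, xa, index);

	do {
		xas_lock_irq(&xas);
		xas_store(&xas, entry);
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, gfp));

	return xas_error(&xas);
}
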
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 87 ++++++++++++++++++++++++++++----------------------------------
1 file changed, 39 insertions(+), 48 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index ccb6d7ecdee0..347661653803 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -558,9 +558,10 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
*/
static int shmem_add_to_page_cache(struct page *page,
struct address_space *mapping,
- pgoff_t index, void *expected)
+ pgoff_t index, void *expected, gfp_t gfp)
{
- int error, nr = hpage_nr_pages(page);
+ XA_STATE(xas, &mapping->pages, index);
+ unsigned long i, nr = 1UL << compound_order(page);
VM_BUG_ON_PAGE(PageTail(page), page);
VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -569,49 +570,47 @@ static int shmem_add_to_page_cache(struct page *page,
VM_BUG_ON(expected && PageTransHuge(page));
page_ref_add(page, nr);
- page->mapping = mapping;
page->index = index;
+ page->mapping = mapping;
- xa_lock_irq(&mapping->pages);
- if (PageTransHuge(page)) {
- void __rcu **results;
- pgoff_t idx;
- int i;
-
- error = 0;
- if (radix_tree_gang_lookup_slot(&mapping->pages,
- &results, &idx, index, 1) &&
- idx < index + HPAGE_PMD_NR) {
- error = -EEXIST;
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, index + nr - 1);
+ if (xas_error(&xas))
+ goto unlock;
+ for (i = 0; i < nr; i++) {
+ void *entry = xas_load(&xas);
+ if (entry != expected)
+ xas_set_err(&xas, -ENOENT);
+ if (xas_error(&xas))
+ goto undo;
+ xas_store(&xas, page + i);
+ xas_next(&xas);
}
-
- if (!error) {
- for (i = 0; i < HPAGE_PMD_NR; i++) {
- error = radix_tree_insert(&mapping->pages,
- index + i, page + i);
- VM_BUG_ON(error);
- }
+ if (PageTransHuge(page)) {
count_vm_event(THP_FILE_ALLOC);
+ __inc_node_page_state(page, NR_SHMEM_THPS);
}
- } else if (!expected) {
- error = radix_tree_insert(&mapping->pages, index, page);
- } else {
- error = shmem_xa_replace(mapping, index, expected, page);
- }
-
- if (!error) {
mapping->nrpages += nr;
- if (PageTransHuge(page))
- __inc_node_page_state(page, NR_SHMEM_THPS);
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
- xa_unlock_irq(&mapping->pages);
- } else {
+ goto unlock;
+undo:
+ while (i-- > 0) {
+ xas_store(&xas, NULL);
+ xas_prev(&xas);
+ }
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ if (xas_error(&xas)) {
page->mapping = NULL;
- xa_unlock_irq(&mapping->pages);
page_ref_sub(page, nr);
+ return xas_error(&xas);
}
- return error;
+
+ return 0;
}
/*
@@ -1159,7 +1158,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
*/
if (!error)
error = shmem_add_to_page_cache(*pagep, mapping, index,
- radswap);
+ radswap, gfp);
if (error != -ENOMEM) {
/*
* Truncation and eviction use free_swap_and_cache(), which
@@ -1677,7 +1676,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
false);
if (!error) {
error = shmem_add_to_page_cache(page, mapping, index,
- swp_to_radix_entry(swap));
+ swp_to_radix_entry(swap), gfp);
/*
* We already confirmed swap under page lock, and make
* no memory allocation here, so usually no possibility
@@ -1783,13 +1782,8 @@ alloc_nohuge: page = shmem_alloc_and_acct_page(gfp, inode,
PageTransHuge(page));
if (error)
goto unacct;
- error = radix_tree_maybe_preload_order(gfp & GFP_RECLAIM_MASK,
- compound_order(page));
- if (!error) {
- error = shmem_add_to_page_cache(page, mapping, hindex,
- NULL);
- radix_tree_preload_end();
- }
+ error = shmem_add_to_page_cache(page, mapping, hindex,
+ NULL, gfp & GFP_RECLAIM_MASK);
if (error) {
mem_cgroup_cancel_charge(page, memcg,
PageTransHuge(page));
@@ -2256,11 +2250,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
if (ret)
goto out_release;
- ret = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK);
- if (!ret) {
- ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL);
- radix_tree_preload_end();
- }
+ ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+ gfp & GFP_RECLAIM_MASK);
if (ret)
goto out_release_uncharge;
--
2.16.1
From: Matthew Wilcox <[email protected]>
xa_find() is a slightly easier API to use than
radix_tree_gang_lookup_slot() because it contains its own RCU locking.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 347661653803..50f068f59c82 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1413,23 +1413,17 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
struct vm_area_struct pvma;
- struct inode *inode = &info->vfs_inode;
- struct address_space *mapping = inode->i_mapping;
- pgoff_t idx, hindex;
- void __rcu **results;
+ struct address_space *mapping = info->vfs_inode.i_mapping;
+ pgoff_t hindex;
struct page *page;
if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
return NULL;
hindex = round_down(index, HPAGE_PMD_NR);
- rcu_read_lock();
- if (radix_tree_gang_lookup_slot(&mapping->pages, &results, &idx,
- hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
- rcu_read_unlock();
+ if (xa_find(&mapping->pages, &hindex, hindex + HPAGE_PMD_NR - 1,
+ XA_PRESENT))
return NULL;
- }
- rcu_read_unlock();
shmem_pseudo_vma_init(&pvma, info, hindex);
page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
--
2.16.1
From: Matthew Wilcox <[email protected]>
The code is simpler because the XArray takes care of details such as the
iteration limit and dereferencing the slot.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 18 +++---------------
1 file changed, 3 insertions(+), 15 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index a7931660897d..c24c4cb76c43 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -658,29 +658,17 @@ static int shmem_free_swap(struct address_space *mapping,
unsigned long shmem_partial_swap_usage(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
struct page *page;
unsigned long swapped = 0;
rcu_read_lock();
-
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= end)
- break;
-
- page = radix_tree_deref_slot(slot);
-
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
+ xas_for_each(&xas, page, end - 1) {
if (xa_is_value(page))
swapped++;
if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
The DAX code (by its nature) is deeply interwoven with the radix tree
infrastructure, doing operations directly on the radix tree slots.
Convert the whole file to use XArray concepts; mostly passing around
xa_state instead of address_space, index or slot.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 366 +++++++++++++++++++++++++--------------------------------------
1 file changed, 142 insertions(+), 224 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index 61cb25c8b9fd..1967d7d6b907 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -45,6 +45,7 @@
/* The 'colour' (ie low bits) within a PMD of a page offset. */
#define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1)
#define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)
+#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
@@ -74,21 +75,26 @@ fs_initcall(init_dax_wait_table);
#define DAX_ZERO_PAGE (1UL << 2)
#define DAX_EMPTY (1UL << 3)
-static unsigned long dax_radix_sector(void *entry)
+static bool xa_is_dax_locked(void *entry)
+{
+ return xa_to_value(entry) & DAX_ENTRY_LOCK;
+}
+
+static unsigned long xa_to_dax_sector(void *entry)
{
return xa_to_value(entry) >> DAX_SHIFT;
}
-static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
+static void *xa_mk_dax_locked(sector_t sector, unsigned long flags)
{
return xa_mk_value(flags | ((unsigned long)sector << DAX_SHIFT) |
DAX_ENTRY_LOCK);
}
-static unsigned int dax_radix_order(void *entry)
+static unsigned int dax_entry_order(void *entry)
{
if (xa_to_value(entry) & DAX_PMD)
- return PMD_SHIFT - PAGE_SHIFT;
+ return PMD_ORDER;
return 0;
}
@@ -113,10 +119,10 @@ static int dax_is_empty_entry(void *entry)
}
/*
- * DAX radix tree locking
+ * DAX page cache entry locking
*/
struct exceptional_entry_key {
- struct address_space *mapping;
+ struct xarray *xa;
pgoff_t entry_start;
};
@@ -125,9 +131,10 @@ struct wait_exceptional_entry_queue {
struct exceptional_entry_key key;
};
-static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
- pgoff_t index, void *entry, struct exceptional_entry_key *key)
+static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
+ void *entry, struct exceptional_entry_key *key)
{
+ unsigned long index = xas->xa_index;
unsigned long hash;
/*
@@ -138,10 +145,10 @@ static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
if (dax_is_pmd_entry(entry))
index &= ~PG_PMD_COLOUR;
- key->mapping = mapping;
+ key->xa = xas->xa;
key->entry_start = index;
- hash = hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
+ hash = hash_long((unsigned long)xas->xa ^ index, DAX_WAIT_TABLE_BITS);
return wait_table + hash;
}
@@ -152,7 +159,7 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
struct wait_exceptional_entry_queue *ewait =
container_of(wait, struct wait_exceptional_entry_queue, wait);
- if (key->mapping != ewait->key.mapping ||
+ if (key->xa != ewait->key.xa ||
key->entry_start != ewait->key.entry_start)
return 0;
return autoremove_wake_function(wait, mode, sync, NULL);
@@ -163,13 +170,12 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
* The important information it's conveying is whether the entry at
* this index used to be a PMD entry.
*/
-static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
- pgoff_t index, void *entry, bool wake_all)
+static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
{
struct exceptional_entry_key key;
wait_queue_head_t *wq;
- wq = dax_entry_waitqueue(mapping, index, entry, &key);
+ wq = dax_entry_waitqueue(xas, entry, &key);
/*
* Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -182,52 +188,27 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
}
/*
- * Check whether the given slot is locked. Must be called with xa_lock held.
+ * Mark the given entry as locked. Must be called with xa_lock held.
*/
-static inline int slot_locked(struct address_space *mapping, void **slot)
+static inline void *lock_entry(struct xa_state *xas)
{
- unsigned long entry = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
- return entry & DAX_ENTRY_LOCK;
-}
-
-/*
- * Mark the given slot as locked. Must be called with xa_lock held.
- */
-static inline void *lock_slot(struct address_space *mapping, void **slot)
-{
- unsigned long v = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ unsigned long v = xa_to_value(xas_load(xas));
void *entry = xa_mk_value(v | DAX_ENTRY_LOCK);
- radix_tree_replace_slot(&mapping->pages, slot, entry);
- return entry;
-}
-
-/*
- * Mark the given slot as unlocked. Must be called with xa_lock held.
- */
-static inline void *unlock_slot(struct address_space *mapping, void **slot)
-{
- unsigned long v = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
- void *entry = xa_mk_value(v & ~DAX_ENTRY_LOCK);
- radix_tree_replace_slot(&mapping->pages, slot, entry);
+ xas_store(xas, entry);
return entry;
}
/*
- * Lookup entry in radix tree, wait for it to become unlocked if it is
- * a DAX entry and return it. The caller must call
- * put_unlocked_mapping_entry() when he decided not to lock the entry or
- * put_locked_mapping_entry() when he locked the entry and now wants to
- * unlock it.
+ * Lookup entry in page cache, wait for it to become unlocked if it
+ * is a DAX entry and return it. The caller must subsequently call
+ * put_unlocked_entry() if it did not lock the entry or
+ * put_locked_entry() if it did lock the entry.
*
* Must be called with xa_lock held.
*/
-static void *get_unlocked_mapping_entry(struct address_space *mapping,
- pgoff_t index, void ***slotp)
+static void *get_unlocked_entry(struct xa_state *xas)
{
- void *entry, **slot;
+ void *entry;
struct wait_exceptional_entry_queue ewait;
wait_queue_head_t *wq;
@@ -235,67 +216,59 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
ewait.wait.func = wake_exceptional_entry_func;
for (;;) {
- entry = __radix_tree_lookup(&mapping->pages, index, NULL,
- &slot);
- if (!entry ||
- WARN_ON_ONCE(!xa_is_value(entry)) ||
- !slot_locked(mapping, slot)) {
- if (slotp)
- *slotp = slot;
+ entry = xas_load(xas);
+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
+ !xa_is_dax_locked(entry))
return entry;
- }
- wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+ wq = dax_entry_waitqueue(xas, entry, &ewait.key);
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(xas);
schedule();
finish_wait(wq, &ewait.wait);
- xa_lock_irq(&mapping->pages);
+ xas_reset(xas);
+ xas_lock_irq(xas);
}
}
-static void dax_unlock_mapping_entry(struct address_space *mapping,
- pgoff_t index)
+static void put_locked_entry(struct xa_state *xas, void *entry)
{
- void *entry, **slot;
-
- xa_lock_irq(&mapping->pages);
- entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
- if (WARN_ON_ONCE(!entry || !xa_is_value(entry) ||
- !slot_locked(mapping, slot))) {
- xa_unlock_irq(&mapping->pages);
- return;
- }
- unlock_slot(mapping, slot);
- xa_unlock_irq(&mapping->pages);
- dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+ entry = xa_mk_value(xa_to_value(entry) & ~DAX_ENTRY_LOCK);
+ xas_reset(xas);
+ xas_lock_irq(xas);
+ xas_store(xas, entry);
+ xas_unlock_irq(xas);
+ dax_wake_entry(xas, entry, false);
}
-static void put_locked_mapping_entry(struct address_space *mapping,
- pgoff_t index)
+static void dax_unlock_entry(struct address_space *mapping, pgoff_t index)
{
- dax_unlock_mapping_entry(mapping, index);
+ XA_STATE(xas, &mapping->pages, index);
+ void *entry = xas_load(&xas);
+
+ if (WARN_ON_ONCE(!xa_is_value(entry) || !xa_is_dax_locked(entry)))
+ return;
+ put_locked_entry(&xas, entry);
}
/*
- * Called when we are done with radix tree entry we looked up via
- * get_unlocked_mapping_entry() and which we didn't lock in the end.
+ * Called when we are done with page cache entry we looked up via
+ * get_unlocked_entry() and which we didn't lock in the end.
*/
-static void put_unlocked_mapping_entry(struct address_space *mapping,
- pgoff_t index, void *entry)
+static void put_unlocked_entry(struct xa_state *xas, void *entry)
{
if (!entry)
return;
- /* We have to wake up next waiter for the radix tree entry lock */
- dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+ /* We have to wake up next waiter for the page cache entry lock */
+ dax_wake_entry(xas, entry, false);
}
/*
- * Find radix tree entry at given index. If it is a DAX entry, return it
- * with the radix tree entry locked. If the radix tree doesn't contain the
- * given index, create an empty entry for the index and return with it locked.
+ * Find page cache entry at given index. If it is a DAX entry, return it
+ * with the entry locked. If the page cache doesn't contain the given
+ * index, create an empty entry for the index and return with it locked.
*
* When requesting an entry with size DAX_PMD, grab_mapping_entry() will
* either return that locked entry or will return an error. This error will
@@ -320,12 +293,14 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
unsigned long size_flag)
{
+ XA_STATE(xas, &mapping->pages, index);
bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
- void *entry, **slot;
+ void *entry;
+ xas_set_order(&xas, index, size_flag ? PMD_ORDER : 0);
restart:
- xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, &slot);
+ xas_lock_irq(&xas);
+ entry = get_unlocked_entry(&xas);
if (WARN_ON_ONCE(entry && !xa_is_value(entry))) {
entry = ERR_PTR(-EIO);
@@ -335,8 +310,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
if (entry) {
if (size_flag & DAX_PMD) {
if (dax_is_pte_entry(entry)) {
- put_unlocked_mapping_entry(mapping, index,
- entry);
+ put_unlocked_entry(&xas, entry);
entry = ERR_PTR(-EEXIST);
goto out_unlock;
}
@@ -349,123 +323,75 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
}
}
- /* No entry for given index? Make sure radix tree is big enough. */
- if (!entry || pmd_downgrade) {
- int err;
-
- if (pmd_downgrade) {
- /*
- * Make sure 'entry' remains valid while we drop
- * xa_lock.
- */
- entry = lock_slot(mapping, slot);
- }
-
- xa_unlock_irq(&mapping->pages);
+ if (pmd_downgrade) {
+ entry = lock_entry(&xas);
/*
* Besides huge zero pages the only other thing that gets
* downgraded are empty entries which don't need to be
* unmapped.
*/
- if (pmd_downgrade && dax_is_zero_entry(entry))
+ if (dax_is_zero_entry(entry)) {
+ xas_unlock_irq(&xas);
unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
PG_PMD_NR, false);
-
- err = radix_tree_preload(
- mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
- if (err) {
- if (pmd_downgrade)
- put_locked_mapping_entry(mapping, index);
- return ERR_PTR(err);
+ xas_reset(&xas);
+ xas_lock_irq(&xas);
}
- xa_lock_irq(&mapping->pages);
-
- if (!entry) {
- /*
- * We needed to drop the pages lock while calling
- * radix_tree_preload() and we didn't have an entry to
- * lock. See if another thread inserted an entry at
- * our index during this time.
- */
- entry = __radix_tree_lookup(&mapping->pages, index,
- NULL, &slot);
- if (entry) {
- radix_tree_preload_end();
- xa_unlock_irq(&mapping->pages);
- goto restart;
- }
- }
-
- if (pmd_downgrade) {
- radix_tree_delete(&mapping->pages, index);
- mapping->nrexceptional--;
- dax_wake_mapping_entry_waiter(mapping, index, entry,
- true);
- }
-
- entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);
-
- err = __radix_tree_insert(&mapping->pages, index,
- dax_radix_order(entry), entry);
- radix_tree_preload_end();
- if (err) {
- xa_unlock_irq(&mapping->pages);
- /*
- * Our insertion of a DAX entry failed, most likely
- * because we were inserting a PMD entry and it
- * collided with a PTE sized entry at a different
- * index in the PMD range. We haven't inserted
- * anything into the radix tree and have no waiters to
- * wake.
- */
- return ERR_PTR(err);
- }
- /* Good, we have inserted empty locked entry into the tree. */
- mapping->nrexceptional++;
- xa_unlock_irq(&mapping->pages);
- return entry;
+ xas_store(&xas, NULL);
+ mapping->nrexceptional--;
+ dax_wake_entry(&xas, entry, true);
+ }
+ if (!entry || pmd_downgrade) {
+ entry = xa_mk_dax_locked(0, size_flag | DAX_EMPTY);
+ xas_store(&xas, entry);
+ if (!xas_error(&xas))
+ mapping->nrexceptional++;
+ } else {
+ entry = lock_entry(&xas);
}
- entry = lock_slot(mapping, slot);
out_unlock:
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
+ if (xas_nomem(&xas, GFP_NOIO))
+ goto restart;
return entry;
}
-static int __dax_invalidate_mapping_entry(struct address_space *mapping,
+static int __dax_invalidate_entry(struct address_space *mapping,
pgoff_t index, bool trunc)
{
+ XA_STATE(xas, &mapping->pages, index);
int ret = 0;
void *entry;
- struct radix_tree_root *pages = &mapping->pages;
xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, NULL);
+ entry = get_unlocked_entry(&xas);
if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
goto out;
if (!trunc &&
- (radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
- radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE)))
+ (xas_get_tag(&xas, PAGECACHE_TAG_DIRTY) ||
+ xas_get_tag(&xas, PAGECACHE_TAG_TOWRITE)))
goto out;
- radix_tree_delete(pages, index);
+ xas_store(&xas, NULL);
mapping->nrexceptional--;
ret = 1;
out:
- put_unlocked_mapping_entry(mapping, index, entry);
- xa_unlock_irq(&mapping->pages);
+ put_unlocked_entry(&xas, entry);
+ xas_unlock_irq(&xas);
return ret;
}
+
/*
* Delete DAX entry at @index from @mapping. Wait for it
* to be unlocked before deleting it.
*/
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
{
- int ret = __dax_invalidate_mapping_entry(mapping, index, true);
+ int ret = __dax_invalidate_entry(mapping, index, true);
/*
* This gets called from truncate / punch_hole path. As such, the caller
* must hold locks protecting against concurrent modifications of the
- * radix tree (usually fs-private i_mmap_sem for writing). Since the
+ * page cache (usually fs-private i_mmap_sem for writing). Since the
* caller has seen a DAX entry for this index, we better find it
* at that index as well...
*/
@@ -479,7 +405,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
pgoff_t index)
{
- return __dax_invalidate_mapping_entry(mapping, index, false);
+ return __dax_invalidate_entry(mapping, index, false);
}
static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
@@ -516,14 +442,14 @@ static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
* already in the tree, we will skip the insertion and just dirty the PMD as
* appropriate.
*/
-static void *dax_insert_mapping_entry(struct address_space *mapping,
+static void *dax_insert_entry(struct address_space *mapping,
struct vm_fault *vmf,
void *entry, sector_t sector,
unsigned long flags, bool dirty)
{
- struct radix_tree_root *pages = &mapping->pages;
void *new_entry;
pgoff_t index = vmf->pgoff;
+ XA_STATE(xas, &mapping->pages, index);
if (dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -537,33 +463,27 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
unmap_mapping_pages(mapping, vmf->pgoff, 1, false);
}
- xa_lock_irq(&mapping->pages);
- new_entry = dax_radix_locked_entry(sector, flags);
+ xas_lock_irq(&xas);
+ new_entry = xa_mk_dax_locked(sector, flags);
if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
/*
- * Only swap our new entry into the radix tree if the current
+ * Only swap our new entry into the page cache if the current
* entry is a zero page or an empty entry. If a normal PTE or
* PMD entry is already in the tree, we leave it alone. This
* means that if we are trying to insert a PTE and the
* existing entry is a PMD, we will just leave the PMD in the
* tree and dirty it if necessary.
*/
- struct radix_tree_node *node;
- void **slot;
- void *ret;
-
- ret = __radix_tree_lookup(pages, index, &node, &slot);
- WARN_ON_ONCE(ret != entry);
- __radix_tree_replace(pages, node, slot,
- new_entry, NULL);
+ void *prev = xas_store(&xas, new_entry);
+ WARN_ON_ONCE(prev != entry);
entry = new_entry;
}
if (dirty)
- radix_tree_tag_set(pages, index, PAGECACHE_TAG_DIRTY);
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return entry;
}
@@ -578,7 +498,7 @@ pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
}
/* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_mapping_entry_mkclean(struct address_space *mapping,
+static void dax_entry_mkclean(struct address_space *mapping,
pgoff_t index, unsigned long pfn)
{
struct vm_area_struct *vma;
@@ -653,8 +573,8 @@ static int dax_writeback_one(struct block_device *bdev,
struct dax_device *dax_dev, struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_root *pages = &mapping->pages;
- void *entry2, **slot, *kaddr;
+ XA_STATE(xas, &mapping->pages, index);
+ void *entry2, *kaddr;
long ret = 0, id;
sector_t sector;
pgoff_t pgoff;
@@ -668,8 +588,8 @@ static int dax_writeback_one(struct block_device *bdev,
if (WARN_ON(!xa_is_value(entry)))
return -EIO;
- xa_lock_irq(&mapping->pages);
- entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
+ xas_lock_irq(&xas);
+ entry2 = get_unlocked_entry(&xas);
/* Entry got punched out / reallocated? */
if (!entry2 || WARN_ON_ONCE(!xa_is_value(entry2)))
goto put_unlocked;
@@ -678,7 +598,7 @@ static int dax_writeback_one(struct block_device *bdev,
* compare sectors as we must not bail out due to difference in lockbit
* or entry type.
*/
- if (dax_radix_sector(entry2) != dax_radix_sector(entry))
+ if (xa_to_dax_sector(entry2) != xa_to_dax_sector(entry))
goto put_unlocked;
if (WARN_ON_ONCE(dax_is_empty_entry(entry) ||
dax_is_zero_entry(entry))) {
@@ -687,10 +607,10 @@ static int dax_writeback_one(struct block_device *bdev,
}
/* Another fsync thread may have already written back this entry */
- if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
+ if (!xas_get_tag(&xas, PAGECACHE_TAG_TOWRITE))
goto put_unlocked;
/* Lock the entry to serialize with page faults */
- entry = lock_slot(mapping, slot);
+ entry = lock_entry(&xas);
/*
* We can clear the tag now but we have to be careful so that concurrent
* dax_writeback_one() calls for the same index cannot finish before we
@@ -698,8 +618,8 @@ static int dax_writeback_one(struct block_device *bdev,
* at the entry only under xa_lock and once they do that they will
* see the entry locked and wait for it to unlock.
*/
- radix_tree_tag_clear(pages, index, PAGECACHE_TAG_TOWRITE);
- xa_unlock_irq(&mapping->pages);
+ xas_clear_tag(&xas, PAGECACHE_TAG_TOWRITE);
+ xas_unlock_irq(&xas);
/*
* Even if dax_writeback_mapping_range() was given a wbc->range_start
@@ -708,8 +628,8 @@ static int dax_writeback_one(struct block_device *bdev,
* 'entry'. This allows us to flush for PMD_SIZE and not have to
* worry about partial PMD writebacks.
*/
- sector = dax_radix_sector(entry);
- size = PAGE_SIZE << dax_radix_order(entry);
+ sector = xa_to_dax_sector(entry);
+ size = PAGE_SIZE << dax_entry_order(entry);
id = dax_read_lock();
ret = bdev_dax_pgoff(bdev, sector, size, &pgoff);
@@ -729,7 +649,7 @@ static int dax_writeback_one(struct block_device *bdev,
goto dax_unlock;
}
- dax_mapping_entry_mkclean(mapping, index, pfn_t_to_pfn(pfn));
+ dax_entry_mkclean(mapping, index, pfn_t_to_pfn(pfn));
dax_flush(dax_dev, kaddr, size);
/*
* After we have flushed the cache, we can clear the dirty tag. There
@@ -737,18 +657,16 @@ static int dax_writeback_one(struct block_device *bdev,
* the pfn mappings are writeprotected and fault waits for mapping
* entry lock.
*/
- xa_lock_irq(&mapping->pages);
- radix_tree_tag_clear(pages, index, PAGECACHE_TAG_DIRTY);
- xa_unlock_irq(&mapping->pages);
+ xa_clear_tag(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
dax_unlock:
dax_read_unlock(id);
- put_locked_mapping_entry(mapping, index);
+ put_locked_entry(&xas, entry2);
return ret;
put_unlocked:
- put_unlocked_mapping_entry(mapping, index, entry2);
- xa_unlock_irq(&mapping->pages);
+ put_unlocked_entry(&xas, entry2);
+ xas_unlock_irq(&xas);
return ret;
}
@@ -876,7 +794,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
goto out;
}
- entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
+ entry2 = dax_insert_entry(mapping, vmf, entry, 0,
DAX_ZERO_PAGE, false);
if (IS_ERR(entry2)) {
ret = VM_FAULT_SIGBUS;
@@ -1192,7 +1110,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
if (error < 0)
goto error_finish_iomap;
- entry = dax_insert_mapping_entry(mapping, vmf, entry,
+ entry = dax_insert_entry(mapping, vmf, entry,
dax_iomap_sector(&iomap, pos),
0, write && !sync);
if (IS_ERR(entry)) {
@@ -1255,7 +1173,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
ops->iomap_end(inode, pos, PAGE_SIZE, copied, flags, &iomap);
}
unlock_entry:
- put_locked_mapping_entry(mapping, vmf->pgoff);
+ dax_unlock_entry(mapping, vmf->pgoff);
out:
trace_dax_pte_fault_done(inode, vmf, vmf_ret);
return vmf_ret;
@@ -1278,7 +1196,7 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
if (unlikely(!zero_page))
goto fallback;
- ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
+ ret = dax_insert_entry(mapping, vmf, entry, 0,
DAX_PMD | DAX_ZERO_PAGE, false);
if (IS_ERR(ret))
goto fallback;
@@ -1333,7 +1251,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* Make sure that the faulting address's PMD offset (color) matches
* the PMD offset from the start of the file. This is necessary so
* that a PMD range in the page table overlaps exactly with a PMD
- * range in the radix tree.
+ * range in the page cache.
*/
if ((vmf->pgoff & PG_PMD_COLOUR) !=
((vmf->address >> PAGE_SHIFT) & PG_PMD_COLOUR))
@@ -1401,7 +1319,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
if (error < 0)
goto finish_iomap;
- entry = dax_insert_mapping_entry(mapping, vmf, entry,
+ entry = dax_insert_entry(mapping, vmf, entry,
dax_iomap_sector(&iomap, pos),
DAX_PMD, write && !sync);
if (IS_ERR(entry))
@@ -1452,7 +1370,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
&iomap);
}
unlock_entry:
- put_locked_mapping_entry(mapping, pgoff);
+ dax_unlock_entry(mapping, pgoff);
fallback:
if (result == VM_FAULT_FALLBACK) {
split_huge_pmd(vma, vmf->pmd, vmf->address);
@@ -1503,34 +1421,34 @@ EXPORT_SYMBOL_GPL(dax_iomap_fault);
* @pe_size: Size of entry to be inserted
* @pfn: PFN to insert
*
- * This function inserts writeable PTE or PMD entry into page tables for mmaped
- * DAX file. It takes care of marking corresponding radix tree entry as dirty
- * as well.
+ * This function inserts a writeable PTE or PMD entry into the page tables
+ * for an mmaped DAX file. It also marks the page cache entry as dirty.
*/
static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
enum page_entry_size pe_size,
pfn_t pfn)
{
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
- void *entry, **slot;
pgoff_t index = vmf->pgoff;
+ XA_STATE(xas, &mapping->pages, index);
+ void *entry;
int vmf_ret, error;
- xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, &slot);
+ xas_lock_irq(&xas);
+ entry = get_unlocked_entry(&xas);
/* Did we race with someone splitting entry or so? */
if (!entry ||
(pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
(pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
- put_unlocked_mapping_entry(mapping, index, entry);
- xa_unlock_irq(&mapping->pages);
+ put_unlocked_entry(&xas, entry);
+ xas_unlock_irq(&xas);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
return VM_FAULT_NOPAGE;
}
- radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
- entry = lock_slot(mapping, slot);
- xa_unlock_irq(&mapping->pages);
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);
+ entry = lock_entry(&xas);
+ xas_unlock_irq(&xas);
switch (pe_size) {
case PE_SIZE_PTE:
error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
@@ -1545,7 +1463,7 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
default:
vmf_ret = VM_FAULT_FALLBACK;
}
- put_locked_mapping_entry(mapping, index);
+ put_locked_entry(&xas, entry);
trace_dax_insert_pfn_mkwrite(mapping->host, vmf, vmf_ret);
return vmf_ret;
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
With no more radix tree API users left, we can drop the GFP flags
and use xa_init_flags() instead of INIT_RADIX_TREE().
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/inode.c | 2 +-
include/linux/fs.h | 2 +-
mm/swap_state.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/inode.c b/fs/inode.c
index 07e26909e24d..c1e8d61c9925 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -349,7 +349,7 @@ EXPORT_SYMBOL(inc_nlink);
void address_space_init_once(struct address_space *mapping)
{
memset(mapping, 0, sizeof(*mapping));
- INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);
+ xa_init_flags(&mapping->pages, XA_FLAGS_LOCK_IRQ);
init_rwsem(&mapping->i_mmap_rwsem);
INIT_LIST_HEAD(&mapping->private_list);
spin_lock_init(&mapping->private_lock);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index b134f80ca498..518167152254 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -410,7 +410,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
*/
struct address_space {
struct inode *host;
- struct radix_tree_root pages;
+ struct xarray pages;
gfp_t gfp_mask;
atomic_t i_mmap_writable;
struct rb_root_cached i_mmap;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 219e3b4f09e6..25f027d0bb00 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -573,7 +573,7 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
return -ENOMEM;
for (i = 0; i < nr; i++) {
space = spaces + i;
- INIT_RADIX_TREE(&space->pages, GFP_ATOMIC|__GFP_NOWARN);
+ xa_init_flags(&space->pages, XA_FLAGS_LOCK_IRQ);
atomic_set(&space->i_mmap_writable, 0);
space->a_ops = &swap_aops;
/* swap cache doesn't use writeback related tags */
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a straightforward conversion.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/data.c | 3 +--
fs/f2fs/dir.c | 5 +----
fs/f2fs/inline.c | 6 +-----
fs/f2fs/node.c | 10 ++--------
4 files changed, 5 insertions(+), 19 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index ce029060acd0..6de3d82377e4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2384,8 +2384,7 @@ void f2fs_set_page_dirty_nobuffers(struct page *page)
xa_lock_irqsave(&mapping->pages, flags);
WARN_ON_ONCE(!PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages,
- page_index(page), PAGECACHE_TAG_DIRTY);
+ __xa_set_tag(&mapping->pages, page_index(page), PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 0fd9695eddf6..ab833f624cc2 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -708,7 +708,6 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
unsigned int bit_pos;
int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
struct address_space *mapping = page_mapping(page);
- unsigned long flags;
int i;
f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
@@ -741,10 +740,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
if (bit_pos == NR_DENTRY_IN_BLOCK &&
!truncate_hole(dir, page->index, page->index + 1)) {
- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);
clear_page_dirty_for_io(page);
ClearPagePrivate(page);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 7858b8e15f33..d3c3f84beca9 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -204,7 +204,6 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
void *src_addr, *dst_addr;
struct dnode_of_data dn;
struct address_space *mapping = page_mapping(page);
- unsigned long flags;
int err;
set_new_dnode(&dn, inode, NULL, NULL, 0);
@@ -226,10 +225,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
kunmap_atomic(src_addr);
set_page_dirty(dn.inode_page);
- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages, page_index(page),
- PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);
+ xa_clear_tag(&mapping->pages, page_index(page), PAGECACHE_TAG_DIRTY);
set_inode_flag(inode, FI_APPEND_WRITE);
set_inode_flag(inode, FI_DATA_EXIST);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index fba2644abdf0..0c1e9add0952 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -88,14 +88,10 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
static void clear_node_page_dirty(struct page *page)
{
struct address_space *mapping = page->mapping;
- unsigned int long flags;
if (PageDirty(page)) {
- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages,
- page_index(page),
+ xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);
clear_page_dirty_for_io(page);
dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
@@ -1139,9 +1135,7 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
return;
f2fs_bug_on(sbi, check_nid_range(sbi, nid));
- rcu_read_lock();
- apage = radix_tree_lookup(&NODE_MAPPING(sbi)->pages, nid);
- rcu_read_unlock();
+ apage = xa_load(&NODE_MAPPING(sbi)->pages, nid);
if (apage)
return;
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is documentation on how to use the XArray, not details about its
internal implementation.
Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/core-api/index.rst | 1 +
Documentation/core-api/xarray.rst | 361 ++++++++++++++++++++++++++++++++++++++
2 files changed, 362 insertions(+)
create mode 100644 Documentation/core-api/xarray.rst
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index c670a8031786..e4e15f0f608b 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -20,6 +20,7 @@ Core utilities
local_ops
workqueue
genericirq
+ xarray
flexible-arrays
librs
genalloc
diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
new file mode 100644
index 000000000000..914999c0bf3f
--- /dev/null
+++ b/Documentation/core-api/xarray.rst
@@ -0,0 +1,361 @@
+.. SPDX-License-Identifier: CC-BY-SA-4.0
+
+======
+XArray
+======
+
+:Author: Matthew Wilcox
+
+Overview
+========
+
+The XArray is an abstract data type which behaves like a very large array
+of pointers. It meets many of the same needs as a hash or a conventional
+resizable array. Unlike a hash, it allows you to sensibly go to the
+next or previous entry in a cache-efficient manner. In contrast to
+a resizable array, there is no need for copying data or changing MMU
+mappings in order to grow the array. It is more memory-efficient,
+parallelisable and cache friendly than a doubly-linked list. It takes
+advantage of RCU to perform lookups without locking.
+
+The XArray implementation is efficient when the indices used are densely
+clustered; hashing the object and using the hash as the index will not
+perform well. The XArray is optimised for small indices, but still has
+good performance with large indices. If your index can be larger than
+``ULONG_MAX`` then the XArray is not the data type for you. The most
+important user of the XArray is the page cache.
+
+A freshly-initialised XArray contains a ``NULL`` pointer at every index.
+Each non-``NULL`` entry in the array has three bits associated with it
+called tags. Each tag may be set or cleared independently of the others.
+You can iterate over entries which are tagged.
+
+Normal pointers may be stored in the XArray directly. They must be 4-byte
+aligned, which is true for any pointer returned from :c:func:`kmalloc` and
+:c:func:`alloc_page`. It isn't true for arbitrary user-space pointers,
+nor for function pointers. You can store pointers to statically allocated
+objects, as long as those objects have an alignment of at least 4.
+
+You can also store integers between 0 and ``LONG_MAX`` in the XArray.
+You must first convert them into entries using :c:func:`xa_mk_value`.
+When you retrieve an entry from the XArray, you can check whether it is
+a value entry by calling :c:func:`xa_is_value`, and convert it back to
+an integer by calling :c:func:`xa_to_value`.
+
+The XArray does not support storing :c:func:`IS_ERR` pointers as some
+conflict with value entries or internal entries.
+
+An unusual feature of the XArray is the ability to create entries which
+occupy a range of indices. Once stored to, looking up any index in
+the range will return the same entry as looking up any other index in
+the range. Setting a tag on one index will set it on all of them.
+Storing to any index will store to all of them. Multi-index entries can
+be explicitly split into smaller entries, or storing ``NULL`` into any
+entry will cause the XArray to forget about the range.
+
+Normal API
+==========
+
+Start by initialising an XArray, either with :c:func:`DEFINE_XARRAY`
+for statically allocated XArrays or :c:func:`xa_init` for dynamically
+allocated ones.
+
+You can then set entries using :c:func:`xa_store` and get entries
+using :c:func:`xa_load`. xa_store will overwrite any entry with the
+new entry and return the previous entry stored at that index. You can
+use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
+%NULL entry. There is no difference between an entry that has never
+been stored to and one that has most recently had ``NULL`` stored to it.
+
+You can conditionally replace an entry at an index by using
+:c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
+the entry at that index has the 'old' value. It also returns the entry
+which was at that index; if it returns the same entry which was passed as
+'old', then :c:func:`xa_cmpxchg` succeeded.
+
+If you want to only store a new entry to an index if the current entry
+at that index is ``NULL``, you can use :c:func:`xa_insert` which
+returns ``-EEXIST`` if the entry is not empty.
+
+Calling :c:func:`xa_reserve` ensures that there is enough memory allocated
+to store an entry at the specified index. This is not normally needed,
+but some users have a complicated locking scheme.
+
+You can enquire whether a tag is set on an entry by using
+:c:func:`xa_get_tag`. If the entry is not ``NULL``, you can set a tag
+on it by using :c:func:`xa_set_tag` and remove the tag from an entry by
+calling :c:func:`xa_clear_tag`. You can ask whether any entry in the
+XArray has a particular tag set by calling :c:func:`xa_tagged`.
+
+You can copy entries out of the XArray into a plain array by calling
+:c:func:`xa_extract`. Or you can iterate over the present entries in
+the XArray by calling :c:func:`xa_for_each`. You may prefer to use
+:c:func:`xa_find` or :c:func:`xa_find_after` to move to the next present
+entry in the XArray.
+
+Finally, you can remove all entries from an XArray by calling
+:c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
+to free the entries first. You can do this by iterating over all present
+entries in the XArray using the :c:func:`xa_for_each` iterator.
+
+Memory allocation
+-----------------
+
+The :c:func:`xa_store`, :c:func:`xa_cmpxchg`, :c:func:`xa_reserve`
+and :c:func:`xa_insert` functions take a gfp_t parameter in case
+the XArray needs to allocate memory to store this entry. If the entry
+being stored is ``NULL``, no memory allocation needs to be performed,
+and the GFP flags specified will be ignored.
+
+It is possible for no memory to be allocatable, particularly if you pass
+a restrictive set of GFP flags. In that case, the functions return a
+special value which can be turned into an errno using :c:func:`xa_err`.
+If you don't need to know exactly which error occurred, using
+:c:func:`xa_is_err` is slightly more efficient.
+
+Locking
+-------
+
+When using the Normal API, you do not have to worry about locking.
+The XArray uses RCU and an internal spinlock to synchronise access:
+
+No lock needed:
+ * :c:func:`xa_empty`
+ * :c:func:`xa_tagged`
+
+Takes RCU read lock:
+ * :c:func:`xa_load`
+ * :c:func:`xa_for_each`
+ * :c:func:`xa_find`
+ * :c:func:`xa_find_after`
+ * :c:func:`xa_extract`
+ * :c:func:`xa_get_tag`
+
+Takes xa_lock internally:
+ * :c:func:`xa_store`
+ * :c:func:`xa_insert`
+ * :c:func:`xa_erase`
+ * :c:func:`xa_cmpxchg`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_destroy`
+ * :c:func:`xa_set_tag`
+ * :c:func:`xa_clear_tag`
+
+Assumes xa_lock held on entry:
+ * :c:func:`__xa_store`
+ * :c:func:`__xa_insert`
+ * :c:func:`__xa_erase`
+ * :c:func:`__xa_cmpxchg`
+ * :c:func:`__xa_set_tag`
+ * :c:func:`__xa_clear_tag`
+
+If you want to take advantage of the lock to protect the data structures
+that you are storing in the XArray, you can call :c:func:`xa_lock`
+before calling :c:func:`xa_load`, then take a reference count on the
+object you have found before calling :c:func:`xa_unlock`. This will
+prevent stores from removing the object from the array between looking
+up the object and incrementing the refcount. You can also use RCU to
+avoid dereferencing freed memory, but an explanation of that is beyond
+the scope of this document.
+
+The XArray does not disable interrupts or softirqs while modifying
+the array. It is safe to read the XArray from interrupt or softirq
+context as the RCU lock provides enough protection.
+
+If, for example, you want to store entries in the XArray in process
+context and then erase them in softirq context, you can do that this way::
+
+ foo_init(struct foo *foo)
+ {
+ xa_init_flags(&foo->array, XA_FLAGS_LOCK_BH);
+ }
+
+ foo_store(struct foo *foo, unsigned long index, void *entry)
+ {
+ xa_lock_bh(&foo->array);
+ __xa_store(&foo->array, index, entry, GFP_KERNEL);
+ foo->count++;
+ xa_unlock_bh(&foo->array);
+ }
+
+ /* foo_erase() is only called from softirq context */
+ foo_erase(struct foo *foo, unsigned long index)
+ {
+ xa_erase(&foo->array, index);
+ }
+
+If you are going to modify the XArray from interrupt or softirq context,
+you need to initialise the array using :c:func:`xa_init_flags`, passing
+``XA_FLAGS_LOCK_IRQ`` or ``XA_FLAGS_LOCK_BH``.
+
+The above example also shows a common pattern of wanting to extend the
+coverage of the xa_lock on the store side to protect some statistics
+associated with the array.
+
+Sharing the XArray with interrupt context is also possible, either
+using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
+context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
+in the interrupt handler.
+
+Sometimes you need to protect access to the XArray with a mutex because
+that lock sits above another mutex in the locking hierarchy. That does
+not entitle you to use functions like :c:func:`__xa_erase` without taking
+the xa_lock; the xa_lock is used for lockdep validation and will be used
+for other purposes in the future.
+
+The :c:func:`__xa_set_tag` and :c:func:`__xa_clear_tag` functions are also
+available for situations where you look up an entry and want to atomically
+set or clear a tag. It may be more efficient to use the advanced API
+in this case, as it will save you from walking the tree twice.
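+
+As an illustration, a sketch of such a lookup-then-tag sequence
+(``needs_writeout()`` is a hypothetical predicate and the choice of
+``XA_TAG_0`` is arbitrary)::
+
+    void foo_tag_if_needed(struct foo *foo, unsigned long index)
+    {
+        void *entry;
+
+        xa_lock(&foo->array);
+        entry = xa_load(&foo->array, index);
+        if (entry && needs_writeout(entry))
+            __xa_set_tag(&foo->array, index, XA_TAG_0);
+        xa_unlock(&foo->array);
+    }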
+
+Advanced API
+============
+
+The advanced API offers more flexibility and better performance at the
+cost of an interface which can be harder to use and has fewer safeguards.
+No locking is done for you by the advanced API, and you are required
+to use the xa_lock while modifying the array. You can choose whether
+to use the xa_lock or the RCU lock while doing read-only operations on
+the array. You can mix advanced and normal operations on the same array;
+indeed the normal API is implemented in terms of the advanced API. The
+advanced API is only available to modules with a GPL-compatible license.
+
+The advanced API is based around the xa_state. This is an opaque data
+structure which you declare on the stack using the :c:func:`XA_STATE`
+macro. This macro initialises the xa_state ready to start walking
+around the XArray. It is used as a cursor to maintain the position
+in the XArray and let you compose various operations together without
+having to restart from the top every time.
+
+The xa_state is also used to store errors. You can call
+:c:func:`xas_error` to retrieve the error. All operations check whether
+the xa_state is in an error state before proceeding, so there's no need
+for you to check for an error after each call; you can make multiple
+calls in succession and only check at a convenient point. The only
+errors currently generated by the xarray code itself are %ENOMEM and
+%EINVAL, but it supports arbitrary errors in case you want to call
+:c:func:`xas_set_err` yourself.
+
+If the xa_state is holding an %ENOMEM error, calling :c:func:`xas_nomem`
+will attempt to allocate more memory using the specified gfp flags and
+cache it in the xa_state for the next attempt. The idea is that you take
+the xa_lock, attempt the operation and drop the lock. The operation
+attempts to allocate memory while holding the lock, but it is more
+likely to fail. Once you have dropped the lock, :c:func:`xas_nomem`
+can try harder to allocate more memory. It will return ``true`` if it
+is worth retrying the operation (i.e. there was a memory error *and*
+more memory was allocated). If it has previously allocated memory which
+was not used, and there is no error (or the error is not %ENOMEM), then
+it will free that previously allocated memory.
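+
+As an illustration, that loop usually takes the following shape
+(``struct foo`` and ``foo_set()`` are hypothetical)::
+
+    int foo_set(struct foo *foo, unsigned long index, void *entry)
+    {
+        XA_STATE(xas, &foo->array, index);
+
+        do {
+            xas_lock(&xas);
+            xas_store(&xas, entry);
+            xas_unlock(&xas);
+        } while (xas_nomem(&xas, GFP_KERNEL));
+
+        return xas_error(&xas);
+    }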
+
+Internal Entries
+----------------
+
+The XArray reserves some entries for its own purposes. These are never
+exposed through the normal API, but when using the advanced API, it's
+possible to see them. Usually the best way to handle them is to pass them
+to :c:func:`xas_retry`, and retry the operation if it returns ``true``.
+
+.. flat-table::
+ :widths: 1 1 6
+
+ * - Name
+ - Test
+ - Usage
+
+ * - Node
+ - :c:func:`xa_is_node`
+ - An XArray node. Should never be visible; all functions should recurse
+ into an XArray node.
+
+ * - Sibling
+ - :c:func:`xa_is_sibling`
+ - A non-canonical entry for a multi-index entry. The value indicates
+ which slot in this node has the canonical entry.
+
+ * - Retry
+ - :c:func:`xa_is_retry`
+ - This entry is currently being modified by a thread which has the
+ xa_lock. The node containing this entry may be freed at the end of
+ this RCU period. You should restart the lookup from the head of the
+ array.
+
+Other internal entries may be added in the future. As far as possible, they
+will be handled by :c:func:`xas_retry`.
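+
+As an illustration, a lookup under the RCU lock which handles internal
+entries in the recommended way (``struct foo`` and ``foo_lookup()`` are
+hypothetical)::
+
+    void *foo_lookup(struct foo *foo, unsigned long index)
+    {
+        XA_STATE(xas, &foo->array, index);
+        void *entry;
+
+        rcu_read_lock();
+        do {
+            entry = xas_load(&xas);
+        } while (xas_retry(&xas, entry));
+        rcu_read_unlock();
+
+        return entry;
+    }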
+
+Additional functionality
+------------------------
+
+The :c:func:`xas_create` function ensures that there is somewhere in the
+XArray to store an entry. It will store %ENOMEM in the xa_state if it
+cannot allocate memory. You do not normally need to call this function
+yourself as it is called by :c:func:`xas_store`.
+
+You can use :c:func:`xas_init_tags` to reset the tags on an entry
+to their default state. This is usually all tags clear, unless the
+XArray is marked with ``XA_FLAGS_TRACK_FREE``, in which case tag 0 is set
+and all other tags are clear. Replacing one entry with another using
+:c:func:`xas_store` will not reset the tags on that entry; if you want
+the tags reset, you should do that explicitly.
+
+The :c:func:`xas_load` function will walk the xa_state as close to the entry
+as it can. If you know the xa_state has already been walked to the
+entry and need to check that the entry hasn't changed, you can use
+:c:func:`xas_reload` to save a function call.
+
+If you need to move to a different index in the XArray, call
+:c:func:`xas_set`. This reinitialises the cursor, which will generally
+have the effect of making the next operation walk the cursor to the
+desired spot in the tree. If you want to move to the next or previous
+index, call :c:func:`xas_next` or :c:func:`xas_prev`. Setting the index
+does not walk the cursor around the array so does not require a lock to
+be held, while moving to the next or previous index does.
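+
+As an illustration (``struct foo`` is hypothetical and the caller is
+assumed to hold the appropriate lock)::
+
+    void *foo_peek_around(struct foo *foo)
+    {
+        XA_STATE(xas, &foo->array, 0);
+        void *entry;
+
+        xas_set(&xas, 128);        /* repositions the cursor; no walk yet */
+        entry = xas_load(&xas);    /* walks the cursor to index 128 */
+        entry = xas_next(&xas);    /* moves the cursor to index 129 */
+        entry = xas_prev(&xas);    /* moves the cursor back to index 128 */
+        return entry;
+    }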
+
+You can create a multi-index entry by using :c:func:`xas_set_order`.
+If a load or find operation finds a multi-index entry, the index in the
+xa_state will be the one searched for, and not necessarily the
+lowest or highest index used by the entry.
+Currently the only multi-index entries supported are powers
+of two, but there are two potential users of arbitrary ranges, so that
+functionality may be added soon.
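+
+As an illustration only, a sketch of storing an order-9 entry covering
+indices 512-1023 (``struct foo`` and ``entry`` are hypothetical, and the
+memory allocation retry shown earlier is omitted for brevity)::
+
+    void foo_store_huge(struct foo *foo, void *entry)
+    {
+        XA_STATE(xas, &foo->array, 512);
+
+        xas_set_order(&xas, 512, 9);
+        xas_lock(&xas);
+        xas_store(&xas, entry);
+        xas_unlock(&xas);
+    }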
+
+You can search for the next present entry using :c:func:`xas_find`. This
+is the equivalent of both :c:func:`xa_find` and :c:func:`xa_find_after`;
+if the cursor has been walked to an entry, then it will find the next
+entry after the one currently referenced. If not, it will return the
+entry at the index of the xa_state. Using :c:func:`xas_next_entry` to
+move to the next present entry instead of :c:func:`xas_find` will save
+a function call in the majority of cases at the expense of emitting more
+inline code.
+
+The :c:func:`xas_find_tag` function is similar. If the xa_state has
+already been walked, it returns the first tagged entry after the one
+currently referenced; if not, it returns the entry at the index of the
+xa_state, provided that entry is tagged. The :c:func:`xas_next_tag`
+function is the equivalent of :c:func:`xas_next_entry`.
+
+When iterating over a range of the XArray using :c:func:`xas_for_each`
+or :c:func:`xas_for_each_tag`, it may be necessary to temporarily stop
+the iteration. The :c:func:`xas_pause` function exists for this purpose.
+After you have done the necessary work and wish to resume, the xa_state
+is in an appropriate state to continue the iteration after the entry
+you last processed. If you have interrupts disabled while iterating,
+then it is good manners to pause the iteration and reenable interrupts
+every ``XA_CHECK_SCHED`` entries.
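+
+As an illustration, the shape of such a loop with interrupts disabled
+(``struct foo`` and ``process()`` are hypothetical)::
+
+    void foo_process_all(struct foo *foo)
+    {
+        XA_STATE(xas, &foo->array, 0);
+        void *entry;
+        unsigned int processed = 0;
+
+        xas_lock_irq(&xas);
+        xas_for_each(&xas, entry, ULONG_MAX) {
+            process(entry);
+            if (++processed % XA_CHECK_SCHED)
+                continue;
+            xas_pause(&xas);
+            xas_unlock_irq(&xas);
+            cond_resched();
+            xas_lock_irq(&xas);
+        }
+        xas_unlock_irq(&xas);
+    }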
+
+The :c:func:`xas_get_tag`, :c:func:`xas_set_tag` and
+:c:func:`xas_clear_tag` functions require the xa_state cursor to have
+been moved to the appropriate location in the xarray; they will do
+nothing if you have called :c:func:`xas_pause` or :c:func:`xas_set`
+immediately before.
+
+You can call :c:func:`xas_set_update` to have a callback function
+called each time the XArray updates a node. This is used by the page
+cache workingset code to maintain its list of nodes which contain only
+shadow entries.
+
+Functions and structures
+========================
+
+.. kernel-doc:: include/linux/xarray.h
+.. kernel-doc:: lib/xarray.c
--
2.16.1
From: Matthew Wilcox <[email protected]>
Simplify the locking by taking the spinlock while we walk the tree on
the assumption that many acquires and releases of the lock will be
worse than holding the lock for a (potentially) long time.
We could replicate the same locking behaviour with the xarray, but would
have to be careful that the xa_node wasn't RCU-freed under us before we
took the lock.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 39 ++++++++++++++++-----------------------
1 file changed, 16 insertions(+), 23 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0179f9aa7d0e..5b70fbdec605 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2601,35 +2601,28 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
static void shmem_tag_pins(struct address_space *mapping)
{
- struct radix_tree_iter iter;
- void **slot;
- pgoff_t start;
+ XA_STATE(xas, &mapping->pages, 0);
struct page *page;
+ unsigned int tagged = 0;
lru_add_drain();
- start = 0;
- rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- page = radix_tree_deref_slot(slot);
- if (!page || radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- } else if (page_count(page) - page_mapcount(page) > 1) {
- xa_lock_irq(&mapping->pages);
- radix_tree_tag_set(&mapping->pages, iter.index,
- SHMEM_TAG_PINNED);
- xa_unlock_irq(&mapping->pages);
- }
+ xas_lock_irq(&xas);
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xa_is_value(page))
+ continue;
+ if (page_count(page) - page_mapcount(page) > 1)
+ xas_set_tag(&xas, SHMEM_TAG_PINNED);
- if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
- cond_resched_rcu();
- }
+ if (++tagged % XA_CHECK_SCHED)
+ continue;
+
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
+ cond_resched();
+ xas_lock_irq(&xas);
}
- rcu_read_unlock();
+ xas_unlock_irq(&xas);
}
/*
--
2.16.1
From: Matthew Wilcox <[email protected]>
As with shmem_tag_pins(), hold the lock around the entire loop instead
of acquiring & dropping it for each entry we're going to untag.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 59 ++++++++++++++++++++++++-----------------------------------
1 file changed, 24 insertions(+), 35 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 5b70fbdec605..ccb6d7ecdee0 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2636,9 +2636,7 @@ static void shmem_tag_pins(struct address_space *mapping)
*/
static int shmem_wait_for_pins(struct address_space *mapping)
{
- struct radix_tree_iter iter;
- void **slot;
- pgoff_t start;
+ XA_STATE(xas, &mapping->pages, 0);
struct page *page;
int error, scan;
@@ -2646,7 +2644,9 @@ static int shmem_wait_for_pins(struct address_space *mapping)
error = 0;
for (scan = 0; scan <= LAST_SCAN; scan++) {
- if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
+ unsigned int tagged = 0;
+
+ if (!xas_tagged(&xas, SHMEM_TAG_PINNED))
break;
if (!scan)
@@ -2654,45 +2654,34 @@ static int shmem_wait_for_pins(struct address_space *mapping)
else if (schedule_timeout_killable((HZ << scan) / 200))
scan = LAST_SCAN;
- start = 0;
- rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
- start, SHMEM_TAG_PINNED) {
-
- page = radix_tree_deref_slot(slot);
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
- page = NULL;
- }
-
- if (page &&
- page_count(page) - page_mapcount(page) != 1) {
- if (scan < LAST_SCAN)
- goto continue_resched;
-
+ xas_set(&xas, 0);
+ xas_lock_irq(&xas);
+ xas_for_each_tag(&xas, page, ULONG_MAX, SHMEM_TAG_PINNED) {
+ bool clear = true;
+ if (xa_is_value(page))
+ continue;
+ if (page_count(page) - page_mapcount(page) != 1) {
/*
* On the last scan, we clean up all those tags
* we inserted; but make a note that we still
* found pages pinned.
*/
- error = -EBUSY;
+ if (scan == LAST_SCAN)
+ error = -EBUSY;
+ else
+ clear = false;
}
+ if (clear)
+ xas_clear_tag(&xas, SHMEM_TAG_PINNED);
+ if (++tagged % XA_CHECK_SCHED)
+ continue;
- xa_lock_irq(&mapping->pages);
- radix_tree_tag_clear(&mapping->pages,
- iter.index, SHMEM_TAG_PINNED);
- xa_unlock_irq(&mapping->pages);
-continue_resched:
- if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
- cond_resched_rcu();
- }
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
+ cond_resched();
+ xas_lock_irq(&xas);
}
- rcu_read_unlock();
+ xas_unlock_irq(&xas);
}
return error;
--
2.16.1
From: Matthew Wilcox <[email protected]>
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/compression.c | 4 +---
fs/btrfs/extent_io.c | 6 ++----
2 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 9fa8617c7344..23867981d016 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -457,9 +457,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
if (pg_index > end_index)
break;
- rcu_read_lock();
- page = radix_tree_lookup(&mapping->pages, pg_index);
- rcu_read_unlock();
+ page = xa_load(&mapping->pages, pg_index);
if (page && !xa_is_value(page)) {
misses++;
if (misses > 4)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 54cef60dd79b..02e15093ed57 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -5170,11 +5170,9 @@ void clear_extent_buffer_dirty(struct extent_buffer *eb)
clear_page_dirty_for_io(page);
xa_lock_irq(&page->mapping->pages);
- if (!PageDirty(page)) {
- radix_tree_tag_clear(&page->mapping->pages,
- page_index(page),
+ if (!PageDirty(page))
+ __xa_clear_tag(&page->mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- }
xa_unlock_irq(&page->mapping->pages);
ClearPageError(page);
unlock_page(page);
--
2.16.1
From: Matthew Wilcox <[email protected]>
Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/staging/lustre/lustre/llite/glimpse.c | 12 +++++-------
drivers/staging/lustre/lustre/mdc/mdc_request.c | 16 ++++++++--------
2 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index 5f2843da911c..25232fdf5797 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -57,7 +57,7 @@ static const struct cl_lock_descr whole_file = {
};
/*
- * Check whether file has possible unwriten pages.
+ * Check whether file has possible unwritten pages.
*
* \retval 1 file is mmap-ed or has dirty pages
* 0 otherwise
@@ -66,16 +66,14 @@ blkcnt_t dirty_cnt(struct inode *inode)
{
blkcnt_t cnt = 0;
struct vvp_object *vob = cl_inode2vvp(inode);
- void *results[1];
- if (inode->i_mapping)
- cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->pages,
- results, 0, 1,
- PAGECACHE_TAG_DIRTY);
+ if (inode->i_mapping && xa_tagged(&inode->i_mapping->pages,
+ PAGECACHE_TAG_DIRTY))
+ cnt = 1;
if (cnt == 0 && atomic_read(&vob->vob_mmap_cnt) > 0)
cnt = 1;
- return (cnt > 0) ? 1 : 0;
+ return cnt;
}
int cl_glimpse_lock(const struct lu_env *env, struct cl_io *io,
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 2ec79a6b17da..ea23247e9e02 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -934,17 +934,18 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
* hash _smaller_ than one we are looking for.
*/
unsigned long offset = hash_x_index(*hash, hash64);
+ XA_STATE(xas, &mapping->pages, offset);
struct page *page;
- int found;
- xa_lock_irq(&mapping->pages);
- found = radix_tree_gang_lookup(&mapping->pages,
- (void **)&page, offset, 1);
- if (found > 0 && !xa_is_value(page)) {
+ xas_lock_irq(&xas);
+ page = xas_find(&xas, ULONG_MAX);
+ if (xa_is_value(page))
+ page = NULL;
+ if (page) {
struct lu_dirpage *dp;
get_page(page);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
/*
* In contrast to find_lock_page() we are sure that directory
* page cannot be truncated (while DLM lock is held) and,
@@ -992,8 +993,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
page = ERR_PTR(-EIO);
}
} else {
- xa_unlock_irq(&mapping->pages);
- page = NULL;
+ xas_unlock_irq(&xas);
}
return page;
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
A couple of short loops.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/fs-writeback.c | 25 +++++++++----------------
1 file changed, 9 insertions(+), 16 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d5c0e70dbfa8..a937ace7e031 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -339,9 +339,9 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
struct bdi_writeback *new_wb = isw->new_wb;
- struct radix_tree_iter iter;
+ XA_STATE(xas, &mapping->pages, 0);
+ struct page *page;
bool switched = false;
- void **slot;
/*
* By the time control reaches here, RCU grace period has passed
@@ -375,25 +375,18 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
* to possibly dirty pages while PAGECACHE_TAG_WRITEBACK points to
* pages actually under writeback.
*/
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
- PAGECACHE_TAG_DIRTY) {
- struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (likely(page) && PageDirty(page)) {
+ xas_for_each_tag(&xas, page, ULONG_MAX, PAGECACHE_TAG_DIRTY) {
+ if (PageDirty(page)) {
dec_wb_stat(old_wb, WB_RECLAIMABLE);
inc_wb_stat(new_wb, WB_RECLAIMABLE);
}
}
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
- PAGECACHE_TAG_WRITEBACK) {
- struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (likely(page)) {
- WARN_ON_ONCE(!PageWriteback(page));
- dec_wb_stat(old_wb, WB_WRITEBACK);
- inc_wb_stat(new_wb, WB_WRITEBACK);
- }
+ xas_set(&xas, 0);
+ xas_for_each_tag(&xas, page, ULONG_MAX, PAGECACHE_TAG_WRITEBACK) {
+ WARN_ON_ONCE(!PageWriteback(page));
+ dec_wb_stat(old_wb, WB_WRITEBACK);
+ inc_wb_stat(new_wb, WB_WRITEBACK);
}
wb_get(new_wb);
--
2.16.1
From: Matthew Wilcox <[email protected]>
Mostly comment fixes, but one use of __xa_set_tag.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/buffer.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 692ee249fb6a..e69c9d39e5d4 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -585,7 +585,7 @@ void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
EXPORT_SYMBOL(mark_buffer_dirty_inode);
/*
- * Mark the page dirty, and set it dirty in the radix tree, and mark the inode
+ * Mark the page dirty, and set it dirty in the page cache, and mark the inode
* dirty.
*
* If warn is true, then emit a warning if the page is not uptodate and has
@@ -602,8 +602,8 @@ void __set_page_dirty(struct page *page, struct address_space *mapping,
if (page->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages,
- page_index(page), PAGECACHE_TAG_DIRTY);
+ __xa_set_tag(&mapping->pages, page_index(page),
+ PAGECACHE_TAG_DIRTY);
}
xa_unlock_irqrestore(&mapping->pages, flags);
}
@@ -1066,7 +1066,7 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* The relationship between dirty buffers and dirty pages:
*
* Whenever a page has any dirty buffers, the page's dirty bit is set, and
- * the page is tagged dirty in its radix tree.
+ * the page is tagged dirty in the page cache.
*
* At all times, the dirtiness of the buffers represents the dirtiness of
* subsections of the page. If the page has buffers, the page dirty bit is
@@ -1089,9 +1089,9 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* mark_buffer_dirty - mark a buffer_head as needing writeout
* @bh: the buffer_head to mark dirty
*
- * mark_buffer_dirty() will set the dirty bit against the buffer, then set its
- * backing page dirty, then tag the page as dirty in its address_space's radix
- * tree and then attach the address_space's inode to its superblock's dirty
+ * mark_buffer_dirty() will set the dirty bit against the buffer, then set
+ * its backing page dirty, then tag the page as dirty in the page cache
+ * and then attach the address_space's inode to its superblock's dirty
* inode list.
*
* mark_buffer_dirty() is atomic. It takes bh->b_page->mapping->private_lock,
--
2.16.1
From: Matthew Wilcox <[email protected]>
Remove the last mentions of radix tree from various comments.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index c24c4cb76c43..68aeff336822 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -743,7 +743,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
}
/*
- * Remove range of pages and swap entries from radix tree, and free them.
+ * Remove range of pages and swap entries from page cache, and free them.
* If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
*/
static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
@@ -1118,10 +1118,10 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
* We needed to drop mutex to make that restrictive page
* allocation, but the inode might have been freed while we
* dropped it: although a racing shmem_evict_inode() cannot
- * complete without emptying the radix_tree, our page lock
+ * complete without emptying the page cache, our page lock
* on this swapcache page is not enough to prevent that -
* free_swap_and_cache() of our swap entry will only
- * trylock_page(), removing swap from radix_tree whatever.
+ * trylock_page(), removing swap from page cache whatever.
*
* We must not proceed to shmem_add_to_page_cache() if the
* inode has been freed, but of course we cannot rely on
@@ -1187,7 +1187,7 @@ int shmem_unuse(swp_entry_t swap, struct page *page)
false);
if (error)
goto out;
- /* No radix_tree_preload: swap entry keeps a place for page in tree */
+ /* No memory allocation: swap entry occupies the slot for the page */
error = -EAGAIN;
mutex_lock(&shmem_swaplist_mutex);
@@ -1863,7 +1863,7 @@ alloc_nohuge: page = shmem_alloc_and_acct_page(gfp, inode,
spin_unlock_irq(&info->lock);
goto repeat;
}
- if (error == -EEXIST) /* from above or from radix_tree_insert */
+ if (error == -EEXIST)
goto repeat;
return error;
}
@@ -2475,7 +2475,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
}
/*
- * llseek SEEK_DATA or SEEK_HOLE through the radix_tree.
+ * llseek SEEK_DATA or SEEK_HOLE through the page cache.
*/
static pgoff_t shmem_seek_hole_data(struct address_space *mapping,
pgoff_t index, pgoff_t end, int whence)
@@ -2563,7 +2563,7 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
}
/*
- * We need a tag: a new tag would expand every radix_tree_node by 8 bytes,
+ * We need a tag: a new tag would expand every xa_node by 8 bytes,
* so reuse a tag which we firmly believe is never set or cleared on shmem.
*/
#define SHMEM_TAG_PINNED PAGECACHE_TAG_TOWRITE
--
2.16.1
From: Matthew Wilcox <[email protected]>
I'm not 100% convinced that the rewrite of nilfs_copy_back_pages is
correct, but it will at least have different bugs from the current
version.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/nilfs2/btnode.c | 37 +++++++++++-----------------
fs/nilfs2/page.c | 72 +++++++++++++++++++++++++++++++-----------------------
2 files changed, 56 insertions(+), 53 deletions(-)
diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index 9e2a00207436..b5997e8c5441 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -177,42 +177,36 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
ctxt->newbh = NULL;
if (inode->i_blkbits == PAGE_SHIFT) {
- lock_page(obh->b_page);
- /*
- * We cannot call radix_tree_preload for the kernels older
- * than 2.6.23, because it is not exported for modules.
- */
+ void *entry;
+ struct page *opage = obh->b_page;
+ lock_page(opage);
retry:
- err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM);
- if (err)
- goto failed_unlock;
/* BUG_ON(oldkey != obh->b_page->index); */
- if (unlikely(oldkey != obh->b_page->index))
- NILFS_PAGE_BUG(obh->b_page,
+ if (unlikely(oldkey != opage->index))
+ NILFS_PAGE_BUG(opage,
"invalid oldkey %lld (newkey=%lld)",
(unsigned long long)oldkey,
(unsigned long long)newkey);
- xa_lock_irq(&btnc->pages);
- err = radix_tree_insert(&btnc->pages, newkey, obh->b_page);
- xa_unlock_irq(&btnc->pages);
+ entry = xa_cmpxchg(&btnc->pages, newkey, NULL, opage, GFP_NOFS);
/*
* Note: page->index will not change to newkey until
* nilfs_btnode_commit_change_key() will be called.
* To protect the page in intermediate state, the page lock
* is held.
*/
- radix_tree_preload_end();
- if (!err)
+ if (!entry)
return 0;
- else if (err != -EEXIST)
+ if (xa_is_err(entry)) {
+ err = xa_err(entry);
goto failed_unlock;
+ }
err = invalidate_inode_pages2_range(btnc, newkey, newkey);
if (!err)
goto retry;
/* fallback to copy mode */
- unlock_page(obh->b_page);
+ unlock_page(opage);
}
nbh = nilfs_btnode_create_block(btnc, newkey);
@@ -252,9 +246,8 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
mark_buffer_dirty(obh);
xa_lock_irq(&btnc->pages);
- radix_tree_delete(&btnc->pages, oldkey);
- radix_tree_tag_set(&btnc->pages, newkey,
- PAGECACHE_TAG_DIRTY);
+ __xa_erase(&btnc->pages, oldkey);
+ __xa_set_tag(&btnc->pages, newkey, PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&btnc->pages);
opage->index = obh->b_blocknr = newkey;
@@ -283,9 +276,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
return;
if (nbh == NULL) { /* blocksize == pagesize */
- xa_lock_irq(&btnc->pages);
- radix_tree_delete(&btnc->pages, newkey);
- xa_unlock_irq(&btnc->pages);
+ xa_erase(&btnc->pages, newkey);
unlock_page(ctxt->bh->b_page);
} else
brelse(nbh);
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 1c6703efde9e..31d20f624971 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -304,10 +304,10 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
void nilfs_copy_back_pages(struct address_space *dmap,
struct address_space *smap)
{
+ XA_STATE(xas, &dmap->pages, 0);
struct pagevec pvec;
unsigned int i, n;
pgoff_t index = 0;
- int err;
pagevec_init(&pvec);
repeat:
@@ -317,43 +317,56 @@ void nilfs_copy_back_pages(struct address_space *dmap,
for (i = 0; i < pagevec_count(&pvec); i++) {
struct page *page = pvec.pages[i], *dpage;
- pgoff_t offset = page->index;
+ xas_set(&xas, page->index);
lock_page(page);
- dpage = find_lock_page(dmap, offset);
+ do {
+ xas_lock_irq(&xas);
+ dpage = xas_create(&xas);
+ if (!xas_error(&xas))
+ break;
+ xas_unlock_irq(&xas);
+ if (!xas_nomem(&xas, GFP_NOFS)) {
+ unlock_page(page);
+ /*
+ * Callers have a touching faith that this
+ * function cannot fail. Just leak the page.
+ * Other pages may be salvageable if the
+ * xarray doesn't need to allocate memory
+ * to store them.
+ */
+ WARN_ON(1);
+ page->mapping = NULL;
+ put_page(page);
+ goto shadow_remove;
+ }
+ } while (1);
+
if (dpage) {
- /* override existing page on the destination cache */
+ get_page(dpage);
+ xas_unlock_irq(&xas);
+ lock_page(dpage);
+ /* override existing page in the destination cache */
WARN_ON(PageDirty(dpage));
nilfs_copy_page(dpage, page, 0);
unlock_page(dpage);
put_page(dpage);
} else {
- struct page *page2;
-
- /* move the page to the destination cache */
- xa_lock_irq(&smap->pages);
- page2 = radix_tree_delete(&smap->pages, offset);
- WARN_ON(page2 != page);
-
- smap->nrpages--;
- xa_unlock_irq(&smap->pages);
-
- xa_lock_irq(&dmap->pages);
- err = radix_tree_insert(&dmap->pages, offset, page);
- if (unlikely(err < 0)) {
- WARN_ON(err == -EEXIST);
- page->mapping = NULL;
- put_page(page); /* for cache */
- } else {
- page->mapping = dmap;
- dmap->nrpages++;
- if (PageDirty(page))
- radix_tree_tag_set(&dmap->pages,
- offset,
- PAGECACHE_TAG_DIRTY);
- }
+ xas_store(&xas, page);
+ page->mapping = dmap;
+ dmap->nrpages++;
+ if (PageDirty(page))
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&dmap->pages);
}
+
+shadow_remove:
+ /* remove the page from the shadow cache */
+ xa_lock_irq(&smap->pages);
+ WARN_ON(__xa_erase(&smap->pages, xas.xa_index) != page);
+ smap->nrpages--;
+ xa_unlock_irq(&smap->pages);
+
unlock_page(page);
}
pagevec_release(&pvec);
@@ -476,8 +489,7 @@ int __nilfs_clear_page_dirty(struct page *page)
if (mapping) {
xa_lock_irq(&mapping->pages);
if (test_bit(PG_dirty, &page->flags)) {
- radix_tree_tag_clear(&mapping->pages,
- page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&mapping->pages);
return clear_page_dirty_for_io(page);
--
2.16.1
From: Matthew Wilcox <[email protected]>
xa_load has its own RCU locking, so we can eliminate it here.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index a8db3241f826..25a8e611bf1b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -348,12 +348,7 @@ static int shmem_xa_replace(struct address_space *mapping,
static bool shmem_confirm_swap(struct address_space *mapping,
pgoff_t index, swp_entry_t swap)
{
- void *item;
-
- rcu_read_lock();
- item = radix_tree_lookup(&mapping->pages, index);
- rcu_read_unlock();
- return item == swp_to_radix_entry(swap);
+ return xa_load(&mapping->pages, index) == swp_to_radix_entry(swap);
}
/*
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a 1:1 conversion.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 25a8e611bf1b..0179f9aa7d0e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1076,28 +1076,27 @@ static void shmem_evict_inode(struct inode *inode)
clear_inode(inode);
}
-static unsigned long find_swap_entry(struct radix_tree_root *root, void *item)
+static unsigned long find_swap_entry(struct xarray *xa, void *item)
{
- struct radix_tree_iter iter;
- void **slot;
- unsigned long found = -1;
+ XA_STATE(xas, xa, 0);
unsigned int checked = 0;
+ void *entry;
rcu_read_lock();
- radix_tree_for_each_slot(slot, root, &iter, 0) {
- if (*slot == item) {
- found = iter.index;
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (xas_retry(&xas, entry))
+ continue;
+ if (entry == item)
break;
- }
checked++;
- if ((checked % 4096) != 0)
+ if ((checked % XA_CHECK_SCHED) != 0)
continue;
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
-
rcu_read_unlock();
- return found;
+
+ return xas_invalid(&xas) ? -1 : xas.xa_index;
}
/*
--
2.16.1
From: Matthew Wilcox <[email protected]>
Slightly shorter and easier to read code.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/khugepaged.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a1b1a714aff9..5c96452090ec 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1533,8 +1533,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
pgoff_t start, struct page **hpage)
{
struct page *page = NULL;
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
int present, swap;
int node = NUMA_NO_NODE;
int result = SCAN_SUCCEED;
@@ -1543,17 +1542,11 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
swap = 0;
memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= start + HPAGE_PMD_NR)
- break;
-
- page = radix_tree_deref_slot(slot);
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
+ xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
+ if (xas_retry(&xas, page))
continue;
- }
- if (radix_tree_exception(page)) {
+ if (xa_is_value(page)) {
if (++swap > khugepaged_max_ptes_swap) {
result = SCAN_EXCEED_SWAP_PTE;
break;
@@ -1592,7 +1585,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
present++;
if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
Removes sparse warnings.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/extent_io.c | 4 ++--
fs/ext4/inode.c | 2 +-
fs/f2fs/data.c | 2 +-
fs/gfs2/aops.c | 2 +-
include/linux/pagevec.h | 8 +++++---
mm/swap.c | 4 ++--
6 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1f2739702518..54cef60dd79b 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3789,7 +3789,7 @@ int btree_write_cache_pages(struct address_space *mapping,
pgoff_t index;
pgoff_t end; /* Inclusive */
int scanned = 0;
- int tag;
+ xa_tag_t tag;
pagevec_init(&pvec);
if (wbc->range_cyclic) {
@@ -3914,7 +3914,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
pgoff_t done_index;
int range_whole = 0;
int scanned = 0;
- int tag;
+ xa_tag_t tag;
/*
* We have to hold onto the inode so that ordered extents can do their
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c94780075b04..40000331fe5b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2615,7 +2615,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
long left = mpd->wbc->nr_to_write;
pgoff_t index = mpd->first_page;
pgoff_t end = mpd->last_page;
- int tag;
+ xa_tag_t tag;
int i, err = 0;
int blkbits = mpd->inode->i_blkbits;
ext4_lblk_t lblk;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 4eee39befc67..ce029060acd0 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1848,7 +1848,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
pgoff_t last_idx = ULONG_MAX;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;
pagevec_init(&pvec);
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 2f725b4a386b..e85ed63b87d6 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -371,7 +371,7 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
pgoff_t done_index;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;
pagevec_init(&pvec);
if (wbc->range_cyclic) {
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 6dc456ac6136..955bd6425903 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -9,6 +9,8 @@
#ifndef _LINUX_PAGEVEC_H
#define _LINUX_PAGEVEC_H
+#include <linux/xarray.h>
+
/* 15 pointers + header align the pagevec structure to a power of two */
#define PAGEVEC_SIZE 15
@@ -40,12 +42,12 @@ static inline unsigned pagevec_lookup(struct pagevec *pvec,
unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag);
+ xa_tag_t tag);
unsigned pagevec_lookup_range_nr_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag, unsigned max_pages);
+ xa_tag_t tag, unsigned max_pages);
static inline unsigned pagevec_lookup_tag(struct pagevec *pvec,
- struct address_space *mapping, pgoff_t *index, int tag)
+ struct address_space *mapping, pgoff_t *index, xa_tag_t tag)
{
return pagevec_lookup_range_tag(pvec, mapping, index, (pgoff_t)-1, tag);
}
diff --git a/mm/swap.c b/mm/swap.c
index f62c5c7198f2..10c94ff9e542 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -990,7 +990,7 @@ EXPORT_SYMBOL(pagevec_lookup_range);
unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag)
+ xa_tag_t tag)
{
pvec->nr = find_get_pages_range_tag(mapping, index, end, tag,
PAGEVEC_SIZE, pvec->pages);
@@ -1000,7 +1000,7 @@ EXPORT_SYMBOL(pagevec_lookup_range_tag);
unsigned pagevec_lookup_range_nr_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag, unsigned max_pages)
+ xa_tag_t tag, unsigned max_pages)
{
pvec->nr = find_get_pages_range_tag(mapping, index, end, tag,
min_t(unsigned int, max_pages, PAGEVEC_SIZE), pvec->pages);
--
2.16.1
From: Matthew Wilcox <[email protected]>
I found another victim of the radix tree being hard to use. Because
there was no call to radix_tree_preload(), khugepaged was allocating
radix_tree_nodes using GFP_ATOMIC.
I also converted a local_irq_save()/restore() pair to
disable()/enable().
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/khugepaged.c | 158 +++++++++++++++++++++++---------------------------------
1 file changed, 65 insertions(+), 93 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 70e10c1f3127..a1b1a714aff9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1282,17 +1282,17 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
*
* Basic scheme is simple, details are more complex:
* - allocate and freeze a new huge page;
- * - scan over radix tree replacing old pages the new one
+ * - scan page cache replacing old pages with the new one
* + swap in pages if necessary;
* + fill in gaps;
- * + keep old pages around in case if rollback is required;
- * - if replacing succeed:
+ * + keep old pages around in case rollback is required;
+ * - if replacing succeeds:
* + copy data over;
* + free old pages;
* + unfreeze huge page;
* - if replacing failed;
* + put all pages back and unfreeze them;
- * + restore gaps in the radix-tree;
+ * + restore gaps in the page cache;
* + free huge page;
*/
static void collapse_shmem(struct mm_struct *mm,
@@ -1300,12 +1300,11 @@ static void collapse_shmem(struct mm_struct *mm,
struct page **hpage, int node)
{
gfp_t gfp;
- struct page *page, *new_page, *tmp;
+ struct page *new_page;
struct mem_cgroup *memcg;
pgoff_t index, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
int nr_none = 0, result = SCAN_SUCCEED;
VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
@@ -1330,48 +1329,48 @@ static void collapse_shmem(struct mm_struct *mm,
__SetPageLocked(new_page);
BUG_ON(!page_ref_freeze(new_page, 1));
-
/*
- * At this point the new_page is 'frozen' (page_count() is zero), locked
- * and not up-to-date. It's safe to insert it into radix tree, because
- * nobody would be able to map it or use it in other way until we
- * unfreeze it.
+ * At this point the new_page is 'frozen' (page_count() is zero),
+ * locked and not up-to-date. It's safe to insert it into the page
+ * cache, because nobody would be able to map it or use it in other
+ * way until we unfreeze it.
*/
- index = start;
- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- int n = min(iter.index, end) - index;
-
- /*
- * Handle holes in the radix tree: charge it from shmem and
- * insert relevant subpage of new_page into the radix-tree.
- */
- if (n && !shmem_charge(mapping->host, n)) {
- result = SCAN_FAIL;
+ /* This will be less messy when we use multi-index entries */
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, end - 1);
+ if (!xas_error(&xas))
break;
- }
- nr_none += n;
- for (; index < min(iter.index, end); index++) {
- radix_tree_insert(&mapping->pages, index,
- new_page + (index % HPAGE_PMD_NR));
- }
+ xas_unlock_irq(&xas);
+ if (!xas_nomem(&xas, GFP_KERNEL))
+ goto out;
+ } while (1);
- /* We are done. */
- if (index >= end)
- break;
+ for (index = start; index < end; index++) {
+ struct page *page = xas_next(&xas);
+
+ VM_BUG_ON(index != xas.xa_index);
+ if (!page) {
+ if (!shmem_charge(mapping->host, 1)) {
+ result = SCAN_FAIL;
+ break;
+ }
+ xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
+ nr_none++;
+ continue;
+ }
- page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
if (xa_is_value(page) || !PageUptodate(page)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
SGP_NOHUGE)) {
result = SCAN_FAIL;
- goto tree_unlocked;
+ goto xa_unlocked;
}
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
+ xas_set(&xas, index);
} else if (trylock_page(page)) {
get_page(page);
} else {
@@ -1391,7 +1390,7 @@ static void collapse_shmem(struct mm_struct *mm,
result = SCAN_TRUNCATED;
goto out_unlock;
}
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
if (isolate_lru_page(page)) {
result = SCAN_DEL_PAGE_LRU;
@@ -1401,17 +1400,16 @@ static void collapse_shmem(struct mm_struct *mm,
if (page_mapped(page))
unmap_mapping_pages(mapping, index, 1, false);
- xa_lock_irq(&mapping->pages);
+ xas_lock(&xas);
+ xas_set(&xas, index);
- slot = radix_tree_lookup_slot(&mapping->pages, index);
- VM_BUG_ON_PAGE(page != radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock), page);
+ VM_BUG_ON_PAGE(page != xas_load(&xas), page);
VM_BUG_ON_PAGE(page_mapped(page), page);
/*
* The page is expected to have page_count() == 3:
* - we hold a pin on it;
- * - one reference from radix tree;
+ * - one reference from page cache;
* - one from isolate_lru_page;
*/
if (!page_ref_freeze(page, 3)) {
@@ -1426,56 +1424,30 @@ static void collapse_shmem(struct mm_struct *mm,
list_add_tail(&page->lru, &pagelist);
/* Finally, replace with the new page. */
- radix_tree_replace_slot(&mapping->pages, slot,
- new_page + (index % HPAGE_PMD_NR));
-
- slot = radix_tree_iter_resume(slot, &iter);
- index++;
+ xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
continue;
out_lru:
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
putback_lru_page(page);
out_isolate_failed:
unlock_page(page);
put_page(page);
- goto tree_unlocked;
+ goto xa_unlocked;
out_unlock:
unlock_page(page);
put_page(page);
break;
}
+ xas_unlock_irq(&xas);
- /*
- * Handle hole in radix tree at the end of the range.
- * This code only triggers if there's nothing in radix tree
- * beyond 'end'.
- */
- if (result == SCAN_SUCCEED && index < end) {
- int n = end - index;
-
- if (!shmem_charge(mapping->host, n)) {
- result = SCAN_FAIL;
- goto tree_locked;
- }
-
- for (; index < end; index++) {
- radix_tree_insert(&mapping->pages, index,
- new_page + (index % HPAGE_PMD_NR));
- }
- nr_none += n;
- }
-
-tree_locked:
- xa_unlock_irq(&mapping->pages);
-tree_unlocked:
-
+xa_unlocked:
if (result == SCAN_SUCCEED) {
- unsigned long flags;
+ struct page *page, *tmp;
struct zone *zone = page_zone(new_page);
/*
- * Replacing old pages with new one has succeed, now we need to
- * copy the content and free old pages.
+ * Replacing old pages with new one has succeeded, now we
+ * need to copy the content and free the old pages.
*/
list_for_each_entry_safe(page, tmp, &pagelist, lru) {
copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
@@ -1489,16 +1461,16 @@ static void collapse_shmem(struct mm_struct *mm,
put_page(page);
}
- local_irq_save(flags);
+ local_irq_disable();
__inc_node_page_state(new_page, NR_SHMEM_THPS);
if (nr_none) {
__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
}
- local_irq_restore(flags);
+ local_irq_enable();
/*
- * Remove pte page tables, so we can re-faulti
+ * Remove pte page tables, so we can re-fault
* the page as huge.
*/
retract_page_tables(mapping, start);
@@ -1513,37 +1485,37 @@ static void collapse_shmem(struct mm_struct *mm,
*hpage = NULL;
} else {
- /* Something went wrong: rollback changes to the radix-tree */
+ struct page *page;
+ /* Something went wrong: roll back page cache changes */
shmem_uncharge(mapping->host, nr_none);
- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= end)
- break;
+ xas_lock_irq(&xas);
+ xas_set(&xas, start);
+ xas_for_each(&xas, page, end - 1) {
page = list_first_entry_or_null(&pagelist,
struct page, lru);
- if (!page || iter.index < page->index) {
+ if (!page || xas.xa_index < page->index) {
if (!nr_none)
break;
nr_none--;
/* Put holes back where they were */
- radix_tree_delete(&mapping->pages, iter.index);
+ xas_store(&xas, NULL);
continue;
}
- VM_BUG_ON_PAGE(page->index != iter.index, page);
+ VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
/* Unfreeze the page. */
list_del(&page->lru);
page_ref_unfreeze(page, 2);
- radix_tree_replace_slot(&mapping->pages, slot, page);
- slot = radix_tree_iter_resume(slot, &iter);
- xa_unlock_irq(&mapping->pages);
+ xas_store(&xas, page);
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
putback_lru_page(page);
unlock_page(page);
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
/* Unfreeze new_page, caller would take care about freeing it */
page_ref_unfreeze(new_page, 1);
--
2.16.1
From: Matthew Wilcox <[email protected]>
xa_store() differs from radix_tree_insert() in that it will overwrite an
existing element in the array rather than returning an error. This is
the behaviour which most users want, and those that want more complex
behaviour generally want to use the xas family of routines anyway.
For memory allocation, xa_store() will first attempt to request memory
from the slab allocator; if memory is not immediately available, it will
drop the xa_lock and allocate memory, keeping a pointer in the xa_state.
It does not use the per-CPU cache, although those will continue to exist
until all radix tree users are converted to the xarray.
This patch also includes xa_erase() and __xa_erase() for a streamlined
way to store NULL. Since there is no need to allocate memory in order
to store a NULL in the XArray, we do not need to trouble the user with
deciding what memory allocation flags to use.
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 109 +++++
lib/radix-tree.c | 4 +-
lib/xarray.c | 648 ++++++++++++++++++++++++++++++
tools/include/linux/spinlock.h | 2 +
tools/testing/radix-tree/linux/kernel.h | 4 +
tools/testing/radix-tree/linux/lockdep.h | 11 +
tools/testing/radix-tree/linux/rcupdate.h | 1 +
tools/testing/radix-tree/test.c | 32 ++
tools/testing/radix-tree/test.h | 5 +
tools/testing/radix-tree/xarray-test.c | 113 +++++-
10 files changed, 925 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/radix-tree/linux/lockdep.h
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 1cf012256eab..38e290df2ff0 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -157,10 +157,17 @@ typedef unsigned __bitwise xa_tag_t;
#define XA_PRESENT ((__force xa_tag_t)8U)
#define XA_TAG_MAX XA_TAG_2
+enum xa_lock_type {
+ XA_LOCK_IRQ = 1,
+ XA_LOCK_BH = 2,
+};
+
/*
* Values for xa_flags. The radix tree stores its GFP flags in the xa_flags,
* and we remain compatible with that.
*/
+#define XA_FLAGS_LOCK_IRQ ((__force gfp_t)XA_LOCK_IRQ)
+#define XA_FLAGS_LOCK_BH ((__force gfp_t)XA_LOCK_BH)
#define XA_FLAGS_TAG(tag) ((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
(__force unsigned)(tag)))
@@ -210,6 +217,7 @@ struct xarray {
void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
+void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -227,6 +235,35 @@ static inline void xa_init(struct xarray *xa)
xa_init_flags(xa, 0);
}
+/**
+ * xa_erase() - Erase this entry from the XArray.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_erase(struct xarray *xa, unsigned long index)
+{
+ return xa_store(xa, index, NULL, 0);
+}
+
+/**
+ * xa_empty() - Determine if an array has any present entries.
+ * @xa: XArray.
+ *
+ * Context: Any context.
+ * Return: %true if the array contains only NULL pointers.
+ */
+static inline bool xa_empty(const struct xarray *xa)
+{
+ return xa->xa_head == NULL;
+}
+
/**
* xa_tagged() - Inquire whether any entry in this array has a tag set
* @xa: Array
@@ -254,7 +291,11 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
/*
* Versions of the normal API which require the caller to hold the xa_lock.
+ * If the GFP flags allow it, will drop the lock in order to allocate
+ * memory, then reacquire it afterwards.
*/
+void *__xa_erase(struct xarray *, unsigned long index);
+void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -350,6 +391,12 @@ static inline void *xa_entry_locked(struct xarray *xa,
lockdep_is_held(&xa->xa_lock));
}
+/* Private */
+static inline void *xa_mk_node(const struct xa_node *node)
+{
+ return (void *)((unsigned long)node | 2);
+}
+
/* Private */
static inline struct xa_node *xa_to_node(const void *entry)
{
@@ -534,6 +581,12 @@ static inline bool xas_valid(const struct xa_state *xas)
return !xas_invalid(xas);
}
+/* True if the node represents head-of-tree, RESTART or BOUNDS */
+static inline bool xas_top(struct xa_node *node)
+{
+ return node <= XAS_RESTART;
+}
+
/**
* xas_reset() - Reset an XArray operation state.
* @xas: XArray operation state.
@@ -570,10 +623,15 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
}
void *xas_load(struct xa_state *);
+void *xas_store(struct xa_state *, void *entry);
+void *xas_create(struct xa_state *);
bool xas_get_tag(const struct xa_state *, xa_tag_t);
void xas_set_tag(const struct xa_state *, xa_tag_t);
void xas_clear_tag(const struct xa_state *, xa_tag_t);
+void xas_init_tags(const struct xa_state *);
+
+bool xas_nomem(struct xa_state *, gfp_t);
/**
* xas_reload() - Refetch an entry from the xarray.
@@ -598,4 +656,55 @@ static inline void *xas_reload(struct xa_state *xas)
return xa_head(xas->xa);
}
+/**
+ * xas_set() - Set up XArray operation state for a different index.
+ * @xas: XArray operation state.
+ * @index: New index into the XArray.
+ *
+ * Move the operation state to refer to a different index. This will
+ * have the effect of starting a walk from the top; see xas_next()
+ * to move to an adjacent index.
+ */
+static inline void xas_set(struct xa_state *xas, unsigned long index)
+{
+ xas->xa_index = index;
+ xas->xa_node = XAS_RESTART;
+}
+
+/**
+ * xas_set_order() - Set up XArray operation state for a multislot entry.
+ * @xas: XArray operation state.
+ * @index: Target of the operation.
+ * @order: Entry occupies 2^@order indices.
+ */
+static inline void xas_set_order(struct xa_state *xas, unsigned long index,
+ unsigned int order)
+{
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ xas->xa_index = (index >> order) << order;
+ xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
+ xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+ xas->xa_node = XAS_RESTART;
+#else
+ BUG_ON(order > 0);
+ xas_set(xas, index);
+#endif
+}
+
+/**
+ * xas_set_update() - Set up XArray operation state for a callback.
+ * @xas: XArray operation state.
+ * @update: Function to call when updating a node.
+ *
+ * The XArray can notify a caller after it has updated an xa_node.
+ * This is advanced functionality and is only needed by the page cache.
+ */
+static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
+{
+ xas->xa_update = update;
+}
+
+/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
+void xas_destroy(struct xa_state *);
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index d3cb26104589..05bd8172b148 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -47,7 +47,7 @@ static unsigned long height_to_maxnodes[RADIX_TREE_MAX_PATH + 1] __read_mostly;
/*
* Radix tree node cache.
*/
-static struct kmem_cache *radix_tree_node_cachep;
+struct kmem_cache *radix_tree_node_cachep;
/*
* The radix tree is variable-height, so an insert operation not only has
@@ -365,7 +365,7 @@ radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
return ret;
}
-static void radix_tree_node_rcu_free(struct rcu_head *head)
+void radix_tree_node_rcu_free(struct rcu_head *head)
{
struct radix_tree_node *node =
container_of(head, struct radix_tree_node, rcu_head);
diff --git a/lib/xarray.c b/lib/xarray.c
index ca25a7a4a4fa..9e50804f168c 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -7,6 +7,8 @@
#include <linux/bitmap.h>
#include <linux/export.h>
+#include <linux/list.h>
+#include <linux/slab.h>
#include <linux/xarray.h>
/*
@@ -39,6 +41,11 @@ static inline struct xa_node *xa_parent_locked(struct xarray *xa,
lockdep_is_held(&xa->xa_lock));
}
+static inline unsigned int xa_lock_type(const struct xarray *xa)
+{
+ return (__force unsigned int)xa->xa_flags & 3;
+}
+
static inline void xa_tag_set(struct xarray *xa, xa_tag_t tag)
{
if (!(xa->xa_flags & XA_FLAGS_TAG(tag)))
@@ -74,6 +81,10 @@ static inline bool node_any_tag(struct xa_node *node, xa_tag_t tag)
return !bitmap_empty(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
}
+#define tag_inc(tag) do { \
+ tag = (__force xa_tag_t)((__force unsigned)(tag) + 1); \
+} while (0)
+
/* extracts the offset within this node from the index */
static unsigned int get_offset(unsigned long index, struct xa_node *node)
{
@@ -168,6 +179,515 @@ void *xas_load(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_load);
+/* Move the radix tree node cache here */
+extern struct kmem_cache *radix_tree_node_cachep;
+extern void radix_tree_node_rcu_free(struct rcu_head *head);
+
+#define XA_RCU_FREE ((struct xarray *)1)
+
+static void xa_node_free(struct xa_node *node)
+{
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ node->array = XA_RCU_FREE;
+ call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+}
+
+/*
+ * xas_destroy() - Free any resources allocated during the XArray operation.
+ * @xas: XArray operation state.
+ *
+ * This function is now internal-only (and will be made static once
+ * idr_preload() is removed).
+ */
+void xas_destroy(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_alloc;
+
+ if (!node)
+ return;
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ kmem_cache_free(radix_tree_node_cachep, node);
+ xas->xa_alloc = NULL;
+}
+
+/**
+ * xas_nomem() - Allocate memory if needed.
+ * @xas: XArray operation state.
+ * @gfp: Memory allocation flags.
+ *
+ * If we need to add new nodes to the XArray, we try to allocate memory
+ * with GFP_NOWAIT while holding the lock, which will usually succeed.
+ * If it fails, @xas is flagged as needing memory to continue. The caller
+ * should drop the lock and call xas_nomem(). If xas_nomem() succeeds,
+ * the caller should retry the operation.
+ *
+ * Forward progress is guaranteed as one node is allocated here and
+ * stored in the xa_state where it will be found by xas_alloc(). More
+ * nodes will likely be found in the slab allocator, but we do not tie
+ * them up here.
+ *
+ * Return: true if memory was needed, and was successfully allocated.
+ */
+bool xas_nomem(struct xa_state *xas, gfp_t gfp)
+{
+ if (xas->xa_node != XA_ERROR(-ENOMEM)) {
+ xas_destroy(xas);
+ return false;
+ }
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ if (!xas->xa_alloc)
+ return false;
+ XA_NODE_BUG_ON(xas->xa_alloc, !list_empty(&xas->xa_alloc->private_list));
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+EXPORT_SYMBOL_GPL(xas_nomem);
+
+/*
+ * __xas_nomem() - Drop locks and allocate memory if needed.
+ * @xas: XArray operation state.
+ * @gfp: Memory allocation flags.
+ *
+ * Internal variant of xas_nomem().
+ *
+ * Return: true if memory was needed, and was successfully allocated.
+ */
+static bool __xas_nomem(struct xa_state *xas, gfp_t gfp)
+ __must_hold(xas->xa->xa_lock)
+{
+ unsigned int lock_type = xa_lock_type(xas->xa);
+
+ if (xas->xa_node != XA_ERROR(-ENOMEM)) {
+ xas_destroy(xas);
+ return false;
+ }
+ if (gfpflags_allow_blocking(gfp)) {
+ if (lock_type == XA_LOCK_IRQ)
+ xas_unlock_irq(xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_unlock_bh(xas);
+ else
+ xas_unlock(xas);
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ if (lock_type == XA_LOCK_IRQ)
+ xas_lock_irq(xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_lock_bh(xas);
+ else
+ xas_lock(xas);
+ } else {
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ }
+ if (!xas->xa_alloc)
+ return false;
+ XA_NODE_BUG_ON(xas->xa_alloc, !list_empty(&xas->xa_alloc->private_list));
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+
+static void xas_update(struct xa_state *xas, struct xa_node *node)
+{
+ if (xas->xa_update)
+ xas->xa_update(node);
+ else
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+}
+
+static void *xas_alloc(struct xa_state *xas, unsigned int shift)
+{
+ struct xa_node *parent = xas->xa_node;
+ struct xa_node *node = xas->xa_alloc;
+
+ if (xas_invalid(xas))
+ return NULL;
+
+ if (node) {
+ xas->xa_alloc = NULL;
+ } else {
+ node = kmem_cache_alloc(radix_tree_node_cachep,
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (!node) {
+ xas_set_err(xas, -ENOMEM);
+ return NULL;
+ }
+ }
+
+ if (parent) {
+ node->offset = xas->xa_offset;
+ parent->count++;
+ XA_NODE_BUG_ON(node, parent->count > XA_CHUNK_SIZE);
+ xas_update(xas, parent);
+ }
+ XA_NODE_BUG_ON(node, shift > BITS_PER_LONG);
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ node->shift = shift;
+ node->count = 0;
+ node->nr_values = 0;
+ RCU_INIT_POINTER(node->parent, xas->xa_node);
+ node->array = xas->xa;
+
+ return node;
+}
+
+/*
+ * Use this to calculate the maximum index that will need to be created
+ * in order to add the entry described by @xas. Because we cannot store a
+ * multiple-index entry at index 0, the calculation is a little more complex
+ * than you might expect.
+ */
+static unsigned long xas_max(struct xa_state *xas)
+{
+ unsigned long max = xas->xa_index;
+
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ if (xas->xa_shift || xas->xa_sibs) {
+ unsigned long mask;
+ mask = (((xas->xa_sibs + 1UL) << xas->xa_shift) - 1);
+ max |= mask;
+ if (mask == max)
+ max++;
+ }
+#endif
+
+ return max;
+}
+
+/* The maximum index that can be contained in the array without expanding it */
+static unsigned long max_index(void *entry)
+{
+ if (!xa_is_node(entry))
+ return 0;
+ return (XA_CHUNK_SIZE << xa_to_node(entry)->shift) - 1;
+}
+
+static void xas_shrink(struct xa_state *xas)
+{
+ struct xarray *xa = xas->xa;
+ struct xa_node *node = xas->xa_node;
+
+ for (;;) {
+ void *entry;
+
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ if (node->count != 1)
+ break;
+ entry = xa_entry_locked(xa, node, 0);
+ if (!entry)
+ break;
+ if (!xa_is_node(entry) && node->shift)
+ break;
+ xas->xa_node = XAS_BOUNDS;
+
+ RCU_INIT_POINTER(xa->xa_head, entry);
+
+ node->count = 0;
+ node->nr_values = 0;
+ if (!xa_is_node(entry))
+ RCU_INIT_POINTER(node->slots[0], XA_RETRY_ENTRY);
+ xas_update(xas, node);
+ xa_node_free(node);
+ if (!xa_is_node(entry))
+ break;
+ node = xa_to_node(entry);
+ node->parent = NULL;
+ }
+}
+
+/*
+ * xas_delete_node() - Attempt to delete an xa_node
+ * @xas: Array operation state.
+ *
+ * Attempts to delete the @xas->xa_node. This will fail if the node has
+ * a non-zero reference count.
+ */
+static void xas_delete_node(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ for (;;) {
+ struct xa_node *parent;
+
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ if (node->count)
+ break;
+
+ parent = xa_parent_locked(xas->xa, node);
+ xas->xa_node = parent;
+ xas->xa_offset = node->offset;
+ xa_node_free(node);
+
+ if (!parent) {
+ xas->xa->xa_head = NULL;
+ xas->xa_node = XAS_BOUNDS;
+ return;
+ }
+
+ parent->slots[xas->xa_offset] = NULL;
+ parent->count--;
+ XA_NODE_BUG_ON(parent, parent->count > XA_CHUNK_SIZE);
+ node = parent;
+ xas_update(xas, node);
+ }
+
+ if (!node->parent)
+ xas_shrink(xas);
+}
+
+/**
+ * xas_free_nodes() - Free this node and all nodes that it references
+ * @xas: Array operation state.
+ * @top: Node to free
+ *
+ * This node has been removed from the tree. We must now free it and all
+ * of its subnodes. There may be RCU walkers with references into the tree,
+ * so we must replace all entries with retry markers.
+ */
+static void xas_free_nodes(struct xa_state *xas, struct xa_node *top)
+{
+ unsigned int offset = 0;
+ struct xa_node *node = top;
+
+ for (;;) {
+ void *entry = xa_entry_locked(xas->xa, node, offset);
+
+ if (xa_is_node(entry)) {
+ node = xa_to_node(entry);
+ offset = 0;
+ continue;
+ }
+ if (entry)
+ RCU_INIT_POINTER(node->slots[offset], XA_RETRY_ENTRY);
+ offset++;
+ while (offset == XA_CHUNK_SIZE) {
+ struct xa_node *parent;
+
+ parent = xa_parent_locked(xas->xa, node);
+ offset = node->offset + 1;
+ node->count = 0;
+ node->nr_values = 0;
+ xas_update(xas, node);
+ xa_node_free(node);
+ if (node == top)
+ return;
+ node = parent;
+ }
+ }
+}
+
+/*
+ * xas_expand adds nodes to the head of the tree until it has reached
+ * sufficient height to be able to contain @xas->xa_index
+ */
+static int xas_expand(struct xa_state *xas, void *head)
+{
+ struct xarray *xa = xas->xa;
+ struct xa_node *node = NULL;
+ unsigned int shift = 0;
+ unsigned long max = xas_max(xas);
+
+ if (!head) {
+ if (max == 0)
+ return 0;
+ while ((max >> shift) >= XA_CHUNK_SIZE)
+ shift += XA_CHUNK_SHIFT;
+ return shift + XA_CHUNK_SHIFT;
+ } else if (xa_is_node(head)) {
+ node = xa_to_node(head);
+ shift = node->shift + XA_CHUNK_SHIFT;
+ }
+ xas->xa_node = NULL;
+
+ while (max > max_index(head)) {
+ xa_tag_t tag = 0;
+
+ XA_NODE_BUG_ON(node, shift > BITS_PER_LONG);
+ node = xas_alloc(xas, shift);
+ if (!node)
+ return -ENOMEM;
+
+ node->count = 1;
+ if (xa_is_value(head))
+ node->nr_values = 1;
+ RCU_INIT_POINTER(node->slots[0], head);
+
+ /* Propagate the aggregated tag info to the new child */
+ for (;;) {
+ if (xa_tagged(xa, tag))
+ node_set_tag(node, 0, tag);
+ if (tag == XA_TAG_MAX)
+ break;
+ tag_inc(tag);
+ }
+
+ /*
+ * Now that the new node is fully initialised, we can add
+ * it to the tree
+ */
+ if (xa_is_node(head)) {
+ xa_to_node(head)->offset = 0;
+ rcu_assign_pointer(xa_to_node(head)->parent, node);
+ }
+ head = xa_mk_node(node);
+ rcu_assign_pointer(xa->xa_head, head);
+ xas_update(xas, node);
+
+ shift += XA_CHUNK_SHIFT;
+ }
+
+ xas->xa_node = node;
+ return shift;
+}
+
+/**
+ * xas_create() - Create a slot to store an entry in.
+ * @xas: XArray operation state.
+ *
+ * Most users will not need to call this function directly, as it is called
+ * by xas_store(). It is useful for doing conditional store operations
+ * (see the xa_cmpxchg() implementation for an example).
+ *
+ * Return: If the slot already existed, returns the contents of this slot.
+ * If the slot was newly created, returns NULL. If it failed to create the
+ * slot, returns NULL and indicates the error in @xas.
+ */
+void *xas_create(struct xa_state *xas)
+{
+ struct xarray *xa = xas->xa;
+ void *entry;
+ void __rcu **slot;
+ struct xa_node *node = xas->xa_node;
+ int shift;
+ unsigned int order = xas->xa_shift;
+
+ if (xas_top(node)) {
+ entry = xa_head_locked(xa);
+ xas->xa_node = NULL;
+ shift = xas_expand(xas, entry);
+ if (shift < 0)
+ return NULL;
+ entry = xa_head_locked(xa);
+ slot = &xa->xa_head;
+ } else if (xas_error(xas)) {
+ return NULL;
+ } else if (node) {
+ unsigned int offset = xas->xa_offset;
+
+ shift = node->shift;
+ entry = xa_entry_locked(xa, node, offset);
+ slot = &node->slots[offset];
+ } else {
+ shift = 0;
+ entry = xa_head_locked(xa);
+ slot = &xa->xa_head;
+ }
+
+ while (shift > order) {
+ shift -= XA_CHUNK_SHIFT;
+ if (!entry) {
+ node = xas_alloc(xas, shift);
+ if (!node)
+ break;
+ rcu_assign_pointer(*slot, xa_mk_node(node));
+ } else if (xa_is_node(entry)) {
+ node = xa_to_node(entry);
+ } else {
+ break;
+ }
+ entry = xas_descend(xas, node);
+ slot = &node->slots[xas->xa_offset];
+ }
+
+ return entry;
+}
+EXPORT_SYMBOL_GPL(xas_create);
+
+static void store_siblings(struct xa_state *xas, void *entry, void *curr,
+ int *countp, int *valuesp)
+{
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ struct xa_node *node = xas->xa_node;
+ unsigned int sibs, offset = xas->xa_offset;
+ void *sibling = entry ? xa_mk_sibling(offset) : NULL;
+
+ if (!entry)
+ sibs = XA_CHUNK_MASK - offset;
+ else if (xas->xa_shift < node->shift)
+ sibs = 0;
+ else
+ sibs = xas->xa_sibs;
+
+ while (sibs--) {
+ void *next = xa_entry(xas->xa, node, ++offset);
+
+ if (!xa_is_sibling(next)) {
+ if (!entry)
+ break;
+ curr = next;
+ }
+ RCU_INIT_POINTER(node->slots[offset], sibling);
+ if (xa_is_node(next))
+ xas_free_nodes(xas, xa_to_node(next));
+ *countp += !next - !entry;
+ *valuesp += !xa_is_value(curr) - !xa_is_value(entry);
+ }
+#endif
+}
+
+/**
+ * xas_store() - Store this entry in the XArray.
+ * @xas: XArray operation state.
+ * @entry: New entry.
+ *
+ * Return: The old entry at this index.
+ */
+void *xas_store(struct xa_state *xas, void *entry)
+{
+ struct xa_node *node;
+ int count, values;
+ void *curr;
+
+ if (entry)
+ curr = xas_create(xas);
+ else
+ curr = xas_load(xas);
+ if (xas_invalid(xas))
+ return curr;
+ if ((curr == entry) && !xas->xa_sibs)
+ return curr;
+
+ node = xas->xa_node;
+ if (!entry)
+ xas_init_tags(xas);
+ /*
+ * Must clear the tags before setting the entry to NULL otherwise
+ * xas_for_each_tag may find a NULL entry and stop early.
+ */
+ if (node)
+ rcu_assign_pointer(node->slots[xas->xa_offset], entry);
+ else
+ rcu_assign_pointer(xas->xa->xa_head, entry);
+
+ values = !xa_is_value(curr) - !xa_is_value(entry);
+ count = !curr - !entry;
+ if (xa_is_node(curr))
+ xas_free_nodes(xas, xa_to_node(curr));
+
+ if (node) {
+ store_siblings(xas, entry, curr, &count, &values);
+ node->count += count;
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ node->nr_values += values;
+ XA_NODE_BUG_ON(node, node->nr_values > XA_CHUNK_SIZE);
+ if (count || values)
+ xas_update(xas, node);
+ if (count < 0)
+ xas_delete_node(xas);
+ }
+
+ return curr;
+}
+EXPORT_SYMBOL_GPL(xas_store);
+
/**
* xas_get_tag() - Returns the state of this tag.
* @xas: XArray operation state.
@@ -247,6 +767,30 @@ void xas_clear_tag(const struct xa_state *xas, xa_tag_t tag)
}
EXPORT_SYMBOL_GPL(xas_clear_tag);
+/**
+ * xas_init_tags() - Initialise all tags for the entry
+ * @xas: Array operations state.
+ *
+ * Initialise all tags for the entry specified by @xas. If we're tracking
+ * free entries with a tag, we need to set it on all entries. All other
+ * tags are cleared.
+ *
+ * This implementation is not as efficient as it could be; we may walk
+ * up the tree multiple times.
+ */
+void xas_init_tags(const struct xa_state *xas)
+{
+ xa_tag_t tag = 0;
+
+ for (;;) {
+ xas_clear_tag(xas, tag);
+ if (tag == XA_TAG_MAX)
+ break;
+ tag_inc(tag);
+ }
+}
+EXPORT_SYMBOL_GPL(xas_init_tags);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -260,9 +804,19 @@ EXPORT_SYMBOL_GPL(xas_clear_tag);
*/
void xa_init_flags(struct xarray *xa, gfp_t flags)
{
+ unsigned int lock_type;
+ static struct lock_class_key xa_lock_irq;
+ static struct lock_class_key xa_lock_bh;
+
spin_lock_init(&xa->xa_lock);
xa->xa_flags = flags;
xa->xa_head = NULL;
+
+ lock_type = xa_lock_type(xa);
+ if (lock_type == XA_LOCK_IRQ)
+ lockdep_set_class(&xa->xa_lock, &xa_lock_irq);
+ else if (lock_type == XA_LOCK_BH)
+ lockdep_set_class(&xa->xa_lock, &xa_lock_bh);
}
EXPORT_SYMBOL(xa_init_flags);
@@ -289,6 +843,100 @@ void *xa_load(struct xarray *xa, unsigned long index)
}
EXPORT_SYMBOL(xa_load);
+static void *xas_result(struct xa_state *xas, void *curr)
+{
+ XA_NODE_BUG_ON(xas->xa_node, xa_is_internal(curr));
+ if (xas_error(xas))
+ curr = xas->xa_node;
+ return curr;
+}
+
+/**
+ * __xa_erase() - Erase this entry from the XArray while locked.
+ * @xa: XArray.
+ * @index: Index into array.
+ *
+ * If the entry at this index is a multi-index entry then all indices will
+ * be erased, and the entry will no longer be a multi-index entry.
+ * This function expects the xa_lock to be held on entry; the lock is
+ * never dropped, since erasing an entry does not allocate memory.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry.
+ * Return: The old entry at this index.
+ */
+void *__xa_erase(struct xarray *xa, unsigned long index)
+{
+ XA_STATE(xas, xa, index);
+ return xas_result(&xas, xas_store(&xas, NULL));
+}
+EXPORT_SYMBOL_GPL(__xa_erase);
+
+/**
+ * xa_store() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * After this function returns, loads from this index will return @entry.
+ * Storing into an existing multi-index entry updates the entry of every index.
+ * The tags associated with @index are unaffected unless @entry is %NULL.
+ *
+ * Context: Process context. Takes and releases the xa_lock. May sleep
+ * if the @gfp flags permit.
+ * Return: The old entry at this index on success, xa_err(-EINVAL) if @entry
+ * cannot be stored in an XArray, or xa_err(-ENOMEM) if memory allocation
+ * failed.
+ */
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ xas_lock(&xas);
+ curr = xas_store(&xas, entry);
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(xa_store);
+
+/**
+ * __xa_store() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * You must already be holding the xa_lock when calling this function.
+ * It will drop the lock if needed to allocate memory, and then reacquire
+ * it afterwards.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry. May
+ * release and reacquire xa_lock if @gfp flags permit.
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ curr = xas_store(&xas, entry);
+ } while (__xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(__xa_store);
+
/**
* __xa_set_tag() - Set this tag on this entry while locked.
* @xa: XArray.
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 85a009001109..f900a2b7509d 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -37,4 +37,6 @@ static inline bool arch_spin_is_locked(arch_spinlock_t *mutex)
return true;
}
+#include <linux/lockdep.h>
+
#endif
diff --git a/tools/testing/radix-tree/linux/kernel.h b/tools/testing/radix-tree/linux/kernel.h
index 5d06ac75a14d..4568248222ae 100644
--- a/tools/testing/radix-tree/linux/kernel.h
+++ b/tools/testing/radix-tree/linux/kernel.h
@@ -18,4 +18,8 @@
#define pr_debug printk
#define pr_cont printk
+#define __acquires(x)
+#define __releases(x)
+#define __must_hold(x)
+
#endif /* _KERNEL_H */
diff --git a/tools/testing/radix-tree/linux/lockdep.h b/tools/testing/radix-tree/linux/lockdep.h
new file mode 100644
index 000000000000..565fccdfe6e9
--- /dev/null
+++ b/tools/testing/radix-tree/linux/lockdep.h
@@ -0,0 +1,11 @@
+#ifndef _LINUX_LOCKDEP_H
+#define _LINUX_LOCKDEP_H
+struct lock_class_key {
+ unsigned int a;
+};
+
+static inline void lockdep_set_class(spinlock_t *lock,
+ struct lock_class_key *key)
+{
+}
+#endif /* _LINUX_LOCKDEP_H */
diff --git a/tools/testing/radix-tree/linux/rcupdate.h b/tools/testing/radix-tree/linux/rcupdate.h
index 25010bf86c1d..fd280b070fdb 100644
--- a/tools/testing/radix-tree/linux/rcupdate.h
+++ b/tools/testing/radix-tree/linux/rcupdate.h
@@ -7,5 +7,6 @@
#define rcu_dereference_raw(p) rcu_dereference(p)
#define rcu_dereference_protected(p, cond) rcu_dereference(p)
#define rcu_dereference_check(p, cond) rcu_dereference(p)
+#define RCU_INIT_POINTER(p, v) (p) = (v)
#endif
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 6e1cc2040817..f151588d04a0 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -8,6 +8,38 @@
#include "test.h"
+void *xa_store_order(struct xarray *xa, unsigned long index, unsigned order,
+ void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, 0);
+ void *curr;
+
+ xas_set_order(&xas, index, order);
+ do {
+ curr = xas_store(&xas, entry);
+ } while (xas_nomem(&xas, gfp));
+
+ return curr;
+}
+
+int xa_insert_order(struct xarray *xa, unsigned long index, unsigned order,
+ void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, 0);
+ void *curr;
+
+ xas_set_order(&xas, index, order);
+ do {
+ curr = xas_create(&xas);
+ if (!curr)
+ xas_store(&xas, entry);
+ } while (xas_nomem(&xas, gfp));
+
+ if (xas_error(&xas))
+ return xas_error(&xas);
+ return curr ? -EEXIST : 0;
+}
+
struct item *
item_tag_set(struct radix_tree_root *root, unsigned long index, int tag)
{
diff --git a/tools/testing/radix-tree/test.h b/tools/testing/radix-tree/test.h
index d9c031dbeb1a..ffd162645c11 100644
--- a/tools/testing/radix-tree/test.h
+++ b/tools/testing/radix-tree/test.h
@@ -4,6 +4,11 @@
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
+void *xa_store_order(struct xarray *, unsigned long index, unsigned order,
+ void *entry, gfp_t);
+int xa_insert_order(struct xarray *, unsigned long index, unsigned order,
+ void *entry, gfp_t);
+
struct item {
unsigned long index;
unsigned int order;
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 3f8f19cb3739..5defd0b9f85c 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -19,6 +19,36 @@
#include "test.h"
+void check_xa_err(struct xarray *xa)
+{
+ assert(xa_err(xa_store(xa, 0, xa_mk_value(0), GFP_NOWAIT)) == 0);
+ assert(xa_err(xa_store(xa, 0, NULL, 0)) == 0);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_NOWAIT)) == -ENOMEM);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_NOWAIT)) == -ENOMEM);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL)) == 0);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(0), GFP_KERNEL)) == 0);
+ assert(xa_err(xa_store(xa, 1, NULL, 0)) == 0);
+// kills the test-suite :-(
+// assert(xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) == -EINVAL);
+}
+
+void check_xa_tag(struct xarray *xa)
+{
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ assert(xa_store(xa, 0, xa, GFP_KERNEL) == NULL);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == true);
+ assert(xa_get_tag(xa, 1, XA_TAG_0) == false);
+ assert(xa_store(xa, 0, NULL, GFP_KERNEL) == xa);
+ assert(xa_empty(xa));
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+}
+
void check_xa_load(struct xarray *xa)
{
unsigned long i, j;
@@ -31,16 +61,95 @@ void check_xa_load(struct xarray *xa)
else
assert(!entry);
}
- radix_tree_insert(xa, i, xa_mk_value(i));
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ }
+}
+
+void check_xa_shrink(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 1);
+ struct xa_node *node;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+
+ assert(xas_load(&xas) == xa_mk_value(1));
+ node = xas.xa_node;
+ assert(node->slots[0] == xa_mk_value(0));
+ rcu_read_lock();
+ xas_store(&xas, NULL);
+ assert(xas.xa_node == XAS_BOUNDS);
+ assert(node->slots[0] == XA_RETRY_ENTRY);
+ rcu_read_unlock();
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+}
+
+void check_multi_store(struct xarray *xa)
+{
+ unsigned long i, j, k;
+
+ xa_store_order(xa, 0, 1, xa_mk_value(0), GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+ assert(xa_load(xa, 1) == xa_mk_value(0));
+ assert(xa_load(xa, 2) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 2);
+ assert(xa_to_node(xa_head(xa))->nr_values == 2);
+
+ xa_store(xa, 3, xa, GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+ assert(xa_load(xa, 1) == xa_mk_value(0));
+ assert(xa_load(xa, 2) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 3);
+ assert(xa_to_node(xa_head(xa))->nr_values == 2);
+
+ xa_store_order(xa, 0, 2, xa_mk_value(1), GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(1));
+ assert(xa_load(xa, 1) == xa_mk_value(1));
+ assert(xa_load(xa, 2) == xa_mk_value(1));
+ assert(xa_load(xa, 3) == xa_mk_value(1));
+ assert(xa_load(xa, 4) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 4);
+ assert(xa_to_node(xa_head(xa))->nr_values == 4);
+
+ xa_store_order(xa, 0, 64, NULL, GFP_KERNEL);
+ assert(xa_empty(xa));
+
+ for (i = 0; i < 60; i++) {
+ for (j = 0; j < 60; j++) {
+ xa_store_order(xa, 0, i, xa_mk_value(i), GFP_KERNEL);
+ xa_store_order(xa, 0, j, xa_mk_value(j), GFP_KERNEL);
+
+ for (k = 0; k < 60; k++) {
+ void *entry = xa_load(xa, (1UL << k) - 1);
+ if ((i < k) && (j < k))
+ assert(entry == NULL);
+ else
+ assert(entry == xa_mk_value(j));
+ }
+
+ xa_erase(xa, 0);
+ assert(xa_empty(xa));
+ }
}
}
void xarray_checks(void)
{
- RADIX_TREE(array, GFP_KERNEL);
+ DEFINE_XARRAY(array);
+
+ check_xa_err(&array);
+ item_kill_tree(&array);
+
+ check_xa_tag(&array);
+ item_kill_tree(&array);
check_xa_load(&array);
+ item_kill_tree(&array);
+
+ check_xa_shrink(&array);
+ item_kill_tree(&array);
+ check_multi_store(&array);
item_kill_tree(&array);
}
--
2.16.1
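A usage sketch for the store interfaces added above (illustrative only, not
part of the patch): __xa_store() expects the caller to already hold the
xa_lock, and may drop and retake that lock to allocate nodes if the gfp
flags allow sleeping.

#include <linux/gfp.h>
#include <linux/xarray.h>

/* Sketch: store under an externally held xa_lock. */
static void *store_locked_sketch(struct xarray *xa, unsigned long index,
				 void *entry)
{
	void *curr;

	xa_lock(xa);
	curr = __xa_store(xa, index, entry, GFP_KERNEL);
	/* __xa_store() may have dropped and retaken xa_lock to allocate. */
	xa_unlock(xa);

	return curr;
}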
From: Matthew Wilcox <[email protected]>
This one is trivial.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/readahead.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index f64b31b3a84a..66bcaffd47f0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -174,9 +174,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
if (page_offset > end_index)
break;
- rcu_read_lock();
- page = radix_tree_lookup(&mapping->pages, page_offset);
- rcu_read_unlock();
+ page = xa_load(&mapping->pages, page_offset);
if (page && !xa_is_value(page))
continue;
--
2.16.1
From: Matthew Wilcox <[email protected]>
Quite a straightforward conversion.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/huge_memory.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4b60f55f1f8b..e0a073f0a794 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2371,7 +2371,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
if (PageAnon(head) && !PageSwapCache(head)) {
page_ref_inc(page_tail);
} else {
- /* Additional pin to radix tree */
+ /* Additional pin to page cache */
page_ref_add(page_tail, 2);
}
@@ -2442,13 +2442,13 @@ static void __split_huge_page(struct page *page, struct list_head *list,
ClearPageCompound(head);
/* See comment in __split_huge_page_tail() */
if (PageAnon(head)) {
- /* Additional pin to radix tree of swap cache */
+ /* Additional pin to swap cache */
if (PageSwapCache(head))
page_ref_add(head, 2);
else
page_ref_inc(head);
} else {
- /* Additional pin to radix tree */
+ /* Additional pin to page cache */
page_ref_add(head, 2);
xa_unlock(&head->mapping->pages);
}
@@ -2560,7 +2560,7 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
{
int extra_pins;
- /* Additional pins from radix tree */
+ /* Additional pins from page cache */
if (PageAnon(page))
extra_pins = PageSwapCache(page) ? HPAGE_PMD_NR : 0;
else
@@ -2656,17 +2656,14 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
spin_lock_irqsave(zone_lru_lock(page_zone(head)), flags);
if (mapping) {
- void **pslot;
+ XA_STATE(xas, &mapping->pages, page_index(head));
- xa_lock(&mapping->pages);
- pslot = radix_tree_lookup_slot(&mapping->pages,
- page_index(head));
/*
- * Check if the head page is present in radix tree.
+ * Check if the head page is present in page cache.
* We assume all tail are present too, if head is there.
*/
- if (radix_tree_deref_slot_protected(pslot,
- &mapping->pages.xa_lock) != head)
+ xa_lock(&mapping->pages);
+ if (xas_load(&xas) != head)
goto fail;
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
shmem_radix_tree_replace() is renamed to shmem_xa_replace() and
converted to use the XArray API.
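The new helper follows the load-then-store pattern of the advanced API
(a minimal sketch with an illustrative name; the real version is
shmem_xa_replace() in the diff below, called with the xa_lock held):

#include <linux/errno.h>
#include <linux/xarray.h>

/* Sketch: replace @expected at @index with @replacement; caller holds xa_lock. */
static int xa_replace_sketch(struct xarray *xa, unsigned long index,
			     void *expected, void *replacement)
{
	XA_STATE(xas, xa, index);

	if (xas_load(&xas) != expected)
		return -ENOENT;
	xas_store(&xas, replacement);	/* reuses the position cached in xas */
	return 0;
}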
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 2616f2d3be95..a8db3241f826 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -321,24 +321,20 @@ void shmem_uncharge(struct inode *inode, long pages)
}
/*
- * Replace item expected in radix tree by a new item, while holding tree lock.
+ * Replace item expected in xarray by a new item, while holding xa_lock.
*/
-static int shmem_radix_tree_replace(struct address_space *mapping,
+static int shmem_xa_replace(struct address_space *mapping,
pgoff_t index, void *expected, void *replacement)
{
- struct radix_tree_node *node;
- void **pslot;
+ XA_STATE(xas, &mapping->pages, index);
void *item;
VM_BUG_ON(!expected);
VM_BUG_ON(!replacement);
- item = __radix_tree_lookup(&mapping->pages, index, &node, &pslot);
- if (!item)
- return -ENOENT;
+ item = xas_load(&xas);
if (item != expected)
return -ENOENT;
- __radix_tree_replace(&mapping->pages, node, pslot,
- replacement, NULL);
+ xas_store(&xas, replacement);
return 0;
}
@@ -605,8 +601,7 @@ static int shmem_add_to_page_cache(struct page *page,
} else if (!expected) {
error = radix_tree_insert(&mapping->pages, index, page);
} else {
- error = shmem_radix_tree_replace(mapping, index, expected,
- page);
+ error = shmem_xa_replace(mapping, index, expected, page);
}
if (!error) {
@@ -635,7 +630,7 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
VM_BUG_ON_PAGE(PageCompound(page), page);
xa_lock_irq(&mapping->pages);
- error = shmem_radix_tree_replace(mapping, page->index, page, radswap);
+ error = shmem_xa_replace(mapping, page->index, page, radswap);
page->mapping = NULL;
mapping->nrpages--;
__dec_node_page_state(page, NR_FILE_PAGES);
@@ -1550,8 +1545,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
* a nice clean interface for us to replace oldpage by newpage there.
*/
xa_lock_irq(&swap_mapping->pages);
- error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
- newpage);
+ error = shmem_xa_replace(swap_mapping, swap_index, oldpage, newpage);
if (!error) {
__inc_node_page_state(newpage, NR_FILE_PAGES);
__dec_node_page_state(oldpage, NR_FILE_PAGES);
--
2.16.1
From: Matthew Wilcox <[email protected]>
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/migrate.c | 41 ++++++++++++++++-------------------------
1 file changed, 16 insertions(+), 25 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 184bc1d0e187..bb3698066d79 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -322,7 +322,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
page = migration_entry_to_page(entry);
/*
- * Once radix-tree replacement of page migration started, page_count
+ * Once page cache replacement of page migration started, page_count
* *must* be zero. And, we don't want to call wait_on_page_locked()
* against a page without get_page().
* So, we use get_page_unless_zero(), here. Even failed, page fault
@@ -437,10 +437,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
struct buffer_head *head, enum migrate_mode mode,
int extra_count)
{
+ XA_STATE(xas, &mapping->pages, page_index(page));
struct zone *oldzone, *newzone;
int dirty;
int expected_count = 1 + extra_count;
- void **pslot;
/*
* Device public or private pages have an extra refcount as they are
@@ -466,21 +466,16 @@ int migrate_page_move_mapping(struct address_space *mapping,
oldzone = page_zone(page);
newzone = page_zone(newpage);
- xa_lock_irq(&mapping->pages);
-
- pslot = radix_tree_lookup_slot(&mapping->pages,
- page_index(page));
+ xas_lock_irq(&xas);
expected_count += 1 + page_has_private(page);
- if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot,
- &mapping->pages.xa_lock) != page) {
- xa_unlock_irq(&mapping->pages);
+ if (page_count(page) != expected_count || xas_load(&xas) != page) {
+ xas_unlock_irq(&xas);
return -EAGAIN;
}
if (!page_ref_freeze(page, expected_count)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}
@@ -494,7 +489,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
if (mode == MIGRATE_ASYNC && head &&
!buffer_migrate_lock_buffers(head, mode)) {
page_ref_unfreeze(page, expected_count);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}
@@ -522,7 +517,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
SetPageDirty(newpage);
}
- radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+ xas_store(&xas, newpage);
/*
* Drop cache reference from old page by unfreezing
@@ -531,7 +526,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
*/
page_ref_unfreeze(page, expected_count - 1);
- xa_unlock(&mapping->pages);
+ xas_unlock(&xas);
/* Leave irq disabled to prevent preemption while updating stats */
/*
@@ -571,22 +566,18 @@ EXPORT_SYMBOL(migrate_page_move_mapping);
int migrate_huge_page_move_mapping(struct address_space *mapping,
struct page *newpage, struct page *page)
{
+ XA_STATE(xas, &mapping->pages, page_index(page));
int expected_count;
- void **pslot;
-
- xa_lock_irq(&mapping->pages);
-
- pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
+ xas_lock_irq(&xas);
expected_count = 2 + page_has_private(page);
- if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
- xa_unlock_irq(&mapping->pages);
+ if (page_count(page) != expected_count || xas_load(&xas) != page) {
+ xas_unlock_irq(&xas);
return -EAGAIN;
}
if (!page_ref_freeze(page, expected_count)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}
@@ -595,11 +586,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
get_page(newpage);
- radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+ xas_store(&xas, newpage);
page_ref_unfreeze(page, expected_count - 1);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return MIGRATEPAGE_SUCCESS;
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
Both callers of __delete_from_swap_cache have the swp_entry_t already,
so pass that in to make constructing the XA_STATE easier.
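In outline, the deletion loop becomes the following (a sketch only, with an
illustrative name; the real code is __delete_from_swap_cache() in the diff
below):

#include <linux/fs.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/xarray.h>

/* Sketch: erase @nr consecutive swap cache slots starting at @entry;
 * caller holds the xa_lock of the swap address_space. */
static void delete_swap_range_sketch(struct address_space *mapping,
				     swp_entry_t entry, int nr)
{
	XA_STATE(xas, &mapping->pages, swp_offset(entry));
	int i;

	for (i = 0; i < nr; i++) {
		xas_store(&xas, NULL);	/* erase this slot */
		xas_next(&xas);		/* advance to the next index */
	}
}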
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 5 +++--
mm/swap_state.c | 24 ++++++++++--------------
mm/vmscan.c | 2 +-
3 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5933f02a3219..8c6f46797b82 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -406,7 +406,7 @@ extern void show_swap_cache_info(void);
extern int add_to_swap(struct page *page);
extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
-extern void __delete_from_swap_cache(struct page *);
+extern void __delete_from_swap_cache(struct page *, swp_entry_t entry);
extern void delete_from_swap_cache(struct page *);
extern void free_page_and_swap_cache(struct page *);
extern void free_pages_and_swap_cache(struct page **, int);
@@ -581,7 +581,8 @@ static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
return -1;
}
-static inline void __delete_from_swap_cache(struct page *page)
+static inline void __delete_from_swap_cache(struct page *page,
+ swp_entry_t entry)
{
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a57b5ad4c503..219e3b4f09e6 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -154,23 +154,22 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
* This must be called only on pages that have
* been verified to be in the swap cache.
*/
-void __delete_from_swap_cache(struct page *page)
+void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
{
- struct address_space *address_space;
+ struct address_space *address_space = swap_address_space(entry);
int i, nr = hpage_nr_pages(page);
- swp_entry_t entry;
- pgoff_t idx;
+ pgoff_t idx = swp_offset(entry);
+ XA_STATE(xas, &address_space->pages, idx);
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(!PageSwapCache(page), page);
VM_BUG_ON_PAGE(PageWriteback(page), page);
- entry.val = page_private(page);
- address_space = swap_address_space(entry);
- idx = swp_offset(entry);
for (i = 0; i < nr; i++) {
- radix_tree_delete(&address_space->pages, idx + i);
+ void *entry = xas_store(&xas, NULL);
+ VM_BUG_ON_PAGE(entry != page + i, entry);
set_page_private(page + i, 0);
+ xas_next(&xas);
}
ClearPageSwapCache(page);
address_space->nrpages -= nr;
@@ -246,14 +245,11 @@ int add_to_swap(struct page *page)
*/
void delete_from_swap_cache(struct page *page)
{
- swp_entry_t entry;
- struct address_space *address_space;
-
- entry.val = page_private(page);
+ swp_entry_t entry = { .val = page_private(page) };
+ struct address_space *address_space = swap_address_space(entry);
- address_space = swap_address_space(entry);
xa_lock_irq(&address_space->pages);
- __delete_from_swap_cache(page);
+ __delete_from_swap_cache(page, entry);
xa_unlock_irq(&address_space->pages);
put_swap_page(page, entry);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 93f4b4634431..3eb9f7c732bb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -697,7 +697,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
if (PageSwapCache(page)) {
swp_entry_t swap = { .val = page_private(page) };
mem_cgroup_swapout(page, swap);
- __delete_from_swap_cache(page);
+ __delete_from_swap_cache(page, swap);
xa_unlock_irqrestore(&mapping->pages, flags);
put_swap_page(page, swap);
} else {
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is essentially xa_cmpxchg() with the locking handled above us,
and it doesn't have to handle replacing a NULL entry.
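That is, the operation reduces to a load followed by a conditional store
(a minimal sketch with an illustrative name; the real code is
__clear_shadow_entry() in the diff below, and the workingset node accounting
is omitted here):

#include <linux/xarray.h>

/* Sketch: clear @index only if it still holds @entry; caller holds xa_lock. */
static void clear_if_matches_sketch(struct xarray *xa, unsigned long index,
				    void *entry)
{
	XA_STATE(xas, xa, index);

	if (xas_load(&xas) != entry)
		return;
	xas_store(&xas, NULL);	/* storing NULL never needs to allocate */
}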
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/truncate.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index 3fe1e7461684..9267c1a12a31 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -33,15 +33,12 @@
static inline void __clear_shadow_entry(struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_node *node;
- void **slot;
+ XA_STATE(xas, &mapping->pages, index);
- if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
+ xas_set_update(&xas, workingset_update_node);
+ if (xas_load(&xas) != entry)
return;
- if (*slot != entry)
- return;
- __radix_tree_replace(&mapping->pages, node, slot, NULL,
- workingset_update_node);
+ xas_store(&xas, NULL);
mapping->nrexceptional--;
}
@@ -738,10 +735,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
index++;
}
/*
- * For DAX we invalidate page tables after invalidating radix tree. We
+ * For DAX we invalidate page tables after invalidating page cache. We
* could invalidate page tables while invalidating each entry however
* that would be expensive. And doing range unmapping before doesn't
- * work as we have no cheap way to find whether radix tree entry didn't
+ * work as we have no cheap way to find whether page cache entry didn't
* get remapped later.
*/
if (dax_mapping(mapping)) {
--
2.16.1
From: Matthew Wilcox <[email protected]>
This first function in the XArray API brings with it a lot of support
infrastructure. The advanced API is based around the xa_state which is
a more capable version of the radix_tree_iter.
As the test suite demonstrates, it is possible to use the XArray and
radix tree APIs on the same data structure.
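To give a feel for the advanced API (a sketch only, not part of the patch):
an xa_state is declared on the stack and driven explicitly, with the caller
responsible for locking and for restarting if it races with a modification.
This is essentially what xa_load() below does internally:

#include <linux/xarray.h>

/* Sketch: a caller of the advanced API. */
static void *load_under_rcu_sketch(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);	/* cursor, declared on the stack */
	void *entry;

	rcu_read_lock();
	do {
		entry = xas_load(&xas);		/* walk to the right slot */
	} while (xas_retry(&xas, entry));	/* restart if we saw a retry entry */
	rcu_read_unlock();

	return entry;
}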
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 304 ++++++++++++++++++++++++++++
lib/radix-tree.c | 43 ----
lib/xarray.c | 191 +++++++++++++++++
tools/testing/radix-tree/.gitignore | 1 +
tools/testing/radix-tree/Makefile | 7 +-
tools/testing/radix-tree/linux/kernel.h | 1 +
tools/testing/radix-tree/linux/radix-tree.h | 1 -
tools/testing/radix-tree/linux/rcupdate.h | 1 +
tools/testing/radix-tree/linux/xarray.h | 1 +
tools/testing/radix-tree/xarray-test.c | 56 +++++
10 files changed, 560 insertions(+), 46 deletions(-)
create mode 100644 tools/testing/radix-tree/xarray-test.c
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index b51f354dfbf0..5845187c1ce8 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -12,6 +12,8 @@
#include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/kconfig.h>
+#include <linux/kernel.h>
+#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>
@@ -30,6 +32,10 @@
*
* 0-62: Sibling entries
* 256: Retry entry
+ *
+ * Errors are also represented as internal entries, but use the negative
+ * space (-4094 to -2). They're never stored in the slots array; only
+ * returned by the normal API.
*/
#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
@@ -107,6 +113,42 @@ static inline bool xa_is_internal(const void *entry)
return ((unsigned long)entry & 3) == 2;
}
+/**
+ * xa_is_err() - Report whether an XArray operation returned an error
+ * @entry: Result from calling an XArray function
+ *
+ * If an XArray operation cannot complete an operation, it will return
+ * a special value indicating an error. This function tells you
+ * whether an error occurred; xa_err() tells you which error occurred.
+ *
+ * Context: Any context.
+ * Return: %true if the entry indicates an error.
+ */
+static inline bool xa_is_err(const void *entry)
+{
+ return unlikely(xa_is_internal(entry));
+}
+
+/**
+ * xa_err() - Turn an XArray result into an errno.
+ * @entry: Result from calling an XArray function.
+ *
+ * If an XArray operation cannot complete an operation, it will return
+ * a special pointer value which encodes an errno. This function extracts
+ * the errno from the pointer value, or returns 0 if the pointer does not
+ * represent an errno.
+ *
+ * Context: Any context.
+ * Return: A negative errno or 0.
+ */
+static inline int xa_err(void *entry)
+{
+ /* xa_to_internal() would not do sign extension. */
+ if (xa_is_err(entry))
+ return (long)entry >> 2;
+ return 0;
+}
+
/**
* struct xarray - The anchor of the XArray.
* @xa_lock: Lock that protects the contents of the XArray.
@@ -152,6 +194,7 @@ struct xarray {
struct xarray name = XARRAY_INIT_FLAGS(name, flags)
void xa_init_flags(struct xarray *, gfp_t flags);
+void *xa_load(struct xarray *, unsigned long index);
/**
* xa_init() - Initialise an empty XArray.
@@ -220,6 +263,62 @@ struct xa_node {
unsigned long tags[XA_MAX_TAGS][XA_TAG_LONGS];
};
+#ifdef XA_DEBUG
+void xa_dump(const struct xarray *);
+void xa_dump_node(const struct xa_node *);
+#define XA_BUG_ON(xa, x) do { \
+ if (x) \
+ xa_dump(xa); \
+ BUG_ON(x); \
+ } while (0)
+#define XA_NODE_BUG_ON(node, x) do { \
+ if ((x) && (node)) \
+ xa_dump_node(node); \
+ BUG_ON(x); \
+ } while (0)
+#else
+#define XA_BUG_ON(xa, x) do { } while (0)
+#define XA_NODE_BUG_ON(node, x) do { } while (0)
+#endif
+
+/* Private */
+static inline void *xa_head(struct xarray *xa)
+{
+ return rcu_dereference_check(xa->xa_head,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_head_locked(struct xarray *xa)
+{
+ return rcu_dereference_protected(xa->xa_head,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_entry(struct xarray *xa,
+ const struct xa_node *node, unsigned int offset)
+{
+ XA_NODE_BUG_ON(node, offset >= XA_CHUNK_SIZE);
+ return rcu_dereference_check(node->slots[offset],
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_entry_locked(struct xarray *xa,
+ const struct xa_node *node, unsigned int offset)
+{
+ XA_NODE_BUG_ON(node, offset >= XA_CHUNK_SIZE);
+ return rcu_dereference_protected(node->slots[offset],
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline struct xa_node *xa_to_node(const void *entry)
+{
+ return (struct xa_node *)((unsigned long)entry - 2);
+}
+
/* Private */
static inline bool xa_is_node(const void *entry)
{
@@ -253,4 +352,209 @@ static inline bool xa_is_sibling(const void *entry)
#define XA_RETRY_ENTRY xa_mk_internal(256)
+/**
+ * xa_is_retry() - Is the entry a retry entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * Return: %true if the entry is a retry entry.
+ */
+static inline bool xa_is_retry(const void *entry)
+{
+ return unlikely(entry == XA_RETRY_ENTRY);
+}
+
+/**
+ * typedef xa_update_node_t - A callback function from the XArray.
+ * @node: The node which is being processed
+ *
+ * This function is called every time the XArray updates the count of
+ * present and value entries in a node. It allows advanced users to
+ * maintain the private_list in the node.
+ *
+ * Context: The xa_lock is held and interrupts may be disabled.
+ * Implementations should not drop the xa_lock, nor re-enable
+ * interrupts.
+ */
+typedef void (*xa_update_node_t)(struct xa_node *node);
+
+/*
+ * The xa_state is opaque to its users. It contains various different pieces
+ * of state involved in the current operation on the XArray. It should be
+ * declared on the stack and passed between the various internal routines.
+ * The various elements in it should not be accessed directly, but only
+ * through the provided accessor functions. The below documentation is for
+ * the benefit of those working on the code, not for users of the XArray.
+ *
+ * @xa_node usually points to the xa_node containing the slot we're operating
+ * on (and @xa_offset is the offset in the slots array). If there is a
+ * single entry in the array at index 0, there are no allocated xa_nodes to
+ * point to, and so we store %NULL in @xa_node. @xa_node is set to
+ * the value %XAS_RESTART if the xa_state is not walked to the correct
+ * position in the tree of nodes for this operation. If an error occurs
+ * during an operation, it is set to an %XAS_ERROR value. If we run off the
+ * end of the allocated nodes, it is set to %XAS_BOUNDS.
+ */
+struct xa_state {
+ struct xarray *xa;
+ unsigned long xa_index;
+ unsigned char xa_shift;
+ unsigned char xa_sibs;
+ unsigned char xa_offset;
+ unsigned char xa_pad; /* Helps gcc generate better code */
+ struct xa_node *xa_node;
+ struct xa_node *xa_alloc;
+ xa_update_node_t xa_update;
+};
+
+/*
+ * We encode errnos in the xas->xa_node. If an error has happened, we need to
+ * drop the lock to fix it, and once we've done so the xa_state is invalid.
+ */
+#define XA_ERROR(errno) ((struct xa_node *)(((long)errno << 2) | 2UL))
+#define XAS_BOUNDS ((struct xa_node *)1UL)
+#define XAS_RESTART ((struct xa_node *)3UL)
+
+#define __XA_STATE(array, index) { \
+ .xa = array, \
+ .xa_index = index, \
+ .xa_shift = 0, \
+ .xa_sibs = 0, \
+ .xa_offset = 0, \
+ .xa_pad = 0, \
+ .xa_node = XAS_RESTART, \
+ .xa_alloc = NULL, \
+ .xa_update = NULL \
+}
+
+/**
+ * XA_STATE() - Declare an XArray operation state.
+ * @name: Name of this operation state (usually xas).
+ * @array: Array to operate on.
+ * @index: Initial index of interest.
+ *
+ * Declare and initialise an xa_state on the stack.
+ */
+#define XA_STATE(name, array, index) \
+ struct xa_state name = __XA_STATE(array, index)
+
+#define xas_tagged(xas, tag) xa_tagged((xas)->xa, (tag))
+#define xas_trylock(xas) xa_trylock((xas)->xa)
+#define xas_lock(xas) xa_lock((xas)->xa)
+#define xas_unlock(xas) xa_unlock((xas)->xa)
+#define xas_lock_bh(xas) xa_lock_bh((xas)->xa)
+#define xas_unlock_bh(xas) xa_unlock_bh((xas)->xa)
+#define xas_lock_irq(xas) xa_lock_irq((xas)->xa)
+#define xas_unlock_irq(xas) xa_unlock_irq((xas)->xa)
+#define xas_lock_irqsave(xas, flags) \
+ xa_lock_irqsave((xas)->xa, flags)
+#define xas_unlock_irqrestore(xas, flags) \
+ xa_unlock_irqrestore((xas)->xa, flags)
+
+/**
+ * xas_error() - Return an errno stored in the xa_state.
+ * @xas: XArray operation state.
+ *
+ * Return: 0 if no error has been noted. A negative errno if one has.
+ */
+static inline int xas_error(const struct xa_state *xas)
+{
+ return xa_err(xas->xa_node);
+}
+
+/**
+ * xas_set_err() - Note an error in the xa_state.
+ * @xas: XArray operation state.
+ * @err: Negative error number.
+ *
+ * Only call this function with a negative @err; zero or positive errors
+ * will probably not behave the way you think they should. If you want
+ * to clear the error from an xa_state, use xas_reset().
+ */
+static inline void xas_set_err(struct xa_state *xas, long err)
+{
+ xas->xa_node = XA_ERROR(err);
+}
+
+/**
+ * xas_invalid() - Is the xas in a retry or error state?
+ * @xas: XArray operation state.
+ *
+ * Return: %true if the xas cannot be used for operations.
+ */
+static inline bool xas_invalid(const struct xa_state *xas)
+{
+ return (unsigned long)xas->xa_node & 3;
+}
+
+/**
+ * xas_valid() - Is the xas a valid cursor into the array?
+ * @xas: XArray operation state.
+ *
+ * Return: %true if the xas can be used for operations.
+ */
+static inline bool xas_valid(const struct xa_state *xas)
+{
+ return !xas_invalid(xas);
+}
+
+/**
+ * xas_reset() - Reset an XArray operation state.
+ * @xas: XArray operation state.
+ *
+ * Resets the error or walk state of the @xas so future walks of the
+ * array will start from the root. Use this if you have dropped the
+ * xarray lock and want to reuse the xa_state.
+ *
+ * Context: Any context.
+ */
+static inline void xas_reset(struct xa_state *xas)
+{
+ xas->xa_node = XAS_RESTART;
+}
+
+/**
+ * xas_retry() - Handle a retry entry.
+ * @xas: XArray operation state.
+ * @entry: Entry from xarray.
+ *
+ * An RCU-protected read may see a retry entry as a side-effect of a
+ * simultaneous modification. This function sets up the @xas to retry
+ * the walk from the head of the array.
+ *
+ * Context: Any context.
+ * Return: true if the operation needs to be retried.
+ */
+static inline bool xas_retry(struct xa_state *xas, const void *entry)
+{
+ if (!xa_is_retry(entry))
+ return false;
+ xas_reset(xas);
+ return true;
+}
+
+void *xas_load(struct xa_state *);
+
+/**
+ * xas_reload() - Refetch an entry from the xarray.
+ * @xas: XArray operation state.
+ *
+ * Use this function to check that a previously loaded entry still has
+ * the same value. This is useful for the lockless pagecache lookup where
+ * we walk the array with only the RCU lock to protect us, lock the page,
+ * then check that the page hasn't moved since we looked it up.
+ *
+ * The caller guarantees that @xas is still valid. If it may be in an
+ * error or restart state, call xas_load() instead.
+ *
+ * Return: The entry at this location in the xarray.
+ */
+static inline void *xas_reload(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (node)
+ return xa_entry(xas->xa, node, xas->xa_offset);
+ return xa_head(xas->xa);
+}
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index ed3e8d641cba..d3cb26104589 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -256,49 +256,6 @@ static unsigned long next_index(unsigned long index,
}
#ifndef __KERNEL__
-static void dump_node(struct radix_tree_node *node, unsigned long index)
-{
- unsigned long i;
-
- pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d nr_values %d\n",
- node, node->offset, index, index | node_maxindex(node),
- node->parent,
- node->tags[0][0], node->tags[1][0], node->tags[2][0],
- node->shift, node->count, node->nr_values);
-
- for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
- unsigned long first = index | (i << node->shift);
- unsigned long last = first | ((1UL << node->shift) - 1);
- void *entry = node->slots[i];
- if (!entry)
- continue;
- if (entry == RADIX_TREE_RETRY) {
- pr_debug("radix retry offset %ld indices %lu-%lu parent %p\n",
- i, first, last, node);
- } else if (!radix_tree_is_internal_node(entry)) {
- pr_debug("radix entry %p offset %ld indices %lu-%lu parent %p\n",
- entry, i, first, last, node);
- } else if (xa_is_sibling(entry)) {
- pr_debug("radix sblng %p offset %ld indices %lu-%lu parent %p val %p\n",
- entry, i, first, last, node,
- node->slots[xa_to_sibling(entry)]);
- } else {
- dump_node(entry_to_node(entry), first);
- }
- }
-}
-
-/* For debug */
-static void radix_tree_dump(struct radix_tree_root *root)
-{
- pr_debug("radix root: %p xa_head %p tags %x\n",
- root, root->xa_head,
- root->xa_flags >> ROOT_TAG_SHIFT);
- if (!radix_tree_is_internal_node(root->xa_head))
- return;
- dump_node(entry_to_node(root->xa_head), 0);
-}
-
static void dump_ida_node(void *entry, unsigned long index)
{
unsigned long i;
diff --git a/lib/xarray.c b/lib/xarray.c
index 382458f602cc..195cb130d53d 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -24,6 +24,100 @@
* @entry refers to something stored in a slot in the xarray
*/
+/* extracts the offset within this node from the index */
+static unsigned int get_offset(unsigned long index, struct xa_node *node)
+{
+ return (index >> node->shift) & XA_CHUNK_MASK;
+}
+
+/* move the index either forwards (find) or backwards (sibling slot) */
+static void xas_move_index(struct xa_state *xas, unsigned long offset)
+{
+ unsigned int shift = xas->xa_node->shift;
+ xas->xa_index &= ~XA_CHUNK_MASK << shift;
+ xas->xa_index += offset << shift;
+}
+
+static void *set_bounds(struct xa_state *xas)
+{
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+
+/*
+ * Starts a walk. If the @xas is already valid, we assume that it's on
+ * the right path and just return where we've got to. If we're in an
+ * error state, return NULL. If the index is outside the current scope
+ * of the xarray, return NULL without changing @xas->xa_node. Otherwise
+ * set @xas->xa_node to NULL and return the current head of the array.
+ */
+static void *xas_start(struct xa_state *xas)
+{
+ void *entry;
+
+ if (xas_valid(xas))
+ return xas_reload(xas);
+ if (xas_error(xas))
+ return NULL;
+
+ entry = xa_head(xas->xa);
+ if (!xa_is_node(entry)) {
+ if (xas->xa_index)
+ return set_bounds(xas);
+ } else {
+ if ((xas->xa_index >> xa_to_node(entry)->shift) > XA_CHUNK_MASK)
+ return set_bounds(xas);
+ }
+
+ xas->xa_node = NULL;
+ return entry;
+}
+
+static void *xas_descend(struct xa_state *xas, struct xa_node *node)
+{
+ unsigned int offset = get_offset(xas->xa_index, node);
+ void *entry = xa_entry(xas->xa, node, offset);
+
+ xas->xa_node = node;
+ if (xa_is_sibling(entry)) {
+ offset = xa_to_sibling(entry);
+ entry = xa_entry(xas->xa, node, offset);
+ xas_move_index(xas, offset);
+ }
+
+ xas->xa_offset = offset;
+ return entry;
+}
+
+/**
+ * xas_load() - Load an entry from the XArray (advanced).
+ * @xas: XArray operation state.
+ *
+ * Usually walks the @xas to the appropriate state to load the entry stored
+ * at xa_index. However, it will do nothing and return NULL if @xas is
+ * holding an error. If the xa_shift indicates we're operating on a
+ * multislot entry, it will terminate early and potentially return an
+ * internal entry. xas_load() will never expand the tree (see xas_create()).
+ *
+ * The caller should hold the xa_lock or the RCU lock.
+ *
+ * Return: Usually an entry in the XArray, but see description for exceptions.
+ */
+void *xas_load(struct xa_state *xas)
+{
+ void *entry = xas_start(xas);
+
+ while (xa_is_node(entry)) {
+ struct xa_node *node = xa_to_node(entry);
+
+ if (xas->xa_shift > node->shift)
+ break;
+ entry = xas_descend(xas, node);
+ }
+ return entry;
+}
+EXPORT_SYMBOL_GPL(xas_load);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -42,3 +136,100 @@ void xa_init_flags(struct xarray *xa, gfp_t flags)
xa->xa_head = NULL;
}
EXPORT_SYMBOL(xa_init_flags);
+
+/**
+ * xa_load() - Load an entry from an XArray.
+ * @xa: XArray.
+ * @index: index into array.
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ * Return: The entry at @index in @xa.
+ */
+void *xa_load(struct xarray *xa, unsigned long index)
+{
+ XA_STATE(xas, xa, index);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ entry = xas_load(&xas);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ return entry;
+}
+EXPORT_SYMBOL(xa_load);
+
+#ifdef XA_DEBUG
+void xa_dump_node(const struct xa_node *node)
+{
+ unsigned i, j;
+
+ if (!node)
+ return;
+ if ((unsigned long)node & 3) {
+ pr_cont("node %px\n", node);
+ return;
+ }
+
+ pr_cont("node %px %s %d parent %px shift %d count %d values %d "
+ "array %px list %px %px tags",
+ node, node->parent ? "offset" : "max", node->offset,
+ node->parent, node->shift, node->count, node->nr_values,
+ node->array, node->private_list.prev, node->private_list.next);
+ for (i = 0; i < XA_MAX_TAGS; i++)
+ for (j = 0; j < XA_TAG_LONGS; j++)
+ pr_cont(" %lx", node->tags[i][j]);
+ pr_cont("\n");
+}
+
+void xa_dump_index(unsigned long index, unsigned int shift)
+{
+ if (!shift)
+ pr_info("%lu: ", index);
+ else if (shift >= BITS_PER_LONG)
+ pr_info("0-%lu: ", ~0UL);
+ else
+ pr_info("%lu-%lu: ", index, index | ((1UL << shift) - 1));
+}
+
+void xa_dump_entry(const void *entry, unsigned long index, unsigned long shift)
+{
+ if (!entry)
+ return;
+
+ xa_dump_index(index, shift);
+
+ if (xa_is_node(entry)) {
+ unsigned long i;
+ struct xa_node *node = xa_to_node(entry);
+ xa_dump_node(node);
+ for (i = 0; i < XA_CHUNK_SIZE; i++)
+ xa_dump_entry(node->slots[i],
+ index + (i << node->shift), node->shift);
+ } else if (xa_is_value(entry))
+ pr_cont("value %ld (0x%lx)\n", xa_to_value(entry),
+ xa_to_value(entry));
+ else if (!xa_is_internal(entry))
+ pr_cont("%px\n", entry);
+ else if (xa_is_retry(entry))
+ pr_cont("retry (%ld)\n", xa_to_internal(entry));
+ else if (xa_is_sibling(entry))
+ pr_cont("sibling (slot %ld)\n", xa_to_sibling(entry));
+ else
+ pr_cont("UNKNOWN ENTRY (%px)\n", entry);
+}
+
+void xa_dump(const struct xarray *xa)
+{
+ void *entry = xa->xa_head;
+ unsigned int shift = 0;
+
+ pr_info("xarray: %px head %px flags %x tags %d %d %d\n", xa, entry,
+ xa->xa_flags, xa_tagged(xa, XA_TAG_0),
+ xa_tagged(xa, XA_TAG_1), xa_tagged(xa, XA_TAG_2));
+ if (xa_is_node(entry))
+ shift = xa_to_node(entry)->shift + XA_CHUNK_SHIFT;
+ xa_dump_entry(entry, 0, shift);
+}
+#endif
diff --git a/tools/testing/radix-tree/.gitignore b/tools/testing/radix-tree/.gitignore
index 8d4df7a72a8e..833136896b91 100644
--- a/tools/testing/radix-tree/.gitignore
+++ b/tools/testing/radix-tree/.gitignore
@@ -5,3 +5,4 @@ main
multiorder
radix-tree.c
xarray.c
+xarray-test
diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
index 3868bc189199..951a8fbf15bd 100644
--- a/tools/testing/radix-tree/Makefile
+++ b/tools/testing/radix-tree/Makefile
@@ -3,10 +3,11 @@
CFLAGS += -I. -I../../include -g -O2 -Wall -D_LGPL_SOURCE -fsanitize=address
LDFLAGS += -fsanitize=address
LDLIBS+= -lpthread -lurcu
-TARGETS = main idr-test multiorder
+TARGETS = main idr-test multiorder xarray-test
CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o
OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \
- tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o
+ tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o \
+ xarray-test.o
ifndef SHIFT
SHIFT=3
@@ -23,6 +24,8 @@ main: $(OFILES)
idr-test: idr-test.o $(CORE_OFILES)
+xarray-test: $(CORE_OFILES)
+
multiorder: multiorder.o $(CORE_OFILES)
clean:
diff --git a/tools/testing/radix-tree/linux/kernel.h b/tools/testing/radix-tree/linux/kernel.h
index 426f32f28547..5d06ac75a14d 100644
--- a/tools/testing/radix-tree/linux/kernel.h
+++ b/tools/testing/radix-tree/linux/kernel.h
@@ -14,6 +14,7 @@
#include "../../../include/linux/kconfig.h"
#define printk printf
+#define pr_info printk
#define pr_debug printk
#define pr_cont printk
diff --git a/tools/testing/radix-tree/linux/radix-tree.h b/tools/testing/radix-tree/linux/radix-tree.h
index de3f655caca3..24f13d27a8da 100644
--- a/tools/testing/radix-tree/linux/radix-tree.h
+++ b/tools/testing/radix-tree/linux/radix-tree.h
@@ -4,7 +4,6 @@
#include "generated/map-shift.h"
#include "../../../../include/linux/radix-tree.h"
-#include <linux/xarray.h>
extern int kmalloc_verbose;
extern int test_verbose;
diff --git a/tools/testing/radix-tree/linux/rcupdate.h b/tools/testing/radix-tree/linux/rcupdate.h
index 73ed33658203..25010bf86c1d 100644
--- a/tools/testing/radix-tree/linux/rcupdate.h
+++ b/tools/testing/radix-tree/linux/rcupdate.h
@@ -6,5 +6,6 @@
#define rcu_dereference_raw(p) rcu_dereference(p)
#define rcu_dereference_protected(p, cond) rcu_dereference(p)
+#define rcu_dereference_check(p, cond) rcu_dereference(p)
#endif
diff --git a/tools/testing/radix-tree/linux/xarray.h b/tools/testing/radix-tree/linux/xarray.h
index df3812cda376..3eaf9596c2a6 100644
--- a/tools/testing/radix-tree/linux/xarray.h
+++ b/tools/testing/radix-tree/linux/xarray.h
@@ -1,2 +1,3 @@
#include "generated/map-shift.h"
+#define XA_DEBUG
#include "../../../../include/linux/xarray.h"
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
new file mode 100644
index 000000000000..3f8f19cb3739
--- /dev/null
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -0,0 +1,56 @@
+/*
+ * xarray-test.c: Test the XArray API
+ * Copyright (c) 2017 Microsoft Corporation <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/bitmap.h>
+#include <linux/xarray.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+
+#include "test.h"
+
+void check_xa_load(struct xarray *xa)
+{
+ unsigned long i, j;
+
+ for (i = 0; i < 1024; i++) {
+ for (j = 0; j < 1024; j++) {
+ void *entry = xa_load(xa, j);
+ if (j < i)
+ assert(xa_to_value(entry) == j);
+ else
+ assert(!entry);
+ }
+ radix_tree_insert(xa, i, xa_mk_value(i));
+ }
+}
+
+void xarray_checks(void)
+{
+ RADIX_TREE(array, GFP_KERNEL);
+
+ check_xa_load(&array);
+
+ item_kill_tree(&array);
+}
+
+int __weak main(void)
+{
+ radix_tree_init();
+ xarray_checks();
+ radix_tree_cpu_dead(1);
+ rcu_barrier();
+ if (nr_allocated)
+ printf("nr_allocated = %d\n", nr_allocated);
+ return 0;
+}
--
2.16.1
From: Matthew Wilcox <[email protected]>
We construct a fake XA_STATE and use it to delete the node with xa_store()
rather than adding a special function for this unique use case.
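For clarity, the heart of the converted shadow_lru_isolate() reads like this without the diff markers (a sketch only; variables and locking are exactly as in the hunk below):

    XA_STATE(xas, NULL, 0);

    /* Point the state at the node's slot in its parent by hand. */
    xas.xa = node->array;
    xas.xa_node = rcu_dereference_protected(node->parent,
                    lockdep_is_held(&mapping->pages.xa_lock));
    xas.xa_offset = node->offset;
    xas.xa_update = workingset_update_node;

    /* Storing NULL through the fake state frees the now-empty node. */
    xas_store(&xas, NULL);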
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 9 ---------
mm/workingset.c | 51 ++++++++++++++++++++++-----------------------------
2 files changed, 22 insertions(+), 38 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index c306e14b5ab1..5933f02a3219 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -306,15 +306,6 @@ void workingset_update_node(struct xa_node *node);
xas_set_update(xas, workingset_update_node); \
} while (0)
-/* Returns workingset_update_node() if the mapping has shadow entries. */
-#define workingset_lookup_update(mapping) \
-({ \
- radix_tree_update_node_t __helper = workingset_update_node; \
- if (dax_mapping(mapping) || shmem_mapping(mapping)) \
- __helper = NULL; \
- __helper; \
-})
-
/* linux/mm/page_alloc.c */
extern unsigned long totalram_pages;
extern unsigned long totalreserve_pages;
diff --git a/mm/workingset.c b/mm/workingset.c
index 91b6e16ad4c1..f7ca6ea5d8b1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -148,7 +148,7 @@
* and activations is maintained (node->inactive_age).
*
* On eviction, a snapshot of this counter (along with some bits to
- * identify the node) is stored in the now empty page cache radix tree
+ * identify the node) is stored in the now empty page cache
* slot of the evicted page. This is called a shadow entry.
*
* On cache misses for which there are shadow entries, an eligible
@@ -162,7 +162,7 @@
/*
* Eviction timestamps need to be able to cover the full range of
- * actionable refaults. However, bits are tight in the radix tree
+ * actionable refaults. However, bits are tight in the xarray
* entry, and after storing the identifier for the lruvec there might
* not be enough left to represent every single actionable refault. In
* that case, we have to sacrifice granularity for distance, and group
@@ -338,7 +338,7 @@ void workingset_activation(struct page *page)
static struct list_lru shadow_nodes;
-void workingset_update_node(struct radix_tree_node *node)
+void workingset_update_node(struct xa_node *node)
{
/*
* Track non-empty nodes that contain only shadow entries;
@@ -370,7 +370,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
local_irq_enable();
/*
- * Approximate a reasonable limit for the radix tree nodes
+ * Approximate a reasonable limit for the nodes
* containing shadow entries. We don't need to keep more
* shadow entries than possible pages on the active list,
* since refault distances bigger than that are dismissed.
@@ -385,11 +385,11 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
* worst-case density of 1/8th. Below that, not all eligible
* refaults can be detected anymore.
*
- * On 64-bit with 7 radix_tree_nodes per page and 64 slots
+ * On 64-bit with 7 xa_nodes per page and 64 slots
* each, this will reclaim shadow entries when they consume
* ~1.8% of available memory:
*
- * PAGE_SIZE / radix_tree_nodes / node_entries * 8 / PAGE_SIZE
+ * PAGE_SIZE / xa_nodes / node_entries * 8 / PAGE_SIZE
*/
if (sc->memcg) {
cache = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid,
@@ -398,7 +398,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
cache = node_page_state(NODE_DATA(sc->nid), NR_ACTIVE_FILE) +
node_page_state(NODE_DATA(sc->nid), NR_INACTIVE_FILE);
}
- max_nodes = cache >> (RADIX_TREE_MAP_SHIFT - 3);
+ max_nodes = cache >> (XA_CHUNK_SHIFT - 3);
if (nodes <= max_nodes)
return 0;
@@ -408,11 +408,11 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
static enum lru_status shadow_lru_isolate(struct list_head *item,
struct list_lru_one *lru,
spinlock_t *lru_lock,
- void *arg)
+ void *arg) __must_hold(lru_lock)
{
+ XA_STATE(xas, NULL, 0);
struct address_space *mapping;
- struct radix_tree_node *node;
- unsigned int i;
+ struct xa_node *node;
int ret;
/*
@@ -420,7 +420,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* the shadow node LRU under the mapping->pages.xa_lock and the
* lru_lock. Because the page cache tree is emptied before
* the inode can be destroyed, holding the lru_lock pins any
- * address_space that has radix tree nodes on the LRU.
+ * address_space that has nodes on the LRU.
*
* We can then safely transition to the mapping->pages.xa_lock to
* pin only the address_space of the particular node we want
@@ -449,25 +449,18 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
goto out_invalid;
if (WARN_ON_ONCE(node->count != node->nr_values))
goto out_invalid;
- for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
- if (node->slots[i]) {
- if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
- goto out_invalid;
- if (WARN_ON_ONCE(!node->nr_values))
- goto out_invalid;
- if (WARN_ON_ONCE(!mapping->nrexceptional))
- goto out_invalid;
- node->slots[i] = NULL;
- node->nr_values--;
- node->count--;
- mapping->nrexceptional--;
- }
- }
- if (WARN_ON_ONCE(node->nr_values))
- goto out_invalid;
+ mapping->nrexceptional -= node->nr_values;
+ xas.xa = node->array;
+ xas.xa_node = rcu_dereference_protected(node->parent,
+ lockdep_is_held(&mapping->pages.xa_lock));
+ xas.xa_offset = node->offset;
+ xas.xa_update = workingset_update_node;
+ /*
+ * We could store a shadow entry here which was the minimum of the
+ * shadow entries we were tracking ...
+ */
+ xas_store(&xas, NULL);
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
- __radix_tree_delete_node(&mapping->pages, node,
- workingset_lookup_update(mapping));
out_invalid:
xa_unlock(&mapping->pages);
--
2.16.1
From: Matthew Wilcox <[email protected]>
Combine __add_to_swap_cache and add_to_swap_cache into one function
since there is no more need to preload.
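The shape of the new allocation handling, roughly (simplified from the add_to_swap_cache() hunk below; accounting and the failure cleanup are omitted):

    XA_STATE(xas, &address_space->pages, idx);

    do {
        xas_lock_irq(&xas);
        xas_create_range(&xas, idx + nr - 1);
        if (!xas_error(&xas)) {
            for (i = 0; i < nr; i++) {
                set_page_private(page + i, entry.val + i);
                xas_store(&xas, page + i);
                xas_next(&xas);
            }
        }
        xas_unlock_irq(&xas);
        /* On -ENOMEM, drop the lock, allocate a node and retry. */
    } while (xas_nomem(&xas, gfp));

    return xas_error(&xas);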
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/swap_state.c | 93 ++++++++++++++++++---------------------------------------
1 file changed, 29 insertions(+), 64 deletions(-)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3f95e8fc4cb2..a57b5ad4c503 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -107,14 +107,15 @@ void show_swap_cache_info(void)
}
/*
- * __add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
+ * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
* but sets SwapCache flag and private instead of mapping and index.
*/
-int __add_to_swap_cache(struct page *page, swp_entry_t entry)
+int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
{
- int error, i, nr = hpage_nr_pages(page);
- struct address_space *address_space;
+ struct address_space *address_space = swap_address_space(entry);
pgoff_t idx = swp_offset(entry);
+ XA_STATE(xas, &address_space->pages, idx);
+ unsigned long i, nr = 1UL << compound_order(page);
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -123,50 +124,30 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
page_ref_add(page, nr);
SetPageSwapCache(page);
- address_space = swap_address_space(entry);
- xa_lock_irq(&address_space->pages);
- for (i = 0; i < nr; i++) {
- set_page_private(page + i, entry.val + i);
- error = radix_tree_insert(&address_space->pages,
- idx + i, page + i);
- if (unlikely(error))
- break;
- }
- if (likely(!error)) {
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, idx + nr - 1);
+ if (xas_error(&xas))
+ goto unlock;
+ for (i = 0; i < nr; i++) {
+ VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+ set_page_private(page + i, entry.val + i);
+ xas_store(&xas, page + i);
+ xas_next(&xas);
+ }
address_space->nrpages += nr;
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
ADD_CACHE_INFO(add_total, nr);
- } else {
- /*
- * Only the context which have set SWAP_HAS_CACHE flag
- * would call add_to_swap_cache().
- * So add_to_swap_cache() doesn't returns -EEXIST.
- */
- VM_BUG_ON(error == -EEXIST);
- set_page_private(page + i, 0UL);
- while (i--) {
- radix_tree_delete(&address_space->pages, idx + i);
- set_page_private(page + i, 0UL);
- }
- ClearPageSwapCache(page);
- page_ref_sub(page, nr);
- }
- xa_unlock_irq(&address_space->pages);
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp));
- return error;
-}
-
-
-int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp_mask)
-{
- int error;
+ if (!xas_error(&xas))
+ return 0;
- error = radix_tree_maybe_preload_order(gfp_mask, compound_order(page));
- if (!error) {
- error = __add_to_swap_cache(page, entry);
- radix_tree_preload_end();
- }
- return error;
+ ClearPageSwapCache(page);
+ page_ref_sub(page, nr);
+ return xas_error(&xas);
}
/*
@@ -220,7 +201,7 @@ int add_to_swap(struct page *page)
goto fail;
/*
- * Radix-tree node allocations from PF_MEMALLOC contexts could
+ * XArray node allocations from PF_MEMALLOC contexts could
* completely exhaust the page allocator. __GFP_NOMEMALLOC
* stops emergency reserves from being allocated.
*
@@ -232,7 +213,6 @@ int add_to_swap(struct page *page)
*/
err = add_to_swap_cache(page, entry,
__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
- /* -ENOMEM radix-tree allocation failure */
if (err)
/*
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
@@ -400,19 +380,11 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
break; /* Out of memory */
}
- /*
- * call radix_tree_preload() while we can wait.
- */
- err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
- if (err)
- break;
-
/*
* Swap entry may have been freed since our caller observed it.
*/
err = swapcache_prepare(entry);
if (err == -EEXIST) {
- radix_tree_preload_end();
/*
* We might race against get_swap_page() and stumble
* across a SWAP_HAS_CACHE swap_map entry whose page
@@ -420,26 +392,19 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
*/
cond_resched();
continue;
- }
- if (err) { /* swp entry is obsolete ? */
- radix_tree_preload_end();
+ } else if (err) /* swp entry is obsolete ? */
break;
- }
- /* May fail (-ENOMEM) if radix-tree node allocation failed. */
+ /* May fail (-ENOMEM) if XArray node allocation failed. */
__SetPageLocked(new_page);
__SetPageSwapBacked(new_page);
- err = __add_to_swap_cache(new_page, entry);
+ err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
if (likely(!err)) {
- radix_tree_preload_end();
- /*
- * Initiate read into locked page and return.
- */
+ /* Initiate read into locked page */
lru_cache_add_anon(new_page);
*new_page_allocated = true;
return new_page;
}
- radix_tree_preload_end();
__ClearPageLocked(new_page);
/*
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
--
2.16.1
From: Matthew Wilcox <[email protected]>
Also move mapping_tagged() to fs.h as a static inline and change it to
return bool.
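Callers keep the same call; for instance the check in __test_set_page_writeback() (in the hunk below) is unchanged, its result simply now has bool type:

    on_wblist = mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK);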
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/fs.h | 17 +++++++++------
mm/page-writeback.c | 62 +++++++++++++++++++----------------------------------
2 files changed, 32 insertions(+), 47 deletions(-)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index b9ea6961947a..b134f80ca498 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -470,15 +470,18 @@ struct block_device {
struct mutex bd_fsfreeze_mutex;
} __randomize_layout;
+/* XArray tags, for tagging dirty and writeback pages in the pagecache. */
+#define PAGECACHE_TAG_DIRTY XA_TAG_0
+#define PAGECACHE_TAG_WRITEBACK XA_TAG_1
+#define PAGECACHE_TAG_TOWRITE XA_TAG_2
+
/*
- * Radix-tree tags, for tagging dirty and writeback pages within the pagecache
- * radix trees
+ * Returns true if any of the pages in the mapping are marked with the tag.
*/
-#define PAGECACHE_TAG_DIRTY 0
-#define PAGECACHE_TAG_WRITEBACK 1
-#define PAGECACHE_TAG_TOWRITE 2
-
-int mapping_tagged(struct address_space *mapping, int tag);
+static inline bool mapping_tagged(struct address_space *mapping, xa_tag_t tag)
+{
+ return xa_tagged(&mapping->pages, tag);
+}
static inline void i_mmap_lock_write(struct address_space *mapping)
{
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 588ce729d199..0407436a8305 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2098,33 +2098,25 @@ void __init page_writeback_init(void)
* dirty pages in the file (thus it is important for this function to be quick
* so that it can tag pages faster than a dirtying process can create them).
*/
-/*
- * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
- */
void tag_pages_for_writeback(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
-#define WRITEBACK_TAG_BATCH 4096
- unsigned long tagged = 0;
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ unsigned int tagged = 0;
+ void *page;
- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
- PAGECACHE_TAG_DIRTY) {
- if (iter.index > end)
- break;
- radix_tree_iter_tag_set(&mapping->pages, &iter,
- PAGECACHE_TAG_TOWRITE);
- tagged++;
- if ((tagged % WRITEBACK_TAG_BATCH) != 0)
+ xas_lock_irq(&xas);
+ xas_for_each_tag(&xas, page, end, PAGECACHE_TAG_DIRTY) {
+ xas_set_tag(&xas, PAGECACHE_TAG_TOWRITE);
+ if (++tagged % XA_CHECK_SCHED)
continue;
- slot = radix_tree_iter_resume(slot, &iter);
- xa_unlock_irq(&mapping->pages);
+
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
cond_resched();
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
}
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
}
EXPORT_SYMBOL(tag_pages_for_writeback);
@@ -2164,7 +2156,7 @@ int write_cache_pages(struct address_space *mapping,
pgoff_t done_index;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;
pagevec_init(&pvec);
if (wbc->range_cyclic) {
@@ -2445,7 +2437,7 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
/*
* For address_spaces which do not use buffers. Just tag the page as dirty in
- * its radix tree.
+ * the xarray.
*
* This is also used when a single buffer is being dirtied: we want to set the
* page dirty in that case, but not all the buffers. This is a "bottom-up"
@@ -2471,7 +2463,7 @@ int __set_page_dirty_nobuffers(struct page *page)
BUG_ON(page_mapping(page) != mapping);
WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages, page_index(page),
+ __xa_set_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);
@@ -2634,13 +2626,13 @@ EXPORT_SYMBOL(__cancel_dirty_page);
* Returns true if the page was previously dirty.
*
* This is for preparing to put the page under writeout. We leave the page
- * tagged as dirty in the radix tree so that a concurrent write-for-sync
+ * tagged as dirty in the xarray so that a concurrent write-for-sync
* can discover it via a PAGECACHE_TAG_DIRTY walk. The ->writepage
* implementation will run either set_page_writeback() or set_page_dirty(),
- * at which stage we bring the page's dirty flag and radix-tree dirty tag
+ * at which stage we bring the page's dirty flag and xarray dirty tag
* back into sync.
*
- * This incoherency between the page's dirty flag and radix-tree tag is
+ * This incoherency between the page's dirty flag and xarray tag is
* unfortunate, but it only exists while the page is locked.
*/
int clear_page_dirty_for_io(struct page *page)
@@ -2721,7 +2713,7 @@ int test_clear_page_writeback(struct page *page)
xa_lock_irqsave(&mapping->pages, flags);
ret = TestClearPageWriteback(page);
if (ret) {
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi)) {
struct bdi_writeback *wb = inode_to_wb(inode);
@@ -2773,7 +2765,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
on_wblist = mapping_tagged(mapping,
PAGECACHE_TAG_WRITEBACK);
- radix_tree_tag_set(&mapping->pages, page_index(page),
+ __xa_set_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi))
inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
@@ -2787,10 +2779,10 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
sb_mark_inode_writeback(mapping->host);
}
if (!PageDirty(page))
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
if (!keep_write)
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_TOWRITE);
xa_unlock_irqrestore(&mapping->pages, flags);
} else {
@@ -2806,16 +2798,6 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
}
EXPORT_SYMBOL(__test_set_page_writeback);
-/*
- * Return true if any of the pages in the mapping are marked with the
- * passed tag.
- */
-int mapping_tagged(struct address_space *mapping, int tag)
-{
- return radix_tree_tagged(&mapping->pages, tag);
-}
-EXPORT_SYMBOL(mapping_tagged);
-
/**
* wait_for_stable_page() - wait for writeback to finish, if necessary.
* @page: The page to wait on.
--
2.16.1
From: Matthew Wilcox <[email protected]>
The code is slightly shorter and simpler.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index fcfdc146591b..0223f8054e3a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -111,30 +111,28 @@
* ->tasklist_lock (memory_failure, collect_procs_ao)
*/
-static void page_cache_tree_delete(struct address_space *mapping,
+static void page_cache_delete(struct address_space *mapping,
struct page *page, void *shadow)
{
- int i, nr;
+ XA_STATE(xas, &mapping->pages, page->index);
+ unsigned int i, nr;
- /* hugetlb pages are represented by one entry in the radix tree */
+ mapping_set_update(&xas, mapping);
+
+ /* hugetlb pages are represented by a single entry in the xarray */
nr = PageHuge(page) ? 1 : hpage_nr_pages(page);
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageTail(page), page);
VM_BUG_ON_PAGE(nr != 1 && shadow, page);
- for (i = 0; i < nr; i++) {
- struct radix_tree_node *node;
- void **slot;
-
- __radix_tree_lookup(&mapping->pages, page->index + i,
- &node, &slot);
-
- VM_BUG_ON_PAGE(!node && nr != 1, page);
-
- radix_tree_clear_tags(&mapping->pages, node, slot);
- __radix_tree_replace(&mapping->pages, node, slot, shadow,
- workingset_lookup_update(mapping));
+ i = nr;
+repeat:
+ xas_store(&xas, shadow);
+ xas_init_tags(&xas);
+ if (--i) {
+ xas_next(&xas);
+ goto repeat;
}
page->mapping = NULL;
@@ -234,7 +232,7 @@ void __delete_from_page_cache(struct page *page, void *shadow)
trace_mm_filemap_delete_from_page_cache(page);
unaccount_page_cache_page(mapping, page);
- page_cache_tree_delete(mapping, page, shadow);
+ page_cache_delete(mapping, page, shadow);
}
static void page_cache_free_page(struct address_space *mapping,
--
2.16.1
From: Matthew Wilcox <[email protected]>
XArray tags are slightly more strongly typed than the radix tree tags,
but occupy the same bits. This commit also adds the xas_ family of tag
operations, for cases where the caller is already holding the lock, and
xa_tagged() to ask whether any array member has a particular tag set.
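A hypothetical illustration of the new calls (not part of this patch; it assumes an entry is already present at index 42, since setting a tag on a NULL entry does not succeed):

    void tag_example(struct xarray *xa)
    {
        xa_set_tag(xa, 42, XA_TAG_0);           /* takes and releases xa_lock */
        WARN_ON(!xa_get_tag(xa, 42, XA_TAG_0)); /* needs only the RCU read lock */
        WARN_ON(!xa_tagged(xa, XA_TAG_0));      /* is any entry in the array tagged? */

        xa_lock(xa);                            /* __xa_ variants: caller holds the lock */
        __xa_clear_tag(xa, 42, XA_TAG_0);
        xa_unlock(xa);
    }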
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 41 +++++++
lib/xarray.c | 235 +++++++++++++++++++++++++++++++++++++++++
tools/include/linux/spinlock.h | 6 ++
3 files changed, 282 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 5845187c1ce8..1cf012256eab 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -11,6 +11,7 @@
#include <linux/bug.h>
#include <linux/compiler.h>
+#include <linux/gfp.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
@@ -149,6 +150,20 @@ static inline int xa_err(void *entry)
return 0;
}
+typedef unsigned __bitwise xa_tag_t;
+#define XA_TAG_0 ((__force xa_tag_t)0U)
+#define XA_TAG_1 ((__force xa_tag_t)1U)
+#define XA_TAG_2 ((__force xa_tag_t)2U)
+#define XA_PRESENT ((__force xa_tag_t)8U)
+#define XA_TAG_MAX XA_TAG_2
+
+/*
+ * Values for xa_flags. The radix tree stores its GFP flags in the xa_flags,
+ * and we remain compatible with that.
+ */
+#define XA_FLAGS_TAG(tag) ((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
+ (__force unsigned)(tag)))
+
/**
* struct xarray - The anchor of the XArray.
* @xa_lock: Lock that protects the contents of the XArray.
@@ -195,6 +210,9 @@ struct xarray {
void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
+bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
+void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
+void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
/**
* xa_init() - Initialise an empty XArray.
@@ -209,6 +227,19 @@ static inline void xa_init(struct xarray *xa)
xa_init_flags(xa, 0);
}
+/**
+ * xa_tagged() - Inquire whether any entry in this array has a tag set
+ * @xa: Array
+ * @tag: Tag value
+ *
+ * Context: Any context.
+ * Return: %true if any entry has this tag set.
+ */
+static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
+{
+ return xa->xa_flags & XA_FLAGS_TAG(tag);
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -221,6 +252,12 @@ static inline void xa_init(struct xarray *xa)
#define xa_unlock_irqrestore(xa, flags) \
spin_unlock_irqrestore(&(xa)->xa_lock, flags)
+/*
+ * Versions of the normal API which require the caller to hold the xa_lock.
+ */
+void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
+void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
+
/* Everything below here is the Advanced API. Proceed with caution. */
/*
@@ -534,6 +571,10 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
void *xas_load(struct xa_state *);
+bool xas_get_tag(const struct xa_state *, xa_tag_t);
+void xas_set_tag(const struct xa_state *, xa_tag_t);
+void xas_clear_tag(const struct xa_state *, xa_tag_t);
+
/**
* xas_reload() - Refetch an entry from the xarray.
* @xas: XArray operation state.
diff --git a/lib/xarray.c b/lib/xarray.c
index 195cb130d53d..ca25a7a4a4fa 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -5,6 +5,7 @@
* Author: Matthew Wilcox <[email protected]>
*/
+#include <linux/bitmap.h>
#include <linux/export.h>
#include <linux/xarray.h>
@@ -24,6 +25,55 @@
* @entry refers to something stored in a slot in the xarray
*/
+static inline struct xa_node *xa_parent(struct xarray *xa,
+ const struct xa_node *node)
+{
+ return rcu_dereference_check(node->parent,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+static inline struct xa_node *xa_parent_locked(struct xarray *xa,
+ const struct xa_node *node)
+{
+ return rcu_dereference_protected(node->parent,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+static inline void xa_tag_set(struct xarray *xa, xa_tag_t tag)
+{
+ if (!(xa->xa_flags & XA_FLAGS_TAG(tag)))
+ xa->xa_flags |= XA_FLAGS_TAG(tag);
+}
+
+static inline void xa_tag_clear(struct xarray *xa, xa_tag_t tag)
+{
+ if (xa->xa_flags & XA_FLAGS_TAG(tag))
+ xa->xa_flags &= ~(XA_FLAGS_TAG(tag));
+}
+
+static inline bool node_get_tag(const struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ return test_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline void node_set_tag(struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ __set_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline void node_clear_tag(struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ __clear_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline bool node_any_tag(struct xa_node *node, xa_tag_t tag)
+{
+ return !bitmap_empty(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
+}
+
/* extracts the offset within this node from the index */
static unsigned int get_offset(unsigned long index, struct xa_node *node)
{
@@ -118,6 +168,85 @@ void *xas_load(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_load);
+/**
+ * xas_get_tag() - Returns the state of this tag.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Return: true if the tag is set, false if the tag is clear or @xas
+ * is in an error state.
+ */
+bool xas_get_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ if (xas_invalid(xas))
+ return false;
+ if (!xas->xa_node)
+ return xa_tagged(xas->xa, tag);
+ return node_get_tag(xas->xa_node, xas->xa_offset, tag);
+}
+EXPORT_SYMBOL_GPL(xas_get_tag);
+
+/**
+ * xas_set_tag() - Sets the tag on this entry and its parents.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Sets the specified tag on this entry, and walks up the tree setting it
+ * on all the ancestor entries. Does nothing if @xas has not been walked to
+ * an entry, or is in an error state.
+ */
+void xas_set_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset = xas->xa_offset;
+
+ if (xas_invalid(xas))
+ return;
+
+ while (node) {
+ if (node_get_tag(node, offset, tag))
+ return;
+ node_set_tag(node, offset, tag);
+ offset = node->offset;
+ node = xa_parent_locked(xas->xa, node);
+ }
+
+ if (!xa_tagged(xas->xa, tag))
+ xa_tag_set(xas->xa, tag);
+}
+EXPORT_SYMBOL_GPL(xas_set_tag);
+
+/**
+ * xas_clear_tag() - Clears the tag on this entry and its parents.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Clears the specified tag on this entry, and walks back to the head
+ * attempting to clear it on all the ancestor entries. Does nothing if
+ * @xas has not been walked to an entry, or is in an error state.
+ */
+void xas_clear_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset = xas->xa_offset;
+
+ if (xas_invalid(xas))
+ return;
+
+ while (node) {
+ node_clear_tag(node, offset, tag);
+ if (node_any_tag(node, tag))
+ return;
+
+ offset = node->offset;
+ node = xa_parent_locked(xas->xa, node);
+ }
+
+ if (xa_tagged(xas->xa, tag))
+ xa_tag_clear(xas->xa, tag);
+}
+EXPORT_SYMBOL_GPL(xas_clear_tag);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -160,6 +289,112 @@ void *xa_load(struct xarray *xa, unsigned long index)
}
EXPORT_SYMBOL(xa_load);
+/**
+ * __xa_set_tag() - Set this tag on this entry while locked.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Attempting to set a tag on a NULL entry does not succeed.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry.
+ */
+void __xa_set_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry = xas_load(&xas);
+
+ if (entry)
+ xas_set_tag(&xas, tag);
+}
+EXPORT_SYMBOL_GPL(__xa_set_tag);
+
+/**
+ * __xa_clear_tag() - Clear this tag on this entry while locked.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry.
+ */
+void __xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry = xas_load(&xas);
+
+ if (entry)
+ xas_clear_tag(&xas, tag);
+}
+EXPORT_SYMBOL_GPL(__xa_clear_tag);
+
+/**
+ * xa_get_tag() - Inquire whether this tag is set on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * This function uses the RCU read lock, so the result may be out of date
+ * by the time it returns. If you need the result to be stable, use a lock.
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ * Return: True if the entry at @index has this tag set, false if it doesn't.
+ */
+bool xa_get_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry;
+
+ rcu_read_lock();
+ entry = xas_start(&xas);
+ while (xas_get_tag(&xas, tag)) {
+ if (!xa_is_node(entry))
+ goto found;
+ entry = xas_descend(&xas, xa_to_node(entry));
+ }
+ rcu_read_unlock();
+ return false;
+ found:
+ rcu_read_unlock();
+ return true;
+}
+EXPORT_SYMBOL(xa_get_tag);
+
+/**
+ * xa_set_tag() - Set this tag on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Attempting to set a tag on a NULL entry does not succeed.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ */
+void xa_set_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ xa_lock(xa);
+ __xa_set_tag(xa, index, tag);
+ xa_unlock(xa);
+}
+EXPORT_SYMBOL(xa_set_tag);
+
+/**
+ * xa_clear_tag() - Clear this tag on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Clearing a tag always succeeds.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ */
+void xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ xa_lock(xa);
+ __xa_clear_tag(xa, index, tag);
+ xa_unlock(xa);
+}
+EXPORT_SYMBOL(xa_clear_tag);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 34fed5c38da2..85a009001109 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -10,6 +10,12 @@
#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
#define spin_lock_init(x) pthread_mutex_init(x, NULL);
+#define spin_lock(x) pthread_mutex_lock(x)
+#define spin_unlock(x) pthread_mutex_unlock(x)
+#define spin_lock_bh(x) pthread_mutex_lock(x)
+#define spin_unlock_bh(x) pthread_mutex_unlock(x)
+#define spin_lock_irq(x) pthread_mutex_lock(x)
+#define spin_unlock_irq(x) pthread_mutex_unlock(x)
#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
--
2.16.1
From: Matthew Wilcox <[email protected]>
Introduce page_cache_pin() to factor out the common logic between the
various lookup routines:
find_get_entry
find_get_entries
find_get_pages_range
find_get_pages_contig
find_get_pages_range_tag
find_get_entries_tag
filemap_map_pages
By using the xa_state to control the iteration, we can remove most of
the gotos and just use the normal break/continue loop control flow.
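The resulting loop shape shared by the converted functions looks like this (a sketch; each caller filters value entries and stores its results differently):

    XA_STATE(xas, &mapping->pages, start);
    struct page *page;

    rcu_read_lock();
    xas_for_each(&xas, page, ULONG_MAX) {
        if (xas_retry(&xas, page))
            continue;               /* skip internal retry entries */
        if (xa_is_value(page))
            continue;               /* shadow, swap or DAX entry */
        if (!page_cache_pin(&xas, page))
            continue;               /* lost a race; the state was reset */
        /* ... record page; the caller drops the reference with put_page() ... */
    }
    rcu_read_unlock();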
Also convert the regression1 read-side to XArray since that simulates
the functions being modified here.
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/pagemap.h | 6 +-
mm/filemap.c | 380 +++++++++------------------------
tools/testing/radix-tree/regression1.c | 68 +++---
3 files changed, 129 insertions(+), 325 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34d4fa3ad1c5..1a59f4a5424a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -365,17 +365,17 @@ static inline unsigned find_get_pages(struct address_space *mapping,
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
unsigned int nr_pages, struct page **pages);
unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
- pgoff_t end, int tag, unsigned int nr_pages,
+ pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
struct page **pages);
static inline unsigned find_get_pages_tag(struct address_space *mapping,
- pgoff_t *index, int tag, unsigned int nr_pages,
+ pgoff_t *index, xa_tag_t tag, unsigned int nr_pages,
struct page **pages)
{
return find_get_pages_range_tag(mapping, index, (pgoff_t)-1, tag,
nr_pages, pages);
}
unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
- int tag, unsigned int nr_entries,
+ xa_tag_t tag, unsigned int nr_entries,
struct page **entries, pgoff_t *indices);
struct page *grab_cache_page_write_begin(struct address_space *mapping,
diff --git a/mm/filemap.c b/mm/filemap.c
index 0223f8054e3a..fa8aa015096e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1400,6 +1400,32 @@ bool page_cache_range_empty(struct address_space *mapping, pgoff_t index,
}
EXPORT_SYMBOL_GPL(page_cache_range_empty);
+/*
+ * page_cache_pin() - Try to pin a page in the page cache.
+ * @xas: The XArray operation state.
+ * @page: The page which was previously found at this location.
+ *
+ * On success, the page has an elevated refcount, but is not locked.
+ * This implements the lockless pagecache protocol as described in
+ * include/linux/pagemap.h; see page_cache_get_speculative().
+ *
+ * Return: True if the page is still in the cache.
+ */
+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+ struct page *head = compound_head(page);
+ bool got = page_cache_get_speculative(head);
+
+ if (likely(got && (xas_reload(xas) == page) &&
+ (compound_head(page) == head)))
+ return true;
+
+ if (got)
+ put_page(head);
+ xas_reset(xas);
+ return false;
+}
+
/**
* find_get_entry - find and get a page cache entry
* @mapping: the address_space to search
@@ -1415,51 +1441,21 @@ EXPORT_SYMBOL_GPL(page_cache_range_empty);
*/
struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
{
- void **pagep;
- struct page *head, *page;
+ XA_STATE(xas, &mapping->pages, offset);
+ struct page *page;
rcu_read_lock();
-repeat:
- page = NULL;
- pagep = radix_tree_lookup_slot(&mapping->pages, offset);
- if (pagep) {
- page = radix_tree_deref_slot(pagep);
- if (unlikely(!page))
- goto out;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page))
- goto repeat;
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Return
- * it without attempting to raise page count.
- */
- goto out;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
+ do {
+ page = xas_load(&xas);
+ if (xas_retry(&xas, page))
+ continue;
+ if (!page || xa_is_value(page))
+ break;
+ if (!page_cache_pin(&xas, page))
+ continue;
+ } while (0);
- /*
- * Has the page moved?
- * This is part of the lockless pagecache protocol. See
- * include/linux/pagemap.h for details.
- */
- if (unlikely(page != *pagep)) {
- put_page(head);
- goto repeat;
- }
- }
-out:
rcu_read_unlock();
-
return page;
}
EXPORT_SYMBOL(find_get_entry);
@@ -1486,7 +1482,7 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
repeat:
page = find_get_entry(mapping, offset);
- if (page && !radix_tree_exception(page)) {
+ if (page && !xa_is_value(page)) {
lock_page(page);
/* Has the page been truncated? */
if (unlikely(page_mapping(page) != mapping)) {
@@ -1619,50 +1615,21 @@ unsigned find_get_entries(struct address_space *mapping,
pgoff_t start, unsigned int nr_entries,
struct page **entries, pgoff_t *indices)
{
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ struct page *page;
unsigned int ret = 0;
- struct radix_tree_iter iter;
if (!nr_entries)
return 0;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!xa_is_value(page) && !page_cache_pin(&xas, page))
continue;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page, a swap
- * entry from shmem/tmpfs or a DAX entry. Return it
- * without attempting to raise page count.
- */
- goto export;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
-export:
- indices[ret] = iter.index;
+ indices[ret] = xas.xa_index;
entries[ret] = page;
if (++ret == nr_entries)
break;
@@ -1696,56 +1663,26 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
pgoff_t end, unsigned int nr_pages,
struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, *start);
+ struct page *page;
unsigned ret = 0;
if (unlikely(!nr_pages))
return 0;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
- struct page *head, *page;
-
- if (iter.index > end)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each(&xas, page, end) {
+ if (xas_retry(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Skip
- * over it.
- */
+ /* Skip over shadow or swap entries */
+ if (xa_is_value(page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
pages[ret] = page;
if (++ret == nr_pages) {
- *start = pages[ret - 1]->index + 1;
+ *start = page->index + 1;
goto out;
}
}
@@ -1753,7 +1690,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
/*
* We come here when there is no page beyond @end. We take care to not
* overflow the index @start as it confuses some of the callers. This
- * breaks the iteration when there is page at index -1 but that is
+ * breaks the iteration when there is a page at index -1 but that is
* already broken anyway.
*/
if (end == (pgoff_t)-1)
@@ -1781,57 +1718,28 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
unsigned int nr_pages, struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, index);
+ struct page *page;
unsigned int ret = 0;
if (unlikely(!nr_pages))
return 0;
rcu_read_lock();
- radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- /* The hole, there no reason to continue */
- if (unlikely(!page))
- break;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Stop
- * looking for contiguous pages.
- */
+ for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (xa_is_value(page))
break;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
+ if (!page_cache_pin(&xas, page))
+ continue;
/*
* must check mapping and index after taking the ref.
* otherwise we can get both false positives and false
* negatives, which is just confusing to the caller.
*/
- if (page->mapping == NULL || page_to_pgoff(page) != iter.index) {
+ if (!page->mapping || page_to_pgoff(page) != xas.xa_index) {
put_page(page);
break;
}
@@ -1858,74 +1766,42 @@ EXPORT_SYMBOL(find_get_pages_contig);
* @tag. We update @index to index the next page for the traversal.
*/
unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
- pgoff_t end, int tag, unsigned int nr_pages,
+ pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, *index);
+ struct page *page;
unsigned ret = 0;
if (unlikely(!nr_pages))
return 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
- struct page *head, *page;
-
- if (iter.index > end)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each_tag(&xas, page, end, tag) {
+ if (xas_retry(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page.
- *
- * Those entries should never be tagged, but
- * this tree walk is lockless and the tags are
- * looked up in bulk, one radix tree node at a
- * time, so there is a sizable window for page
- * reclaim to evict a page we saw tagged.
- *
- * Skip over it.
- */
+ /*
+ * Shadow entries should never be tagged, but this iteration
+ * is lockless so there is a window for page reclaim to evict
+ * a page we saw tagged. Skip over it.
+ */
+ if (xa_is_value(page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
pages[ret] = page;
if (++ret == nr_pages) {
- *index = pages[ret - 1]->index + 1;
+ *index = page->index + 1;
goto out;
}
}
/*
- * We come here when we got at @end. We take care to not overflow the
+ * We come here when we got to @end. We take care to not overflow the
* index @index as it confuses some of the callers. This breaks the
- * iteration when there is page at index -1 but that is already broken
- * anyway.
+ * iteration when there is a page at index -1 but that is already
+ * broken anyway.
*/
if (end == (pgoff_t)-1)
*index = (pgoff_t)-1;
@@ -1951,54 +1827,24 @@ EXPORT_SYMBOL(find_get_pages_range_tag);
* @tag.
*/
unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
- int tag, unsigned int nr_entries,
+ xa_tag_t tag, unsigned int nr_entries,
struct page **entries, pgoff_t *indices)
{
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ struct page *page;
unsigned int ret = 0;
- struct radix_tree_iter iter;
if (!nr_entries)
return 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each_tag(&xas, page, ULONG_MAX, tag) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!xa_is_value(page) && !page_cache_pin(&xas, page))
continue;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
- /*
- * A shadow entry of a recently evicted page, a swap
- * entry from shmem/tmpfs or a DAX entry. Return it
- * without attempting to raise page count.
- */
- goto export;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
-export:
- indices[ret] = iter.index;
+ indices[ret] = xas.xa_index;
entries[ret] = page;
if (++ret == nr_entries)
break;
@@ -2607,45 +2453,21 @@ EXPORT_SYMBOL(filemap_fault);
void filemap_map_pages(struct vm_fault *vmf,
pgoff_t start_pgoff, pgoff_t end_pgoff)
{
- struct radix_tree_iter iter;
- void **slot;
struct file *file = vmf->vma->vm_file;
struct address_space *mapping = file->f_mapping;
pgoff_t last_pgoff = start_pgoff;
unsigned long max_idx;
- struct page *head, *page;
+ XA_STATE(xas, &mapping->pages, start_pgoff);
+ struct page *page;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
- if (iter.index > end_pgoff)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
- goto next;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
+ xas_for_each(&xas, page, end_pgoff) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (xa_is_value(page))
goto next;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
+ if (!page_cache_pin(&xas, page))
+ continue;
if (!PageUptodate(page) ||
PageReadahead(page) ||
@@ -2664,10 +2486,10 @@ void filemap_map_pages(struct vm_fault *vmf,
if (file->f_ra.mmap_miss > 0)
file->f_ra.mmap_miss--;
- vmf->address += (iter.index - last_pgoff) << PAGE_SHIFT;
+ vmf->address += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
if (vmf->pte)
- vmf->pte += iter.index - last_pgoff;
- last_pgoff = iter.index;
+ vmf->pte += xas.xa_index - last_pgoff;
+ last_pgoff = xas.xa_index;
if (alloc_set_pte(vmf, NULL, page))
goto unlock;
unlock_page(page);
@@ -2680,8 +2502,6 @@ void filemap_map_pages(struct vm_fault *vmf,
/* Huge page is mapped? No need to proceed. */
if (pmd_trans_huge(*vmf->pmd))
break;
- if (iter.index == end_pgoff)
- break;
}
rcu_read_unlock();
}
diff --git a/tools/testing/radix-tree/regression1.c b/tools/testing/radix-tree/regression1.c
index 0aece092f40e..cd84d7de45e6 100644
--- a/tools/testing/radix-tree/regression1.c
+++ b/tools/testing/radix-tree/regression1.c
@@ -58,7 +58,7 @@ static struct page *page_alloc(void)
struct page *p;
p = malloc(sizeof(struct page));
p->count = 1;
- p->index = 1;
+ p->index = (unsigned long)p;
pthread_mutex_init(&p->lock, NULL);
return p;
@@ -77,53 +77,37 @@ static void page_free(struct page *p)
call_rcu(&p->rcu, page_rcu_free);
}
+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+ pthread_mutex_lock(&page->lock);
+ if (!page->count) {
+ pthread_mutex_unlock(&page->lock);
+ goto fail;
+ }
+ /* don't actually update page refcount */
+ pthread_mutex_unlock(&page->lock);
+
+ /* Has the page moved? */
+ if (xas_reload(xas) == page)
+ return true;
+fail:
+ xas_reset(xas);
+ return false;
+}
+
static unsigned find_get_pages(unsigned long start,
unsigned int nr_pages, struct page **pages)
{
- unsigned int i;
- unsigned int ret;
- unsigned int nr_found;
+ XA_STATE(xas, &mt_tree, start);
+ struct page *page;
+ unsigned int ret = 0;
rcu_read_lock();
-restart:
- nr_found = radix_tree_gang_lookup_slot(&mt_tree,
- (void ***)pages, NULL, start, nr_pages);
- ret = 0;
- for (i = 0; i < nr_found; i++) {
- struct page *page;
-repeat:
- page = radix_tree_deref_slot((void **)pages[i]);
- if (unlikely(!page))
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- /*
- * Transient condition which can only trigger
- * when entry at index 0 moves out of or back
- * to root: none yet gotten, safe to restart.
- */
- assert((start | i) == 0);
- goto restart;
- }
- /*
- * No exceptional entries are inserted in this test.
- */
- assert(0);
- }
-
- pthread_mutex_lock(&page->lock);
- if (!page->count) {
- pthread_mutex_unlock(&page->lock);
- goto repeat;
- }
- /* don't actually update page refcount */
- pthread_mutex_unlock(&page->lock);
-
- /* Has the page moved? */
- if (unlikely(page != *((void **)pages[i]))) {
- goto repeat;
- }
pages[ret] = page;
ret++;
--
2.16.1
From: Matthew Wilcox <[email protected]>
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 49effa6301b9..3735ea6e3d19 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2605,7 +2605,7 @@ static struct page *do_read_cache_page(struct address_space *mapping,
put_page(page);
if (err == -EEXIST)
goto repeat;
- /* Presumably ENOMEM for radix tree node */
+ /* Presumably ENOMEM for xarray node */
return ERR_PTR(err);
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
Use the XArray APIs to add and replace pages in the page cache. This
removes two uses of the radix tree preload API and is significantly
shorter code.
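For example, the replacement path collapses to a single store under the lock (a sketch of the new replace_page_cache_page(); accounting and memcg migration elided):

    struct address_space *mapping = old->mapping;
    XA_STATE(xas, &mapping->pages, old->index);
    unsigned long flags;

    get_page(new);
    new->mapping = mapping;
    new->index = old->index;

    xas_lock_irqsave(&xas, flags);
    xas_store(&xas, new);           /* replaces 'old' atomically */
    old->mapping = NULL;
    xas_unlock_irqrestore(&xas, flags);

    put_page(old);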
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 8 ++-
mm/filemap.c | 143 ++++++++++++++++++++++-----------------------------
2 files changed, 67 insertions(+), 84 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7b6a59f722a3..c306e14b5ab1 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -299,8 +299,12 @@ void *workingset_eviction(struct address_space *mapping, struct page *page);
bool workingset_refault(void *shadow);
void workingset_activation(struct page *page);
-/* Do not use directly, use workingset_lookup_update */
-void workingset_update_node(struct radix_tree_node *node);
+/* Only track the nodes of mappings with shadow entries */
+void workingset_update_node(struct xa_node *node);
+#define mapping_set_update(xas, mapping) do { \
+ if (!dax_mapping(mapping) && !shmem_mapping(mapping)) \
+ xas_set_update(xas, workingset_update_node); \
+} while (0)
/* Returns workingset_update_node() if the mapping has shadow entries. */
#define workingset_lookup_update(mapping) \
diff --git a/mm/filemap.c b/mm/filemap.c
index 778a551f6713..fcfdc146591b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -111,35 +111,6 @@
* ->tasklist_lock (memory_failure, collect_procs_ao)
*/
-static int page_cache_tree_insert(struct address_space *mapping,
- struct page *page, void **shadowp)
-{
- struct radix_tree_node *node;
- void **slot;
- int error;
-
- error = __radix_tree_create(&mapping->pages, page->index, 0,
- &node, &slot);
- if (error)
- return error;
- if (*slot) {
- void *p;
-
- p = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (!xa_is_value(p))
- return -EEXIST;
-
- mapping->nrexceptional--;
- if (shadowp)
- *shadowp = p;
- }
- __radix_tree_replace(&mapping->pages, node, slot, page,
- workingset_lookup_update(mapping));
- mapping->nrpages++;
- return 0;
-}
-
static void page_cache_tree_delete(struct address_space *mapping,
struct page *page, void *shadow)
{
@@ -775,51 +746,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
* locked. This function does not add the new page to the LRU, the
* caller must do that.
*
- * The remove + add is atomic. The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic. This function cannot fail.
*/
int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
{
- int error;
+ struct address_space *mapping = old->mapping;
+ void (*freepage)(struct page *) = mapping->a_ops->freepage;
+ pgoff_t offset = old->index;
+ XA_STATE(xas, &mapping->pages, offset);
+ unsigned long flags;
VM_BUG_ON_PAGE(!PageLocked(old), old);
VM_BUG_ON_PAGE(!PageLocked(new), new);
VM_BUG_ON_PAGE(new->mapping, new);
- error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
- if (!error) {
- struct address_space *mapping = old->mapping;
- void (*freepage)(struct page *);
- unsigned long flags;
-
- pgoff_t offset = old->index;
- freepage = mapping->a_ops->freepage;
-
- get_page(new);
- new->mapping = mapping;
- new->index = offset;
+ get_page(new);
+ new->mapping = mapping;
+ new->index = offset;
- xa_lock_irqsave(&mapping->pages, flags);
- __delete_from_page_cache(old, NULL);
- error = page_cache_tree_insert(mapping, new, NULL);
- BUG_ON(error);
+ xas_lock_irqsave(&xas, flags);
+ xas_store(&xas, new);
- /*
- * hugetlb pages do not participate in page cache accounting.
- */
- if (!PageHuge(new))
- __inc_node_page_state(new, NR_FILE_PAGES);
- if (PageSwapBacked(new))
- __inc_node_page_state(new, NR_SHMEM);
- xa_unlock_irqrestore(&mapping->pages, flags);
- mem_cgroup_migrate(old, new);
- radix_tree_preload_end();
- if (freepage)
- freepage(old);
- put_page(old);
- }
+ old->mapping = NULL;
+ /* hugetlb pages do not participate in page cache accounting. */
+ if (!PageHuge(old))
+ __dec_node_page_state(new, NR_FILE_PAGES);
+ if (!PageHuge(new))
+ __inc_node_page_state(new, NR_FILE_PAGES);
+ if (PageSwapBacked(old))
+ __dec_node_page_state(new, NR_SHMEM);
+ if (PageSwapBacked(new))
+ __inc_node_page_state(new, NR_SHMEM);
+ xas_unlock_irqrestore(&xas, flags);
+ mem_cgroup_migrate(old, new);
+ if (freepage)
+ freepage(old);
+ put_page(old);
- return error;
+ return 0;
}
EXPORT_SYMBOL_GPL(replace_page_cache_page);
@@ -828,12 +792,15 @@ static int __add_to_page_cache_locked(struct page *page,
pgoff_t offset, gfp_t gfp_mask,
void **shadowp)
{
+ XA_STATE(xas, &mapping->pages, offset);
int huge = PageHuge(page);
struct mem_cgroup *memcg;
int error;
+ void *old;
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+ mapping_set_update(&xas, mapping);
if (!huge) {
error = mem_cgroup_try_charge(page, current->mm,
@@ -842,39 +809,51 @@ static int __add_to_page_cache_locked(struct page *page,
return error;
}
- error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
- if (error) {
- if (!huge)
- mem_cgroup_cancel_charge(page, memcg, false);
- return error;
- }
-
get_page(page);
page->mapping = mapping;
page->index = offset;
- xa_lock_irq(&mapping->pages);
- error = page_cache_tree_insert(mapping, page, shadowp);
- radix_tree_preload_end();
- if (unlikely(error))
- goto err_insert;
+ do {
+ xas_lock_irq(&xas);
+ old = xas_create(&xas);
+ if (xas_error(&xas))
+ goto unlock;
+ if (xa_is_value(old)) {
+ mapping->nrexceptional--;
+ if (shadowp)
+ *shadowp = old;
+ } else if (old) {
+ xas_set_err(&xas, -EEXIST);
+ goto unlock;
+ }
+
+ xas_store(&xas, page);
+ mapping->nrpages++;
+
+ /*
+ * hugetlb pages do not participate in
+ * page cache accounting.
+ */
+ if (!huge)
+ __inc_node_page_state(page, NR_FILE_PAGES);
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp_mask & ~__GFP_HIGHMEM));
+
+ if (xas_error(&xas))
+ goto error;
- /* hugetlb pages do not participate in page cache accounting. */
- if (!huge)
- __inc_node_page_state(page, NR_FILE_PAGES);
- xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_commit_charge(page, memcg, false, false);
trace_mm_filemap_add_to_page_cache(page);
return 0;
-err_insert:
+error:
page->mapping = NULL;
/* Leave page->index set: truncation relies upon it */
- xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
- return error;
+ return xas_error(&xas);
}
/**
--
2.16.1
From: Matthew Wilcox <[email protected]>
Rename the function from page_cache_tree_delete_batch to just
page_cache_delete_batch.
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index fa8aa015096e..49effa6301b9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -275,7 +275,7 @@ void delete_from_page_cache(struct page *page)
EXPORT_SYMBOL(delete_from_page_cache);
/*
- * page_cache_tree_delete_batch - delete several pages from page cache
+ * page_cache_delete_batch - delete several pages from page cache
* @mapping: the mapping to which pages belong
* @pvec: pagevec with pages to delete
*
@@ -288,23 +288,18 @@ EXPORT_SYMBOL(delete_from_page_cache);
*
* The function expects xa_lock to be held.
*/
-static void
-page_cache_tree_delete_batch(struct address_space *mapping,
+static void page_cache_delete_batch(struct address_space *mapping,
struct pagevec *pvec)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, pvec->pages[0]->index);
int total_pages = 0;
int i = 0, tail_pages = 0;
struct page *page;
- pgoff_t start;
- start = pvec->pages[0]->index;
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
+ mapping_set_update(&xas, mapping);
+ xas_for_each(&xas, page, ULONG_MAX) {
if (i >= pagevec_count(pvec) && !tail_pages)
break;
- page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
if (xa_is_value(page))
continue;
if (!tail_pages) {
@@ -313,8 +308,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
* have our pages locked so they are protected from
* being removed.
*/
- if (page != pvec->pages[i])
+ if (page != pvec->pages[i]) {
+ VM_BUG_ON_PAGE(page->index >
+ pvec->pages[i]->index, page);
continue;
+ }
WARN_ON_ONCE(!PageLocked(page));
if (PageTransHuge(page) && !PageHuge(page))
tail_pages = HPAGE_PMD_NR - 1;
@@ -325,11 +323,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
*/
i++;
} else {
+ VM_BUG_ON_PAGE(page->index + HPAGE_PMD_NR - tail_pages
+ != pvec->pages[i]->index, page);
tail_pages--;
}
- radix_tree_clear_tags(&mapping->pages, iter.node, slot);
- __radix_tree_replace(&mapping->pages, iter.node, slot, NULL,
- workingset_lookup_update(mapping));
+ xas_store(&xas, NULL);
total_pages++;
}
mapping->nrpages -= total_pages;
@@ -350,7 +348,7 @@ void delete_from_page_cache_batch(struct address_space *mapping,
unaccount_page_cache_page(mapping, pvec->pages[i]);
}
- page_cache_tree_delete_batch(mapping, pvec);
+ page_cache_delete_batch(mapping, pvec);
xa_unlock_irqrestore(&mapping->pages, flags);
for (i = 0; i < pagevec_count(pvec); i++)
--
2.16.1
From: Matthew Wilcox <[email protected]>
This hopefully temporary function is useful for users who have not yet
been converted to multi-index entries.
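As a hedged sketch (not part of this patch; the 'xa', 'first' and 'last'
names are invented for the example, and xas_error()/xas_nomem() handling
is omitted), a caller that still stores one entry per index might use it
like this:
    XA_STATE(xas, &xa, first);
    unsigned long index;

    xas_lock_irq(&xas);
    /* Allocate every node needed for the range [first, last] up front. */
    xas_create_range(&xas, last);
    for (index = first; index <= last; index++) {
            /* The slots should already exist, so no allocation here. */
            xas_store(&xas, xa_mk_value(index));
            xas_next(&xas);
    }
    xas_unlock_irq(&xas);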
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 2 ++
lib/xarray.c | 22 ++++++++++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index c8a0ddc1b3df..387be18d05ba 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -744,6 +744,8 @@ void xas_init_tags(const struct xa_state *);
bool xas_nomem(struct xa_state *, gfp_t);
void xas_pause(struct xa_state *);
+void xas_create_range(struct xa_state *, unsigned long max);
+
/**
* xas_reload() - Refetch an entry from the xarray.
* @xas: XArray operation state.
diff --git a/lib/xarray.c b/lib/xarray.c
index 7cf195b6e740..1d94ecc2dca3 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -612,6 +612,28 @@ void *xas_create(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_create);
+/**
+ * xas_create_range() - Ensure that stores to this range will succeed
+ * @xas: XArray operation state.
+ * @max: The highest index to create a slot for.
+ *
+ * Creates all of the slots in the range between the current position of
+ * @xas and @max. This is for the benefit of users who have not yet been
+ * converted to multi-index entries.
+ *
+ * The implementation is naive.
+ */
+void xas_create_range(struct xa_state *xas, unsigned long max)
+{
+ XA_STATE(tmp, xas->xa, xas->xa_index);
+
+ do {
+ xas_create(&tmp);
+ xas_set(&tmp, tmp.xa_index + XA_CHUNK_SIZE);
+ } while (tmp.xa_index < max);
+}
+EXPORT_SYMBOL_GPL(xas_create_range);
+
static void store_siblings(struct xa_state *xas, void *entry, void *curr,
int *countp, int *valuesp)
{
--
2.16.1
From: Matthew Wilcox <[email protected]>
Add myself as XArray and IDR maintainer.
Signed-off-by: Matthew Wilcox <[email protected]>
---
MAINTAINERS | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 3bdc260e36b7..d30f592091ce 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15171,6 +15171,18 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/vdso
S: Maintained
F: arch/x86/entry/vdso/
+XARRAY
+M: Matthew Wilcox <[email protected]>
+M: Matthew Wilcox <[email protected]>
+L: [email protected]
+S: Supported
+F: Documentation/core-api/xarray.rst
+F: lib/idr.c
+F: lib/xarray.c
+F: include/linux/idr.h
+F: include/linux/xarray.h
+F: tools/testing/radix-tree
+
XC2028/3028 TUNER DRIVER
M: Mauro Carvalho Chehab <[email protected]>
M: Mauro Carvalho Chehab <[email protected]>
--
2.16.1
From: Matthew Wilcox <[email protected]>
Instead of calling find_get_pages_range() and then dropping the reference
it takes on the page it finds, just use xa_find() to look for a page in
the right range.
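For reference, xa_find() takes a pointer to the index so it can report
where the entry was found; a hedged sketch of the calling convention (the
'xa', 'start' and 'last' names are invented for the example):
    unsigned long index = start;
    void *entry = xa_find(&xa, &index, last, XA_PRESENT);

    if (entry)
            pr_debug("first present entry in range is at %lu\n", index);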
Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 3735ea6e3d19..a02d69a957e4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -460,18 +460,11 @@ bool filemap_range_has_page(struct address_space *mapping,
{
pgoff_t index = start_byte >> PAGE_SHIFT;
pgoff_t end = end_byte >> PAGE_SHIFT;
- struct page *page;
if (end_byte < start_byte)
return false;
- if (mapping->nrpages == 0)
- return false;
-
- if (!find_get_pages_range(mapping, &index, end, 1, &page))
- return false;
- put_page(page);
- return true;
+ return xa_find(&mapping->pages, &index, end, XA_PRESENT);
}
EXPORT_SYMBOL(filemap_range_has_page);
--
2.16.1
From: Matthew Wilcox <[email protected]>
This function frees all the internal memory allocated to the xarray
and reinitialises it to be empty.
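A hedged sketch of the expected calling pattern (the 'my_xa' array and
'struct my_object' type are invented for the example): free your own
objects first, then let xa_destroy() release the internal nodes.
    XA_STATE(xas, &my_xa, 0);
    struct my_object *obj;

    /* Assumes no concurrent users; rcu_read_lock() satisfies the
     * RCU-checked accessors while we walk the entries. */
    rcu_read_lock();
    xas_for_each(&xas, obj, ULONG_MAX)
            kfree(obj);
    rcu_read_unlock();

    xa_destroy(&my_xa);     /* frees only the XArray's own nodes */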
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 1 +
lib/xarray.c | 28 ++++++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 85dd909586f0..96773f83ae03 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -229,6 +229,7 @@ void *xa_find_after(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
unsigned long max, unsigned int n, xa_tag_t);
+void xa_destroy(struct xarray *);
/**
* xa_init() - Initialise an empty XArray.
diff --git a/lib/xarray.c b/lib/xarray.c
index 124bbfec66ae..080ed0fc3feb 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1468,6 +1468,34 @@ unsigned int xa_extract(struct xarray *xa, void **dst, unsigned long start,
}
EXPORT_SYMBOL(xa_extract);
+/**
+ * xa_destroy() - Free all internal data structures.
+ * @xa: XArray.
+ *
+ * After calling this function, the XArray is empty and has freed all memory
+ * allocated for its internal data structures. You are responsible for
+ * freeing the objects referenced by the XArray.
+ *
+ * Context: Any context. Takes and releases the xa_lock, interrupt-safe.
+ */
+void xa_destroy(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ unsigned long flags;
+ void *entry;
+
+ xas.xa_node = NULL;
+ xas_lock_irqsave(&xas, flags);
+ entry = xa_head_locked(xa);
+ RCU_INIT_POINTER(xa->xa_head, NULL);
+ xas_init_tags(&xas);
+ /* lockdep checks we're still holding the lock in xas_free_nodes() */
+ if (xa_is_node(entry))
+ xas_free_nodes(&xas, xa_to_node(entry));
+ xas_unlock_irqrestore(&xas, flags);
+}
+EXPORT_SYMBOL(xa_destroy);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
--
2.16.1
From: Matthew Wilcox <[email protected]>
btrfs has its own custom function for determining whether the page cache
has any pages in a particular range. Move this functionality to the
page cache, and call it from btrfs.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/btrfs_inode.h | 7 ++++-
fs/btrfs/inode.c | 70 -------------------------------------------------
include/linux/pagemap.h | 2 ++
mm/filemap.c | 26 ++++++++++++++++++
4 files changed, 34 insertions(+), 71 deletions(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 63f0ccc92a71..a48bd6e0a0bb 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -365,6 +365,11 @@ static inline void btrfs_print_data_csum_error(struct btrfs_inode *inode,
logical_start, csum, csum_expected, mirror_num);
}
-bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end);
+static inline bool btrfs_page_exists_in_range(struct inode *inode,
+ loff_t start, loff_t end)
+{
+ return page_cache_range_empty(inode->i_mapping, start >> PAGE_SHIFT,
+ end >> PAGE_SHIFT);
+}
#endif
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index fbb4291fcc57..89ce4d90e7f6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7425,76 +7425,6 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
return ret;
}
-bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
-{
- struct radix_tree_root *root = &inode->i_mapping->pages;
- bool found = false;
- void **pagep = NULL;
- struct page *page = NULL;
- unsigned long start_idx;
- unsigned long end_idx;
-
- start_idx = start >> PAGE_SHIFT;
-
- /*
- * end is the last byte in the last page. end == start is legal
- */
- end_idx = end >> PAGE_SHIFT;
-
- rcu_read_lock();
-
- /* Most of the code in this while loop is lifted from
- * find_get_page. It's been modified to begin searching from a
- * page and return just the first page found in that range. If the
- * found idx is less than or equal to the end idx then we know that
- * a page exists. If no pages are found or if those pages are
- * outside of the range then we're fine (yay!) */
- while (page == NULL &&
- radix_tree_gang_lookup_slot(root, &pagep, NULL, start_idx, 1)) {
- page = radix_tree_deref_slot(pagep);
- if (unlikely(!page))
- break;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- page = NULL;
- continue;
- }
- /*
- * Otherwise, shmem/tmpfs must be storing a swap entry
- * here so return it without attempting to raise page
- * count.
- */
- page = NULL;
- break; /* TODO: Is this relevant for this use case? */
- }
-
- if (!page_cache_get_speculative(page)) {
- page = NULL;
- continue;
- }
-
- /*
- * Has the page moved?
- * This is part of the lockless pagecache protocol. See
- * include/linux/pagemap.h for details.
- */
- if (unlikely(page != *pagep)) {
- put_page(page);
- page = NULL;
- }
- }
-
- if (page) {
- if (page->index <= end_idx)
- found = true;
- put_page(page);
- }
-
- rcu_read_unlock();
- return found;
-}
-
static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend,
struct extent_state **cached_state, int writing)
{
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0db127c3ccac..34d4fa3ad1c5 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -245,6 +245,8 @@ pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
+bool page_cache_range_empty(struct address_space *mapping,
+ pgoff_t index, pgoff_t max);
#define FGP_ACCESSED 0x00000001
#define FGP_LOCK 0x00000002
diff --git a/mm/filemap.c b/mm/filemap.c
index ce74aba33c85..778a551f6713 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1397,6 +1397,32 @@ pgoff_t page_cache_prev_gap(struct address_space *mapping,
}
EXPORT_SYMBOL(page_cache_prev_gap);
+bool page_cache_range_empty(struct address_space *mapping, pgoff_t index,
+ pgoff_t max)
+{
+ struct page *page;
+ XA_STATE(xas, &mapping->pages, index);
+
+ rcu_read_lock();
+ do {
+ page = xas_find(&xas, max);
+ if (xas_retry(&xas, page))
+ continue;
+ /* Shadow entries don't count */
+ if (xa_is_value(page))
+ continue;
+ /*
+ * We don't need to try to pin this page; we're about to
+ * release the RCU lock anyway. It is enough to know that
+ * there was a page here recently.
+ */
+ } while (0);
+ rcu_read_unlock();
+
+ return page != NULL;
+}
+EXPORT_SYMBOL_GPL(page_cache_range_empty);
+
/**
* find_get_entry - find and get a page cache entry
* @mapping: the address_space to search
--
2.16.1
From: Matthew Wilcox <[email protected]>
The page cache offers the ability to search for a miss in the previous or
next N locations. Rather than teach the XArray about the page cache's
definition of a miss, use xas_prev() and xas_next() to search the page
cache directly. This should be more efficient as it does not have to
restart the lookup from the top of the tree for each index.
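For readers unfamiliar with the xas API, the conversion boils down to the
following hedged sketch (simplified from the new page_cache_next_gap()
below; callers hold rcu_read_lock() as before):
    XA_STATE(xas, &mapping->pages, index);

    while (max_scan--) {
            void *entry = xas_next(&xas);
            /* A missing or shadow entry counts as a gap. */
            if (!entry || xa_is_value(entry))
                    break;
            if (xas.xa_index == 0)
                    break;          /* wrapped past ULONG_MAX */
    }
    return xas.xa_index;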
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 2 +-
include/linux/pagemap.h | 4 +-
mm/filemap.c | 110 ++++++++++++++++++---------------------
mm/readahead.c | 4 +-
4 files changed, 55 insertions(+), 65 deletions(-)
diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 7cb5c38c19e4..961901685007 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -895,7 +895,7 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
end = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
if (end != inode->i_mapping->nrpages) {
rcu_read_lock();
- end = page_cache_next_hole(mapping, idx + 1, ULONG_MAX);
+ end = page_cache_next_gap(mapping, idx + 1, ULONG_MAX);
rcu_read_unlock();
}
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 80a6149152d4..0db127c3ccac 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -241,9 +241,9 @@ static inline gfp_t readahead_gfp_mask(struct address_space *x)
typedef int filler_t(void *, struct page *);
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
#define FGP_ACCESSED 0x00000001
diff --git a/mm/filemap.c b/mm/filemap.c
index 92f344f0f9ce..ce74aba33c85 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1326,86 +1326,76 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
}
/**
- * page_cache_next_hole - find the next hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the
- * lowest indexed hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'return - index >=
- * max_scan' will be true). In rare cases of index wrap-around, 0 will
- * be returned.
- *
- * page_cache_next_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 5, then subsequently a hole is created at
- * index 10, page_cache_next_hole covering both indexes may return 10
- * if called under rcu_read_lock.
+ * page_cache_next_gap() - Find the next gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [index, min(index + max_scan - 1, ULONG_MAX)] for the
+ * gap with the lowest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 5, then subsequently a gap is
+ * created at index 10, page_cache_next_gap covering both indices may
+ * return 10 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'return - index >= max_scan' will be true).
+ * In the rare case of index wrap-around, 0 will be returned.
*/
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan)
{
- unsigned long i;
+ XA_STATE(xas, &mapping->pages, index);
- for (i = 0; i < max_scan; i++) {
- struct page *page;
-
- page = radix_tree_lookup(&mapping->pages, index);
- if (!page || xa_is_value(page))
+ while (max_scan--) {
+ void *entry = xas_next(&xas);
+ if (!entry || xa_is_value(entry))
break;
- index++;
- if (index == 0)
+ if (xas.xa_index == 0)
break;
}
- return index;
+ return xas.xa_index;
}
-EXPORT_SYMBOL(page_cache_next_hole);
+EXPORT_SYMBOL(page_cache_next_gap);
/**
- * page_cache_prev_hole - find the prev hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search backwards in the range [max(index-max_scan+1, 0), index] for
- * the first hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'index - return >=
- * max_scan' will be true). In rare cases of wrap-around, ULONG_MAX
- * will be returned.
- *
- * page_cache_prev_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 10, then subsequently a hole is created at
- * index 5, page_cache_prev_hole covering both indexes may return 5 if
- * called under rcu_read_lock.
+ * page_cache_prev_gap() - Find the previous gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [max(index - max_scan + 1, 0), index] for the
+ * gap with the highest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 10, then subsequently a gap is
+ * created at index 5, page_cache_prev_gap() covering both indices may
+ * return 5 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'index - return >= max_scan' will be true).
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
*/
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan)
{
- unsigned long i;
-
- for (i = 0; i < max_scan; i++) {
- struct page *page;
+ XA_STATE(xas, &mapping->pages, index);
- page = radix_tree_lookup(&mapping->pages, index);
- if (!page || xa_is_value(page))
+ while (max_scan--) {
+ void *entry = xas_prev(&xas);
+ if (!entry || xa_is_value(entry))
break;
- index--;
- if (index == ULONG_MAX)
+ if (xas.xa_index == ULONG_MAX)
break;
}
- return index;
+ return xas.xa_index;
}
-EXPORT_SYMBOL(page_cache_prev_hole);
+EXPORT_SYMBOL(page_cache_prev_gap);
/**
* find_get_entry - find and get a page cache entry
diff --git a/mm/readahead.c b/mm/readahead.c
index 4851f002605f..f64b31b3a84a 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -329,7 +329,7 @@ static pgoff_t count_history_pages(struct address_space *mapping,
pgoff_t head;
rcu_read_lock();
- head = page_cache_prev_hole(mapping, offset - 1, max);
+ head = page_cache_prev_gap(mapping, offset - 1, max);
rcu_read_unlock();
return offset - 1 - head;
@@ -417,7 +417,7 @@ ondemand_readahead(struct address_space *mapping,
pgoff_t start;
rcu_read_lock();
- start = page_cache_next_hole(mapping, offset + 1, max_pages);
+ start = page_cache_next_gap(mapping, offset + 1, max_pages);
rcu_read_unlock();
if (!start || start - offset > max_pages)
--
2.16.1
From: Matthew Wilcox <[email protected]>
These two functions move the xa_state's index by one position and adjust
the rest of the iterator state to match it. This is more efficient than
calling xas_set() as it keeps the iterator at the leaves of the tree
instead of walking it down from the root for each index.
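A hedged usage sketch, mirroring the test cases added below ('xa',
'start' and 'last' are invented names): once the state has been walked
with xas_load(), each step is a cheap move within the current node.
    XA_STATE(xas, &xa, start);
    void *entry;

    rcu_read_lock();
    /* Visit every slot in [start, last), present or not. */
    for (entry = xas_load(&xas); xas.xa_index < last;
         entry = xas_next(&xas)) {
            if (entry)
                    pr_debug("index %lu: %p\n", xas.xa_index, entry);
    }
    rcu_read_unlock();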
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 67 +++++++++
lib/xarray.c | 74 ++++++++++
tools/testing/radix-tree/xarray-test.c | 261 ++++++++++++++++++++++++++++++++-
3 files changed, 401 insertions(+), 1 deletion(-)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 96773f83ae03..c8a0ddc1b3df 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -683,6 +683,12 @@ static inline bool xas_not_node(struct xa_node *node)
return ((unsigned long)node & 3) || !node;
}
+/* True if the node represents RESTART or an error */
+static inline bool xas_frozen(struct xa_node *node)
+{
+ return (unsigned long)node & 2;
+}
+
/* True if the node represents head-of-tree, RESTART or BOUNDS */
static inline bool xas_top(struct xa_node *node)
{
@@ -940,6 +946,67 @@ enum {
for (entry = xas_find_tag(xas, max, tag); entry; \
entry = xas_next_tag(xas, max, tag))
+void *__xas_next(struct xa_state *);
+void *__xas_prev(struct xa_state *);
+
+/**
+ * xas_prev() - Move iterator to previous index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * subtracted from the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index 0, this function wraps
+ * around to %ULONG_MAX.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_prev(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == 0))
+ return __xas_prev(xas);
+
+ xas->xa_index--;
+ xas->xa_offset--;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
+/**
+ * xas_next() - Move state to next index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * added to the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index %ULONG_MAX, this function wraps
+ * around to 0.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_next(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == XA_CHUNK_MASK))
+ return __xas_next(xas);
+
+ xas->xa_index++;
+ xas->xa_offset++;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
void xas_destroy(struct xa_state *);
diff --git a/lib/xarray.c b/lib/xarray.c
index 080ed0fc3feb..7cf195b6e740 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -838,6 +838,80 @@ void xas_pause(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_pause);
+/*
+ * __xas_prev() - Find the previous entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_prev() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_prev(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index--;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset--;
+
+ while (xas->xa_offset == 255) {
+ xas->xa_offset = xas->xa_node->offset - 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_prev);
+
+/*
+ * __xas_next() - Find the next entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_next() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_next(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index++;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset++;
+
+ while (xas->xa_offset == XA_CHUNK_SIZE) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_next);
+
/**
* xas_find() - Find the next present entry in the XArray.
* @xas: XArray operation state.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 26b25be81656..ce20c22ad55d 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -49,6 +49,147 @@ void check_xa_tag(struct xarray *xa)
assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
}
+/* Check that putting the xas into an error state works correctly */
+void check_xas_error(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+
+ assert(xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL) == 0);
+ assert(xa_load(xa, 1) == xa_mk_value(1));
+
+ assert(xas_error(&xas) == 0);
+
+ xas_set_err(&xas, -ENOTTY);
+ assert(xas_error(&xas) == -ENOTTY);
+
+ xas_set_err(&xas, -ENOSPC);
+ assert(xas_error(&xas) == -ENOSPC);
+
+ xas_set_err(&xas, -ENOMEM);
+ assert(xas_error(&xas) == -ENOMEM);
+
+ assert(xas_load(&xas) == NULL);
+ assert(xas_store(&xas, &xas) == NULL);
+ assert(xas_load(&xas) == NULL);
+
+ assert(xas.xa_index == 0);
+ assert(xas_next(&xas) == NULL);
+ assert(xas.xa_index == 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == 0);
+
+ xas_retry(&xas, XA_RETRY_ENTRY);
+ assert(xas_error(&xas) == 0);
+
+ assert(xas_find(&xas, ULONG_MAX) == xa_mk_value(1));
+ assert(xas.xa_index == 1);
+ assert(xas_error(&xas) == 0);
+
+ assert(xas_find(&xas, ULONG_MAX) == NULL);
+ assert(xas.xa_index > 1);
+ assert(xas_error(&xas) == 0);
+ assert(xas.xa_node == XAS_BOUNDS);
+}
+
+void check_xas_pause(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *entry;
+ unsigned int seen;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_set_tag(xa, 0, XA_TAG_0);
+
+ seen = 0;
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++) {
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ }
+ }
+ rcu_read_unlock();
+ /* We don't see an entry that was added after we started */
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ seen = 0;
+ xas_set(&xas, 0);
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+}
+
void check_xas_retry(struct xarray *xa)
{
XA_STATE(xas, xa, 0);
@@ -61,7 +202,7 @@ void check_xas_retry(struct xarray *xa)
assert(xa_is_retry(xas_reload(&xas)));
assert(!xas_retry(&xas, NULL));
assert(!xas_retry(&xas, xa_mk_value(0)));
- assert(xas_retry(&xas, XA_RETRY_ENTRY));
+ xas_reset(&xas);
assert(xas.xa_node == XAS_RESTART);
assert(xas_next_entry(&xas, ULONG_MAX) == xa_mk_value(0));
assert(xas.xa_node == NULL);
@@ -257,9 +398,108 @@ void check_xas_delete(struct xarray *xa)
}
}
+void check_move_small(struct xarray *xa, unsigned long idx)
+{
+ XA_STATE(xas, xa, 0);
+ unsigned long i;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, idx, xa_mk_value(idx), GFP_KERNEL);
+
+ for (i = 0; i < idx * 4; i++) {
+ void *entry = xas_next(&xas);
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ }
+ xas_next(&xas);
+ assert(xas.xa_index == i);
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ } while (i > 0);
+
+ xas_set(&xas, ULONG_MAX);
+ assert(xas_next(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+ assert(xas_next(&xas) == xa_mk_value(0));
+ assert(xas.xa_index == 0);
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+}
+
+void check_move(struct xarray *xa)
+{
+ XA_STATE(xas, xa, (1 << 16) - 1);
+ unsigned long i;
+
+ for (i = 0; i < (1 << 16); i++) {
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ }
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+
+ for (i = (1 << 8); i < (1 << 15); i++) {
+ xa_erase(xa, i);
+ }
+
+ i = xas.xa_index;
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
+ unsigned long i;
check_xa_err(&array);
item_kill_tree(&array);
@@ -267,9 +507,15 @@ void xarray_checks(void)
check_xa_tag(&array);
item_kill_tree(&array);
+ check_xas_error(&array);
+ item_kill_tree(&array);
+
check_xas_retry(&array);
item_kill_tree(&array);
+ check_xas_pause(&array);
+ item_kill_tree(&array);
+
check_xa_load(&array);
item_kill_tree(&array);
@@ -283,6 +529,19 @@ void xarray_checks(void)
check_find(&array);
check_xas_delete(&array);
item_kill_tree(&array);
+
+ for (i = 0; i < 16; i++) {
+ check_move_small(&array, 1UL << i);
+ item_kill_tree(&array);
+ }
+
+ for (i = 2; i < 16; i++) {
+ check_move_small(&array, (1UL << i) - 1);
+ item_kill_tree(&array);
+ }
+
+ check_move(&array);
+ item_kill_tree(&array);
}
int __weak main(void)
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a direct replacement for struct radix_tree_node. A couple of
struct members have changed name, so convert those. Use a #define so
that radix tree users continue to work without change.
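For orientation, a hedged helper (not part of the patch) showing the
renamed fields in use; it mirrors the workingset check converted below:
    /* ->nr_values was ->exceptional; ->array was ->root. */
    static bool node_holds_only_values(const struct xa_node *node)
    {
            return node->count && node->count == node->nr_values;
    }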
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/radix-tree.h | 29 +++------------------
include/linux/xarray.h | 24 ++++++++++++++++++
lib/radix-tree.c | 48 +++++++++++++++++------------------
mm/workingset.c | 16 ++++++------
tools/testing/radix-tree/multiorder.c | 30 +++++++++++-----------
5 files changed, 74 insertions(+), 73 deletions(-)
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index c8a33e9e9a3c..f64beb9ba175 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -32,6 +32,7 @@
/* Keep unconverted code working */
#define radix_tree_root xarray
+#define radix_tree_node xa_node
/*
* The bottom two bits of the slot determine how the remaining bits in the
@@ -60,41 +61,17 @@ static inline bool radix_tree_is_internal_node(void *ptr)
/*** radix-tree API starts here ***/
-#define RADIX_TREE_MAX_TAGS 3
-
#define RADIX_TREE_MAP_SHIFT XA_CHUNK_SHIFT
#define RADIX_TREE_MAP_SIZE (1UL << RADIX_TREE_MAP_SHIFT)
#define RADIX_TREE_MAP_MASK (RADIX_TREE_MAP_SIZE-1)
-#define RADIX_TREE_TAG_LONGS \
- ((RADIX_TREE_MAP_SIZE + BITS_PER_LONG - 1) / BITS_PER_LONG)
+#define RADIX_TREE_MAX_TAGS XA_MAX_TAGS
+#define RADIX_TREE_TAG_LONGS XA_TAG_LONGS
#define RADIX_TREE_INDEX_BITS (8 /* CHAR_BIT */ * sizeof(unsigned long))
#define RADIX_TREE_MAX_PATH (DIV_ROUND_UP(RADIX_TREE_INDEX_BITS, \
RADIX_TREE_MAP_SHIFT))
-/*
- * @count is the count of every non-NULL element in the ->slots array
- * whether that is a data entry, a retry entry, a user pointer,
- * a sibling entry or a pointer to the next level of the tree.
- * @exceptional is the count of every element in ->slots which is
- * either a data entry or a sibling entry for data.
- */
-struct radix_tree_node {
- unsigned char shift; /* Bits remaining in each slot */
- unsigned char offset; /* Slot offset in parent */
- unsigned char count; /* Total entry count */
- unsigned char exceptional; /* Exceptional entry count */
- struct radix_tree_node *parent; /* Used when ascending tree */
- struct radix_tree_root *root; /* The tree we belong to */
- union {
- struct list_head private_list; /* For tree user */
- struct rcu_head rcu_head; /* Used when freeing node */
- };
- void __rcu *slots[RADIX_TREE_MAP_SIZE];
- unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
-};
-
/* The IDR tag is stored in the low bits of xa_flags */
#define ROOT_IS_IDR ((__force gfp_t)4)
/* The top bits of xa_flags are used to store the root tags */
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 9b05b907062b..b51f354dfbf0 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -195,6 +195,30 @@ static inline void xa_init(struct xarray *xa)
#endif
#define XA_CHUNK_SIZE (1UL << XA_CHUNK_SHIFT)
#define XA_CHUNK_MASK (XA_CHUNK_SIZE - 1)
+#define XA_MAX_TAGS 3
+#define XA_TAG_LONGS DIV_ROUND_UP(XA_CHUNK_SIZE, BITS_PER_LONG)
+
+/*
+ * @count is the count of every non-NULL element in the ->slots array
+ * whether that is a value entry, a retry entry, a user pointer,
+ * a sibling entry or a pointer to the next level of the tree.
+ * @nr_values is the count of every element in ->slots which is
+ * either a value entry or a sibling entry to a value entry.
+ */
+struct xa_node {
+ unsigned char shift; /* Bits remaining in each slot */
+ unsigned char offset; /* Slot offset in parent */
+ unsigned char count; /* Total entry count */
+ unsigned char nr_values; /* Value entry count */
+ struct xa_node __rcu *parent; /* NULL at top of tree */
+ struct xarray *array; /* The array we belong to */
+ union {
+ struct list_head private_list; /* For tree user */
+ struct rcu_head rcu_head; /* Used when freeing node */
+ };
+ void __rcu *slots[XA_CHUNK_SIZE];
+ unsigned long tags[XA_MAX_TAGS][XA_TAG_LONGS];
+};
/* Private */
static inline bool xa_is_node(const void *entry)
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index ea0b57f35dd6..ed3e8d641cba 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -260,11 +260,11 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
{
unsigned long i;
- pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d exceptional %d\n",
+ pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d nr_values %d\n",
node, node->offset, index, index | node_maxindex(node),
node->parent,
node->tags[0][0], node->tags[1][0], node->tags[2][0],
- node->shift, node->count, node->exceptional);
+ node->shift, node->count, node->nr_values);
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
unsigned long first = index | (i << node->shift);
@@ -354,7 +354,7 @@ static struct radix_tree_node *
radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
struct radix_tree_root *root,
unsigned int shift, unsigned int offset,
- unsigned int count, unsigned int exceptional)
+ unsigned int count, unsigned int nr_values)
{
struct radix_tree_node *ret = NULL;
@@ -401,9 +401,9 @@ radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
ret->shift = shift;
ret->offset = offset;
ret->count = count;
- ret->exceptional = exceptional;
+ ret->nr_values = nr_values;
ret->parent = parent;
- ret->root = root;
+ ret->array = root;
}
return ret;
}
@@ -633,8 +633,8 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
} else if (xa_is_value(entry)) {
- /* Moving an exceptional root->xa_head to a node */
- node->exceptional = 1;
+ /* Moving a value entry root->xa_head to a node */
+ node->nr_values = 1;
}
/*
* entry was already in the radix tree, so we do not need
@@ -920,12 +920,12 @@ static inline int insert_entries(struct radix_tree_node *node,
if (xa_is_node(old))
radix_tree_free_nodes(old);
if (xa_is_value(old))
- node->exceptional--;
+ node->nr_values--;
}
if (node) {
node->count += n;
if (xa_is_value(item))
- node->exceptional += n;
+ node->nr_values += n;
}
return n;
}
@@ -939,7 +939,7 @@ static inline int insert_entries(struct radix_tree_node *node,
if (node) {
node->count++;
if (xa_is_value(item))
- node->exceptional++;
+ node->nr_values++;
}
return 1;
}
@@ -1073,7 +1073,7 @@ void *radix_tree_lookup(const struct radix_tree_root *root, unsigned long index)
EXPORT_SYMBOL(radix_tree_lookup);
static inline void replace_sibling_entries(struct radix_tree_node *node,
- void __rcu **slot, int count, int exceptional)
+ void __rcu **slot, int count, int values)
{
#ifdef CONFIG_RADIX_TREE_MULTIORDER
unsigned offset = get_slot_offset(node, slot);
@@ -1086,21 +1086,21 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
node->slots[offset] = NULL;
node->count--;
}
- node->exceptional += exceptional;
+ node->nr_values += values;
}
#endif
}
static void replace_slot(void __rcu **slot, void *item,
- struct radix_tree_node *node, int count, int exceptional)
+ struct radix_tree_node *node, int count, int values)
{
if (WARN_ON_ONCE(radix_tree_is_internal_node(item)))
return;
- if (node && (count || exceptional)) {
+ if (node && (count || values)) {
node->count += count;
- node->exceptional += exceptional;
- replace_sibling_entries(node, slot, count, exceptional);
+ node->nr_values += values;
+ replace_sibling_entries(node, slot, count, values);
}
rcu_assign_pointer(*slot, item);
@@ -1154,17 +1154,17 @@ void __radix_tree_replace(struct radix_tree_root *root,
radix_tree_update_node_t update_node)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = !!xa_is_value(item) - !!xa_is_value(old);
+ int values = !!xa_is_value(item) - !!xa_is_value(old);
int count = calculate_count(root, node, slot, item, old);
/*
- * This function supports replacing exceptional entries and
+ * This function supports replacing value entries and
* deleting entries, but that needs accounting against the
* node unless the slot is root->xa_head.
*/
WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->xa_head) &&
- (count || exceptional));
- replace_slot(slot, item, node, count, exceptional);
+ (count || values));
+ replace_slot(slot, item, node, count, values);
if (!node)
return;
@@ -1186,7 +1186,7 @@ void __radix_tree_replace(struct radix_tree_root *root,
* across slot lookup and replacement.
*
* NOTE: This cannot be used to switch between non-entries (empty slots),
- * regular entries, and exceptional entries, as that requires accounting
+ * regular entries, and value entries, as that requires accounting
* inside the radix tree node. When switching from one type of entry or
* deleting, use __radix_tree_lookup() and __radix_tree_replace() or
* radix_tree_iter_replace().
@@ -1294,7 +1294,7 @@ int radix_tree_split(struct radix_tree_root *root, unsigned long index,
rcu_assign_pointer(parent->slots[end], RADIX_TREE_RETRY);
}
rcu_assign_pointer(parent->slots[offset], RADIX_TREE_RETRY);
- parent->exceptional -= (end - offset);
+ parent->nr_values -= (end - offset);
if (order == parent->shift)
return 0;
@@ -1954,7 +1954,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
struct radix_tree_node *node, void __rcu **slot)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = xa_is_value(old) ? -1 : 0;
+ int values = xa_is_value(old) ? -1 : 0;
unsigned offset = get_slot_offset(node, slot);
int tag;
@@ -1964,7 +1964,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
node_tag_clear(root, node, tag, offset);
- replace_slot(slot, NULL, node, -1, exceptional);
+ replace_slot(slot, NULL, node, -1, values);
return node && delete_node(root, node, NULL);
}
diff --git a/mm/workingset.c b/mm/workingset.c
index 3afeb84720f4..91b6e16ad4c1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -348,7 +348,7 @@ void workingset_update_node(struct radix_tree_node *node)
* already where they should be. The list_empty() test is safe
* as node->private_list is protected by mapping->pages.xa_lock.
*/
- if (node->count && node->count == node->exceptional) {
+ if (node->count && node->count == node->nr_values) {
if (list_empty(&node->private_list))
list_lru_add(&shadow_nodes, &node->private_list);
} else {
@@ -427,8 +427,8 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* to reclaim, take the node off-LRU, and drop the lru_lock.
*/
- node = container_of(item, struct radix_tree_node, private_list);
- mapping = container_of(node->root, struct address_space, pages);
+ node = container_of(item, struct xa_node, private_list);
+ mapping = container_of(node->array, struct address_space, pages);
/* Coming from the list, invert the lock order */
if (!xa_trylock(&mapping->pages)) {
@@ -445,25 +445,25 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* no pages, so we expect to be able to remove them all and
* delete and free the empty node afterwards.
*/
- if (WARN_ON_ONCE(!node->exceptional))
+ if (WARN_ON_ONCE(!node->nr_values))
goto out_invalid;
- if (WARN_ON_ONCE(node->count != node->exceptional))
+ if (WARN_ON_ONCE(node->count != node->nr_values))
goto out_invalid;
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
if (node->slots[i]) {
if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
goto out_invalid;
- if (WARN_ON_ONCE(!node->exceptional))
+ if (WARN_ON_ONCE(!node->nr_values))
goto out_invalid;
if (WARN_ON_ONCE(!mapping->nrexceptional))
goto out_invalid;
node->slots[i] = NULL;
- node->exceptional--;
+ node->nr_values--;
node->count--;
mapping->nrexceptional--;
}
}
- if (WARN_ON_ONCE(node->exceptional))
+ if (WARN_ON_ONCE(node->nr_values))
goto out_invalid;
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
__radix_tree_delete_node(&mapping->pages, node,
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 24293a2fd82d..ed51edc008fd 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -392,7 +392,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
radix_tree_insert(&tree, 1 << order2, xa_mk_value(5));
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
assert(item2 == xa_mk_value(5));
- assert(node->exceptional == 1);
+ assert(node->nr_values == 1);
item2 = radix_tree_lookup(&tree, 0);
free(item2);
@@ -400,7 +400,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
radix_tree_join(&tree, 0, order1, item1);
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
assert(item2 == item1);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
item_kill_tree(&tree);
}
@@ -408,7 +408,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
* This test revealed an accounting bug for inline data entries at one point.
* Nodes were being freed back into the pool with an elevated exception count
* by radix_tree_join() and then radix_tree_split() was failing to zero the
- * count of exceptional entries.
+ * count of value entries.
*/
static void multiorder_join3(unsigned int order)
{
@@ -432,7 +432,7 @@ static void multiorder_join3(unsigned int order)
}
__radix_tree_lookup(&tree, 0, &node, NULL);
- assert(node->exceptional == node->count);
+ assert(node->nr_values == node->count);
item_kill_tree(&tree);
}
@@ -519,7 +519,7 @@ static void __multiorder_split2(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);
radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -529,7 +529,7 @@ static void __multiorder_split2(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item != xa_mk_value(5));
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
item_kill_tree(&tree);
}
@@ -546,7 +546,7 @@ static void __multiorder_split3(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);
radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -555,7 +555,7 @@ static void __multiorder_split3(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(7));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);
item_kill_tree(&tree);
@@ -563,7 +563,7 @@ static void __multiorder_split3(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);
radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -576,13 +576,13 @@ static void __multiorder_split3(int old_order, int new_order)
item = __radix_tree_lookup(&tree, 1 << new_order, &node, NULL);
assert(item == xa_mk_value(7));
- assert(node->count == node->exceptional);
+ assert(node->count == node->nr_values);
do {
node = node->parent;
if (!node)
break;
assert(node->count == 1);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
} while (1);
item_kill_tree(&tree);
@@ -610,15 +610,15 @@ static void multiorder_account(void)
__radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 0, &node, NULL);
- assert(node->count == node->exceptional * 2);
+ assert(node->count == node->nr_values * 2);
radix_tree_delete(&tree, 1 << 5);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
__radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 1 << 5, &node, &slot);
- assert(node->count == node->exceptional * 2);
+ assert(node->count == node->nr_values * 2);
__radix_tree_replace(&tree, node, slot, NULL, NULL);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
item_kill_tree(&tree);
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a direct replacement for struct radix_tree_root. Some of the
struct members have changed name; convert those, and use a #define so
that radix_tree users continue to work without change.
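A hedged sketch of the two ways to initialise the new structure (the
names are invented for the example); both use the definitions added in
this patch:
    /* Compile-time initialisation for a file-scope array. */
    static DEFINE_XARRAY(static_array);

    /* Run-time initialisation for an embedded array. */
    struct my_context {
            struct xarray entries;
    };

    static void my_context_init(struct my_context *ctx)
    {
            xa_init(&ctx->entries);
    }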
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/radix-tree.h | 33 ++++----------
include/linux/xarray.h | 61 ++++++++++++++++++++++++++
lib/Makefile | 2 +-
lib/idr.c | 4 +-
lib/radix-tree.c | 75 ++++++++++++++++----------------
lib/xarray.c | 44 +++++++++++++++++++
tools/include/linux/spinlock.h | 1 +
tools/testing/radix-tree/.gitignore | 1 +
tools/testing/radix-tree/Makefile | 8 +++-
tools/testing/radix-tree/linux/bug.h | 1 +
tools/testing/radix-tree/linux/kconfig.h | 1 +
tools/testing/radix-tree/linux/xarray.h | 2 +
tools/testing/radix-tree/multiorder.c | 6 +--
tools/testing/radix-tree/test.c | 6 +--
14 files changed, 172 insertions(+), 73 deletions(-)
create mode 100644 lib/xarray.c
create mode 100644 tools/testing/radix-tree/linux/kconfig.h
create mode 100644 tools/testing/radix-tree/linux/xarray.h
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 87f35fe00e55..c8a33e9e9a3c 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -30,6 +30,9 @@
#include <linux/types.h>
#include <linux/xarray.h>
+/* Keep unconverted code working */
+#define radix_tree_root xarray
+
/*
* The bottom two bits of the slot determine how the remaining bits in the
* slot are interpreted:
@@ -59,10 +62,7 @@ static inline bool radix_tree_is_internal_node(void *ptr)
#define RADIX_TREE_MAX_TAGS 3
-#ifndef RADIX_TREE_MAP_SHIFT
-#define RADIX_TREE_MAP_SHIFT (CONFIG_BASE_SMALL ? 4 : 6)
-#endif
-
+#define RADIX_TREE_MAP_SHIFT XA_CHUNK_SHIFT
#define RADIX_TREE_MAP_SIZE (1UL << RADIX_TREE_MAP_SHIFT)
#define RADIX_TREE_MAP_MASK (RADIX_TREE_MAP_SIZE-1)
@@ -95,36 +95,21 @@ struct radix_tree_node {
unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
};
-/* The IDR tag is stored in the low bits of the GFP flags */
+/* The IDR tag is stored in the low bits of xa_flags */
#define ROOT_IS_IDR ((__force gfp_t)4)
-/* The top bits of gfp_mask are used to store the root tags */
+/* The top bits of xa_flags are used to store the root tags */
#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
-struct radix_tree_root {
- spinlock_t xa_lock;
- gfp_t gfp_mask;
- struct radix_tree_node __rcu *rnode;
-};
-
-#define RADIX_TREE_INIT(name, mask) { \
- .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
- .gfp_mask = (mask), \
- .rnode = NULL, \
-}
+#define RADIX_TREE_INIT(name, mask) XARRAY_INIT_FLAGS(name, mask)
#define RADIX_TREE(name, mask) \
struct radix_tree_root name = RADIX_TREE_INIT(name, mask)
-#define INIT_RADIX_TREE(root, mask) \
-do { \
- spin_lock_init(&(root)->xa_lock); \
- (root)->gfp_mask = (mask); \
- (root)->rnode = NULL; \
-} while (0)
+#define INIT_RADIX_TREE(root, mask) xa_init_flags(root, mask)
static inline bool radix_tree_empty(const struct radix_tree_root *root)
{
- return root->rnode == NULL;
+ return root->xa_head == NULL;
}
/**
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 283beb5aac58..9b05b907062b 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -10,6 +10,8 @@
*/
#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/kconfig.h>
#include <linux/spinlock.h>
#include <linux/types.h>
@@ -105,6 +107,65 @@ static inline bool xa_is_internal(const void *entry)
return ((unsigned long)entry & 3) == 2;
}
+/**
+ * struct xarray - The anchor of the XArray.
+ * @xa_lock: Lock that protects the contents of the XArray.
+ *
+ * To use the xarray, define it statically or embed it in your data structure.
+ * It is a very small data structure, so it does not usually make sense to
+ * allocate it separately and keep a pointer to it in your data structure.
+ *
+ * You may use the xa_lock to protect your own data structures as well.
+ */
+/*
+ * If all of the entries in the array are NULL, @xa_head is a NULL pointer.
+ * If the only non-NULL entry in the array is at index 0, @xa_head is that
+ * entry. If any other entry in the array is non-NULL, @xa_head points
+ * to an @xa_node.
+ */
+struct xarray {
+ spinlock_t xa_lock;
+/* private: The rest of the data structure is not to be used directly. */
+ gfp_t xa_flags;
+ void __rcu * xa_head;
+};
+
+#define XARRAY_INIT_FLAGS(name, flags) { \
+ .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
+ .xa_flags = flags, \
+ .xa_head = NULL, \
+}
+
+#define XARRAY_INIT(name) XARRAY_INIT_FLAGS(name, 0)
+
+/**
+ * DEFINE_XARRAY() - Define an XArray
+ * @name: A string that names your XArray
+ *
+ * This is intended for file scope definitions of XArrays. It declares
+ * and initialises an empty XArray with the chosen name. It is equivalent
+ * to calling xa_init() on the array, but it does the initialisation at
+ * compiletime instead of runtime.
+ */
+#define DEFINE_XARRAY(name) struct xarray name = XARRAY_INIT(name)
+#define DEFINE_XARRAY_FLAGS(name, flags) \
+ struct xarray name = XARRAY_INIT_FLAGS(name, flags)
+
+void xa_init_flags(struct xarray *, gfp_t flags);
+
+/**
+ * xa_init() - Initialise an empty XArray.
+ * @xa: XArray.
+ *
+ * An empty XArray is full of NULL entries.
+ *
+ * Context: Any context.
+ */
+static inline void xa_init(struct xarray *xa)
+{
+ xa_init_flags(xa, 0);
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
diff --git a/lib/Makefile b/lib/Makefile
index a90d4fcd748f..00bbe98d0da7 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -18,7 +18,7 @@ KCOV_INSTRUMENT_debugobjects.o := n
KCOV_INSTRUMENT_dynamic_debug.o := n
lib-y := ctype.o string.o vsprintf.o cmdline.o \
- rbtree.o radix-tree.o dump_stack.o timerqueue.o\
+ rbtree.o radix-tree.o dump_stack.o timerqueue.o xarray.o \
idr.o int_sqrt.o extable.o \
sha1.o chacha20.o irq_regs.o argv_split.o \
flex_proportions.o ratelimit.o show_mem.o \
diff --git a/lib/idr.c b/lib/idr.c
index 756e82c66a30..f338365549e8 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -42,8 +42,8 @@ int idr_alloc_u32(struct idr *idr, void *ptr, u32 *nextid,
if (WARN_ON_ONCE(radix_tree_is_internal_node(ptr)))
return -EINVAL;
- if (WARN_ON_ONCE(!(idr->idr_rt.gfp_mask & ROOT_IS_IDR)))
- idr->idr_rt.gfp_mask |= IDR_RT_MARKER;
+ if (WARN_ON_ONCE(!(idr->idr_rt.xa_flags & ROOT_IS_IDR)))
+ idr->idr_rt.xa_flags |= IDR_RT_MARKER;
id = (id < base) ? 0 : id - base;
radix_tree_iter_init(&iter, id);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 02863c54810d..ea0b57f35dd6 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -124,7 +124,7 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,
static inline gfp_t root_gfp_mask(const struct radix_tree_root *root)
{
- return root->gfp_mask & ((__GFP_BITS_MASK >> 4) << 4);
+ return root->xa_flags & ((__GFP_BITS_MASK >> 4) << 4);
}
static inline void tag_set(struct radix_tree_node *node, unsigned int tag,
@@ -147,32 +147,32 @@ static inline int tag_get(const struct radix_tree_node *node, unsigned int tag,
static inline void root_tag_set(struct radix_tree_root *root, unsigned tag)
{
- root->gfp_mask |= (__force gfp_t)(1 << (tag + ROOT_TAG_SHIFT));
+ root->xa_flags |= (__force gfp_t)(1 << (tag + ROOT_TAG_SHIFT));
}
static inline void root_tag_clear(struct radix_tree_root *root, unsigned tag)
{
- root->gfp_mask &= (__force gfp_t)~(1 << (tag + ROOT_TAG_SHIFT));
+ root->xa_flags &= (__force gfp_t)~(1 << (tag + ROOT_TAG_SHIFT));
}
static inline void root_tag_clear_all(struct radix_tree_root *root)
{
- root->gfp_mask &= (1 << ROOT_TAG_SHIFT) - 1;
+ root->xa_flags &= (__force gfp_t)((1 << ROOT_TAG_SHIFT) - 1);
}
static inline int root_tag_get(const struct radix_tree_root *root, unsigned tag)
{
- return (__force int)root->gfp_mask & (1 << (tag + ROOT_TAG_SHIFT));
+ return (__force int)root->xa_flags & (1 << (tag + ROOT_TAG_SHIFT));
}
static inline unsigned root_tags_get(const struct radix_tree_root *root)
{
- return (__force unsigned)root->gfp_mask >> ROOT_TAG_SHIFT;
+ return (__force unsigned)root->xa_flags >> ROOT_TAG_SHIFT;
}
static inline bool is_idr(const struct radix_tree_root *root)
{
- return !!(root->gfp_mask & ROOT_IS_IDR);
+ return !!(root->xa_flags & ROOT_IS_IDR);
}
/*
@@ -291,12 +291,12 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
/* For debug */
static void radix_tree_dump(struct radix_tree_root *root)
{
- pr_debug("radix root: %p rnode %p tags %x\n",
- root, root->rnode,
- root->gfp_mask >> ROOT_TAG_SHIFT);
- if (!radix_tree_is_internal_node(root->rnode))
+ pr_debug("radix root: %p xa_head %p tags %x\n",
+ root, root->xa_head,
+ root->xa_flags >> ROOT_TAG_SHIFT);
+ if (!radix_tree_is_internal_node(root->xa_head))
return;
- dump_node(entry_to_node(root->rnode), 0);
+ dump_node(entry_to_node(root->xa_head), 0);
}
static void dump_ida_node(void *entry, unsigned long index)
@@ -340,9 +340,9 @@ static void dump_ida_node(void *entry, unsigned long index)
static void ida_dump(struct ida *ida)
{
struct radix_tree_root *root = &ida->ida_rt;
- pr_debug("ida: %p node %p free %d\n", ida, root->rnode,
- root->gfp_mask >> ROOT_TAG_SHIFT);
- dump_ida_node(root->rnode, 0);
+ pr_debug("ida: %p node %p free %d\n", ida, root->xa_head,
+ root->xa_flags >> ROOT_TAG_SHIFT);
+ dump_ida_node(root->xa_head, 0);
}
#endif
@@ -576,7 +576,7 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order)
static unsigned radix_tree_load_root(const struct radix_tree_root *root,
struct radix_tree_node **nodep, unsigned long *maxindex)
{
- struct radix_tree_node *node = rcu_dereference_raw(root->rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(root->xa_head);
*nodep = node;
@@ -605,7 +605,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
while (index > shift_maxindex(maxshift))
maxshift += RADIX_TREE_MAP_SHIFT;
- entry = rcu_dereference_raw(root->rnode);
+ entry = rcu_dereference_raw(root->xa_head);
if (!entry && (!is_idr(root) || root_tag_get(root, IDR_FREE)))
goto out;
@@ -633,7 +633,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
} else if (xa_is_value(entry)) {
- /* Moving an exceptional root->rnode to a node */
+ /* Moving an exceptional root->xa_head to a node */
node->exceptional = 1;
}
/*
@@ -642,7 +642,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
*/
node->slots[0] = (void __rcu *)entry;
entry = node_to_entry(node);
- rcu_assign_pointer(root->rnode, entry);
+ rcu_assign_pointer(root->xa_head, entry);
shift += RADIX_TREE_MAP_SHIFT;
} while (shift <= maxshift);
out:
@@ -659,7 +659,7 @@ static inline bool radix_tree_shrink(struct radix_tree_root *root,
bool shrunk = false;
for (;;) {
- struct radix_tree_node *node = rcu_dereference_raw(root->rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(root->xa_head);
struct radix_tree_node *child;
if (!radix_tree_is_internal_node(node))
@@ -687,9 +687,9 @@ static inline bool radix_tree_shrink(struct radix_tree_root *root,
* moving the node from one part of the tree to another: if it
* was safe to dereference the old pointer to it
* (node->slots[0]), it will be safe to dereference the new
- * one (root->rnode) as far as dependent read barriers go.
+ * one (root->xa_head) as far as dependent read barriers go.
*/
- root->rnode = (void __rcu *)child;
+ root->xa_head = (void __rcu *)child;
if (is_idr(root) && !tag_get(node, IDR_FREE, 0))
root_tag_clear(root, IDR_FREE);
@@ -737,9 +737,8 @@ static bool delete_node(struct radix_tree_root *root,
if (node->count) {
if (node_to_entry(node) ==
- rcu_dereference_raw(root->rnode))
- deleted |= radix_tree_shrink(root,
- update_node);
+ rcu_dereference_raw(root->xa_head))
+ deleted |= radix_tree_shrink(root, update_node);
return deleted;
}
@@ -754,7 +753,7 @@ static bool delete_node(struct radix_tree_root *root,
*/
if (!is_idr(root))
root_tag_clear_all(root);
- root->rnode = NULL;
+ root->xa_head = NULL;
}
WARN_ON_ONCE(!list_empty(&node->private_list));
@@ -779,7 +778,7 @@ static bool delete_node(struct radix_tree_root *root,
* at position @index in the radix tree @root.
*
* Until there is more than one item in the tree, no nodes are
- * allocated and @root->rnode is used as a direct slot instead of
+ * allocated and @root->xa_head is used as a direct slot instead of
* pointing to a node, in which case *@nodep will be NULL.
*
* Returns -ENOMEM, or 0 for success.
@@ -789,7 +788,7 @@ int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
void __rcu ***slotp)
{
struct radix_tree_node *node = NULL, *child;
- void __rcu **slot = (void __rcu **)&root->rnode;
+ void __rcu **slot = (void __rcu **)&root->xa_head;
unsigned long maxindex;
unsigned int shift, offset = 0;
unsigned long max = index | ((1UL << order) - 1);
@@ -805,7 +804,7 @@ int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
if (error < 0)
return error;
shift = error;
- child = rcu_dereference_raw(root->rnode);
+ child = rcu_dereference_raw(root->xa_head);
}
while (shift > order) {
@@ -996,7 +995,7 @@ EXPORT_SYMBOL(__radix_tree_insert);
* tree @root.
*
* Until there is more than one item in the tree, no nodes are
- * allocated and @root->rnode is used as a direct slot instead of
+ * allocated and @root->xa_head is used as a direct slot instead of
* pointing to a node, in which case *@nodep will be NULL.
*/
void *__radix_tree_lookup(const struct radix_tree_root *root,
@@ -1009,7 +1008,7 @@ void *__radix_tree_lookup(const struct radix_tree_root *root,
restart:
parent = NULL;
- slot = (void __rcu **)&root->rnode;
+ slot = (void __rcu **)&root->xa_head;
radix_tree_load_root(root, &node, &maxindex);
if (index > maxindex)
return NULL;
@@ -1161,9 +1160,9 @@ void __radix_tree_replace(struct radix_tree_root *root,
/*
* This function supports replacing exceptional entries and
* deleting entries, but that needs accounting against the
- * node unless the slot is root->rnode.
+ * node unless the slot is root->xa_head.
*/
- WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->rnode) &&
+ WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->xa_head) &&
(count || exceptional));
replace_slot(slot, item, node, count, exceptional);
@@ -1715,7 +1714,7 @@ void __rcu **radix_tree_next_chunk(const struct radix_tree_root *root,
iter->tags = 1;
iter->node = NULL;
__set_iter_shift(iter, 0);
- return (void __rcu **)&root->rnode;
+ return (void __rcu **)&root->xa_head;
}
do {
@@ -2109,7 +2108,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
unsigned long max)
{
struct radix_tree_node *node = NULL, *child;
- void __rcu **slot = (void __rcu **)&root->rnode;
+ void __rcu **slot = (void __rcu **)&root->xa_head;
unsigned long maxindex, start = iter->next_index;
unsigned int shift, offset = 0;
@@ -2125,7 +2124,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
if (error < 0)
return ERR_PTR(error);
shift = error;
- child = rcu_dereference_raw(root->rnode);
+ child = rcu_dereference_raw(root->xa_head);
}
while (shift) {
@@ -2188,10 +2187,10 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
*/
void idr_destroy(struct idr *idr)
{
- struct radix_tree_node *node = rcu_dereference_raw(idr->idr_rt.rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(idr->idr_rt.xa_head);
if (radix_tree_is_internal_node(node))
radix_tree_free_nodes(node);
- idr->idr_rt.rnode = NULL;
+ idr->idr_rt.xa_head = NULL;
root_tag_set(&idr->idr_rt, IDR_FREE);
}
EXPORT_SYMBOL(idr_destroy);
diff --git a/lib/xarray.c b/lib/xarray.c
new file mode 100644
index 000000000000..382458f602cc
--- /dev/null
+++ b/lib/xarray.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * XArray implementation
+ * Copyright (c) 2017 Microsoft Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ */
+
+#include <linux/export.h>
+#include <linux/xarray.h>
+
+/*
+ * Coding conventions in this file:
+ *
+ * @xa is used to refer to the entire xarray.
+ * @xas is the 'xarray operation state'. It may be either a pointer to
+ * an xa_state, or an xa_state stored on the stack. This is an unfortunate
+ * ambiguity.
+ * @index is the index of the entry being operated on
+ * @tag is an xa_tag_t; a small number indicating one of the tag bits.
+ * @node refers to an xa_node; usually the primary one being operated on by
+ * this function.
+ * @offset is the index into the slots array inside an xa_node.
+ * @parent refers to the @xa_node closer to the head than @node.
+ * @entry refers to something stored in a slot in the xarray
+ */
+
+/**
+ * xa_init_flags() - Initialise an empty XArray with flags.
+ * @xa: XArray.
+ * @flags: XA_FLAG values.
+ *
+ * If you need to initialise an XArray with special flags (eg you need
+ * to take the lock from interrupt context), use this function instead
+ * of xa_init().
+ *
+ * Context: Any context.
+ */
+void xa_init_flags(struct xarray *xa, gfp_t flags)
+{
+ spin_lock_init(&xa->xa_lock);
+ xa->xa_flags = flags;
+ xa->xa_head = NULL;
+}
+EXPORT_SYMBOL(xa_init_flags);
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index b21b586b9854..34fed5c38da2 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -8,6 +8,7 @@
#define spinlock_t pthread_mutex_t
#define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
+#define spin_lock_init(x) pthread_mutex_init(x, NULL);
#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
diff --git a/tools/testing/radix-tree/.gitignore b/tools/testing/radix-tree/.gitignore
index d4706c0ffceb..8d4df7a72a8e 100644
--- a/tools/testing/radix-tree/.gitignore
+++ b/tools/testing/radix-tree/.gitignore
@@ -4,3 +4,4 @@ idr-test
main
multiorder
radix-tree.c
+xarray.c
diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
index fa7ee369b3c9..3868bc189199 100644
--- a/tools/testing/radix-tree/Makefile
+++ b/tools/testing/radix-tree/Makefile
@@ -4,7 +4,7 @@ CFLAGS += -I. -I../../include -g -O2 -Wall -D_LGPL_SOURCE -fsanitize=address
LDFLAGS += -fsanitize=address
LDLIBS+= -lpthread -lurcu
TARGETS = main idr-test multiorder
-CORE_OFILES := radix-tree.o idr.o linux.o test.o find_bit.o
+CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o
OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \
tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o
@@ -33,9 +33,13 @@ vpath %.c ../../lib
$(OFILES): Makefile *.h */*.h generated/map-shift.h \
../../include/linux/*.h \
../../include/asm/*.h \
+ ../../../include/linux/xarray.h \
../../../include/linux/radix-tree.h \
../../../include/linux/idr.h
+xarray.c: ../../../lib/xarray.c
+ sed -e 's/^static //' -e 's/__always_inline //' -e 's/inline //' < $< > $@
+
radix-tree.c: ../../../lib/radix-tree.c
sed -e 's/^static //' -e 's/__always_inline //' -e 's/inline //' < $< > $@
@@ -46,6 +50,6 @@ idr.c: ../../../lib/idr.c
mapshift:
@if ! grep -qws $(SHIFT) generated/map-shift.h; then \
- echo "#define RADIX_TREE_MAP_SHIFT $(SHIFT)" > \
+ echo "#define XA_CHUNK_SHIFT $(SHIFT)" > \
generated/map-shift.h; \
fi
diff --git a/tools/testing/radix-tree/linux/bug.h b/tools/testing/radix-tree/linux/bug.h
index 23b8ed52f8c8..03dc8a57eb99 100644
--- a/tools/testing/radix-tree/linux/bug.h
+++ b/tools/testing/radix-tree/linux/bug.h
@@ -1 +1,2 @@
+#include <stdio.h>
#include "asm/bug.h"
diff --git a/tools/testing/radix-tree/linux/kconfig.h b/tools/testing/radix-tree/linux/kconfig.h
new file mode 100644
index 000000000000..6c8675859913
--- /dev/null
+++ b/tools/testing/radix-tree/linux/kconfig.h
@@ -0,0 +1 @@
+#include "../../../../include/linux/kconfig.h"
diff --git a/tools/testing/radix-tree/linux/xarray.h b/tools/testing/radix-tree/linux/xarray.h
new file mode 100644
index 000000000000..df3812cda376
--- /dev/null
+++ b/tools/testing/radix-tree/linux/xarray.h
@@ -0,0 +1,2 @@
+#include "generated/map-shift.h"
+#include "../../../../include/linux/xarray.h"
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 684e76f79f4a..24293a2fd82d 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -191,13 +191,13 @@ static void multiorder_shrink(unsigned long index, int order)
assert(item_insert_order(&tree, 0, order) == 0);
- node = tree.rnode;
+ node = tree.xa_head;
assert(item_insert(&tree, index) == 0);
- assert(node != tree.rnode);
+ assert(node != tree.xa_head);
assert(item_delete(&tree, index) != 0);
- assert(node == tree.rnode);
+ assert(node == tree.xa_head);
for (i = 0; i < max; i++) {
struct item *item = item_lookup(&tree, i);
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 0d69c49177c6..6e1cc2040817 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -262,7 +262,7 @@ static int verify_node(struct radix_tree_node *slot, unsigned int tag,
void verify_tag_consistency(struct radix_tree_root *root, unsigned int tag)
{
- struct radix_tree_node *node = root->rnode;
+ struct radix_tree_node *node = root->xa_head;
if (!radix_tree_is_internal_node(node))
return;
verify_node(node, tag, !!root_tag_get(root, tag));
@@ -292,13 +292,13 @@ void item_kill_tree(struct radix_tree_root *root)
}
}
assert(radix_tree_gang_lookup(root, (void **)items, 0, 32) == 0);
- assert(root->rnode == NULL);
+ assert(root->xa_head == NULL);
}
void tree_verify_min_height(struct radix_tree_root *root, int maxindex)
{
unsigned shift;
- struct radix_tree_node *node = root->rnode;
+ struct radix_tree_node *node = root->xa_head;
if (!radix_tree_is_internal_node(node)) {
assert(maxindex == 0);
return;
--
2.16.1
From: Matthew Wilcox <[email protected]>
Like cmpxchg(), xa_cmpxchg() will only store to the index if the current
entry matches the old entry. It returns the current entry, which is
usually more useful than the errno returned by radix_tree_insert().
For the users who really only want the errno, the xa_insert() wrapper
provides a more convenient calling convention.
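As a rough sketch of the calling convention (the xarray 'registry' and the
'obj' pointer are hypothetical names, not part of this patch):

        void *obj;      /* hypothetical entry to store */
        void *old;
        int err;

        /* Install obj at index 42 only if that slot is currently empty. */
        old = xa_cmpxchg(&registry, 42, NULL, obj, GFP_KERNEL);
        if (xa_is_err(old))
                err = xa_err(old);      /* allocation failure, e.g. -ENOMEM */
        else if (old)
                err = -EEXIST;          /* 'old' is the entry already present */
        else
                err = 0;                /* obj is now stored at index 42 */

        /* xa_insert() is exactly this pattern wrapped up: */
        err = xa_insert(&registry, 42, obj, GFP_KERNEL);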
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 60 ++++++++++++++++++++++++++++
lib/xarray.c | 71 ++++++++++++++++++++++++++++++++++
tools/testing/radix-tree/xarray-test.c | 10 +++++
3 files changed, 141 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 38e290df2ff0..e95ebe2488f9 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -218,6 +218,8 @@ struct xarray {
void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *xa_cmpxchg(struct xarray *, unsigned long index,
+ void *old, void *entry, gfp_t);
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -277,6 +279,34 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
return xa->xa_flags & XA_FLAGS_TAG(tag);
}
+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Process context. Takes and releases the xa_lock.
+ * May sleep if the @gfp flags permit.
+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+ if (!curr)
+ return 0;
+ if (xa_is_err(curr))
+ return xa_err(curr);
+ return -EEXIST;
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -296,9 +326,39 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
*/
void *__xa_erase(struct xarray *, unsigned long index);
void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
+ void *entry, gfp_t);
void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
+/**
+ * __xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use __xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry. May
+ * release and reacquire xa_lock if the @gfp flags permit.
+ * Return: 0 if the store succeeded. -EEXIST if another entry was present.
+ * -ENOMEM if memory could not be allocated.
+ */
+static inline int __xa_insert(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr = __xa_cmpxchg(xa, index, NULL, entry, gfp);
+ if (!curr)
+ return 0;
+ if (xa_is_err(curr))
+ return xa_err(curr);
+ return -EEXIST;
+}
+
/* Everything below here is the Advanced API. Proceed with caution. */
/*
diff --git a/lib/xarray.c b/lib/xarray.c
index 9e50804f168c..a231699d894a 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -937,6 +937,77 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
}
EXPORT_SYMBOL(__xa_store);
+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Context: Process context. Takes and releases the xa_lock. May sleep
+ * if the @gfp flags permit.
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+ void *old, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ xas_lock(&xas);
+ curr = xas_load(&xas);
+ if (curr == old)
+ xas_store(&xas, entry);
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(xa_cmpxchg);
+
+/**
+ * __xa_cmpxchg() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * You must already be holding the xa_lock when calling this function.
+ * It will drop the lock if needed to allocate memory, and then reacquire
+ * it afterwards.
+ *
+ * Context: Any context. Expects xa_lock to be held on entry. May
+ * release and reacquire xa_lock if @gfp flags permit.
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
+ void *old, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ curr = xas_load(&xas);
+ if (curr == old)
+ xas_store(&xas, entry);
+ } while (__xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(__xa_cmpxchg);
+
/**
* __xa_set_tag() - Set this tag on this entry while locked.
* @xa: XArray.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 5defd0b9f85c..d6a969d999d9 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -84,6 +84,15 @@ void check_xa_shrink(struct xarray *xa)
assert(xa_load(xa, 0) == xa_mk_value(0));
}
+void check_cmpxchg(struct xarray *xa)
+{
+ assert(xa_empty(xa));
+ assert(!xa_store(xa, 12345678, xa_mk_value(12345678), GFP_KERNEL));
+ assert(!xa_cmpxchg(xa, 5, xa_mk_value(5), NULL, GFP_KERNEL));
+ assert(xa_erase(xa, 12345678) == xa_mk_value(12345678));
+ assert(xa_empty(xa));
+}
+
void check_multi_store(struct xarray *xa)
{
unsigned long i, j, k;
@@ -149,6 +158,7 @@ void xarray_checks(void)
check_xa_shrink(&array);
item_kill_tree(&array);
+ check_cmpxchg(&array);
check_multi_store(&array);
item_kill_tree(&array);
}
--
2.16.1
From: Matthew Wilcox <[email protected]>
The new xa_for_each() iterator allows the user to efficiently walk a range
of the array, executing the loop body once for each entry in that range that
matches the filter. This commit also includes xa_find() and xa_find_after(),
which are helper functions for xa_for_each() but may also be useful in
their own right.
In the xas family of functions, we also have xas_for_each(), xas_find(),
xas_next_entry(), xas_for_each_tag(), xas_find_tag(), xas_next_tag()
and xas_pause().
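A minimal usage sketch (the xarray 'xa' and process_entry() below are
placeholders, not part of this patch):

        void *entry;
        unsigned long index = 0;        /* first index to consider */

        /* Run the body once for every present entry with index <= 1023;
         * pass a tag such as XA_TAG_0 as the filter to restrict the walk
         * to tagged entries instead.
         */
        xa_for_each(&xa, entry, index, 1023, XA_PRESENT)
                process_entry(index, entry);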
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 173 +++++++++++++++++++++
lib/xarray.c | 274 +++++++++++++++++++++++++++++++++
tools/testing/radix-tree/test.c | 13 ++
tools/testing/radix-tree/test.h | 1 +
tools/testing/radix-tree/xarray-test.c | 122 +++++++++++++++
5 files changed, 583 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index e95ebe2488f9..cf7966bfdd3e 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -223,6 +223,10 @@ void *xa_cmpxchg(struct xarray *, unsigned long index,
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
+void *xa_find(struct xarray *xa, unsigned long *index,
+ unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
+void *xa_find_after(struct xarray *xa, unsigned long *index,
+ unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
/**
* xa_init() - Initialise an empty XArray.
@@ -279,6 +283,35 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
return xa->xa_flags & XA_FLAGS_TAG(tag);
}
+/**
+ * xa_for_each() - Iterate over a portion of an XArray.
+ * @xa: XArray.
+ * @entry: Entry retrieved from array.
+ * @index: Index of @entry.
+ * @max: Maximum index to retrieve from array.
+ * @filter: Selection criterion.
+ *
+ * Initialise @index to the minimum index you want to retrieve from
+ * the array. During the iteration, @entry will have the value of the
+ * entry stored in @xa at @index. The iteration will skip all entries in
+ * the array which do not match @filter. You may modify @index during the
+ * iteration if you want to skip or reprocess indices. It is safe to modify
+ * the array during the iteration. At the end of the iteration, @entry will
+ * be set to NULL and @index will have a value less than or equal to max.
+ *
+ * xa_for_each() is O(n.log(n)) while xas_for_each() is O(n). You have
+ * to handle your own locking with xas_for_each(), and if you have to unlock
+ * after each iteration, it will also end up being O(n.log(n)). xa_for_each()
+ * will spin if it hits a retry entry; if you intend to see retry entries,
+ * you should use the xas_for_each() iterator instead. The xas_for_each()
+ * iterator will expand into more inline code than xa_for_each().
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ */
+#define xa_for_each(xa, entry, index, max, filter) \
+ for (entry = xa_find(xa, &index, max, filter); entry; \
+ entry = xa_find_after(xa, &index, max, filter))
+
/**
* xa_insert() - Store this entry in the XArray unless another entry is
* already present.
@@ -641,6 +674,12 @@ static inline bool xas_valid(const struct xa_state *xas)
return !xas_invalid(xas);
}
+/* True if the pointer is something other than a node */
+static inline bool xas_not_node(struct xa_node *node)
+{
+ return ((unsigned long)node & 3) || !node;
+}
+
/* True if the node represents head-of-tree, RESTART or BOUNDS */
static inline bool xas_top(struct xa_node *node)
{
@@ -685,13 +724,16 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
void *xas_load(struct xa_state *);
void *xas_store(struct xa_state *, void *entry);
void *xas_create(struct xa_state *);
+void *xas_find(struct xa_state *, unsigned long max);
bool xas_get_tag(const struct xa_state *, xa_tag_t);
void xas_set_tag(const struct xa_state *, xa_tag_t);
void xas_clear_tag(const struct xa_state *, xa_tag_t);
+void *xas_find_tag(struct xa_state *, unsigned long max, xa_tag_t);
void xas_init_tags(const struct xa_state *);
bool xas_nomem(struct xa_state *, gfp_t);
+void xas_pause(struct xa_state *);
/**
* xas_reload() - Refetch an entry from the xarray.
@@ -764,6 +806,137 @@ static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
xas->xa_update = update;
}
+/* Skip over any of these entries when iterating */
+static inline bool xa_iter_skip(const void *entry)
+{
+ return unlikely(!entry ||
+ (xa_is_internal(entry) && entry < XA_RETRY_ENTRY));
+}
+
+/**
+ * xas_next_entry() - Advance iterator to next present entry.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ *
+ * xas_next_entry() is an inline function to optimise xarray traversal for
+ * speed. It is equivalent to calling xas_find(), and will call xas_find()
+ * for all the hard cases.
+ *
+ * Return: The next present entry after the one currently referred to by @xas.
+ */
+static inline void *xas_next_entry(struct xa_state *xas, unsigned long max)
+{
+ struct xa_node *node = xas->xa_node;
+ void *entry;
+
+ if (unlikely(xas_not_node(node) || node->shift))
+ return xas_find(xas, max);
+
+ do {
+ if (unlikely(xas->xa_index >= max))
+ return xas_find(xas, max);
+ if (unlikely(xas->xa_offset == XA_CHUNK_MASK))
+ return xas_find(xas, max);
+ xas->xa_index++;
+ xas->xa_offset++;
+ entry = xa_entry(xas->xa, node, xas->xa_offset);
+ } while (xa_iter_skip(entry));
+
+ return entry;
+}
+
+/* Private */
+static inline unsigned int xas_find_chunk(struct xa_state *xas, bool advance,
+ xa_tag_t tag)
+{
+ unsigned long *addr = xas->xa_node->tags[(__force unsigned)tag];
+ unsigned int offset = xas->xa_offset;
+
+ if (advance)
+ offset++;
+ if (XA_CHUNK_SIZE == BITS_PER_LONG) {
+ unsigned long data = *addr & (~0UL << offset);
+ if (data)
+ return __ffs(data);
+ return XA_CHUNK_SIZE;
+ }
+
+ return find_next_bit(addr, XA_CHUNK_SIZE, offset);
+}
+
+/**
+ * xas_next_tag() - Advance iterator to next tagged entry.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ * @tag: Tag to search for.
+ *
+ * xas_next_tag() is an inline function to optimise xarray traversal for
+ * speed. It is equivalent to calling xas_find_tag(), and will call
+ * xas_find_tag() for all the hard cases.
+ *
+ * Return: The next tagged entry after the one currently referred to by @xas.
+ */
+static inline void *xas_next_tag(struct xa_state *xas, unsigned long max,
+ xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset;
+
+ if (unlikely(xas_not_node(node) || node->shift))
+ return xas_find_tag(xas, max, tag);
+ offset = xas_find_chunk(xas, true, tag);
+ xas->xa_offset = offset;
+ xas->xa_index = (xas->xa_index & ~XA_CHUNK_MASK) + offset;
+ if (xas->xa_index > max)
+ return NULL;
+ if (offset == XA_CHUNK_SIZE)
+ return xas_find_tag(xas, max, tag);
+ return xa_entry(xas->xa, node, offset);
+}
+
+/*
+ * If iterating while holding a lock, drop the lock and reschedule
+ * every %XA_CHECK_SCHED loops.
+ */
+enum {
+ XA_CHECK_SCHED = 4096,
+};
+
+/**
+ * xas_for_each() - Iterate over a range of an XArray
+ * @xas: XArray operation state.
+ * @entry: Entry retrieved from array.
+ * @max: Maximum index to retrieve from array.
+ *
+ * The loop body will be executed for each entry present in the xarray
+ * between the current xas position and @max. @entry will be set to
+ * the entry retrieved from the xarray. It is safe to delete entries
+ * from the array in the loop body. You should hold either the RCU lock
+ * or the xa_lock while iterating. If you need to drop the lock, call
+ * xas_pause() first.
+ */
+#define xas_for_each(xas, entry, max) \
+ for (entry = xas_find(xas, max); entry; \
+ entry = xas_next_entry(xas, max))
+
+/**
+ * xas_for_each_tag() - Iterate over a range of an XArray
+ * @xas: XArray operation state.
+ * @entry: Entry retrieved from array.
+ * @max: Maximum index to retrieve from array.
+ * @tag: Tag to search for.
+ *
+ * The loop body will be executed for each tagged entry in the xarray
+ * between the current xas position and @max. @entry will be set to
+ * the entry retrieved from the xarray. It is safe to delete entries
+ * from the array in the loop body. You should hold either the RCU lock
+ * or the xa_lock while iterating. If you need to drop the lock, call
+ * xas_pause() first.
+ */
+#define xas_for_each_tag(xas, entry, max, tag) \
+ for (entry = xas_find_tag(xas, max, tag); entry; \
+ entry = xas_next_tag(xas, max, tag))
+
/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
void xas_destroy(struct xa_state *);
diff --git a/lib/xarray.c b/lib/xarray.c
index a231699d894a..267510e98a57 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -91,6 +91,11 @@ static unsigned int get_offset(unsigned long index, struct xa_node *node)
return (index >> node->shift) & XA_CHUNK_MASK;
}
+static void xas_set_offset(struct xa_state *xas)
+{
+ xas->xa_offset = get_offset(xas->xa_index, xas->xa_node);
+}
+
/* move the index either forwards (find) or backwards (sibling slot) */
static void xas_move_index(struct xa_state *xas, unsigned long offset)
{
@@ -99,6 +104,12 @@ static void xas_move_index(struct xa_state *xas, unsigned long offset)
xas->xa_index += offset << shift;
}
+static void xas_advance(struct xa_state *xas)
+{
+ xas->xa_offset++;
+ xas_move_index(xas, xas->xa_offset);
+}
+
static void *set_bounds(struct xa_state *xas)
{
xas->xa_node = XAS_BOUNDS;
@@ -791,6 +802,191 @@ void xas_init_tags(const struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_init_tags);
+/**
+ * xas_pause() - Pause a walk to drop a lock.
+ * @xas: XArray operation state.
+ *
+ * Some users need to pause a walk and drop the lock they're holding in
+ * order to yield to a higher priority thread or carry out an operation
+ * on an entry. Those users should call this function before they drop
+ * the lock. It resets the @xas to be suitable for the next iteration
+ * of the loop after the user has reacquired the lock. If most entries
+ * found during a walk require you to call xas_pause(), the xa_for_each()
+ * iterator may be more appropriate.
+ *
+ * Note that xas_pause() only works for forward iteration. If a user needs
+ * to pause a reverse iteration, we will need a xas_pause_rev().
+ */
+void xas_pause(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (xas_invalid(xas))
+ return;
+
+ if (node) {
+ unsigned int offset = xas->xa_offset;
+ while (++offset < XA_CHUNK_SIZE) {
+ if (!xa_is_sibling(xa_entry(xas->xa, node, offset)))
+ break;
+ }
+ xas->xa_index += (offset - xas->xa_offset) << node->shift;
+ } else {
+ xas->xa_index++;
+ }
+ xas->xa_node = XAS_RESTART;
+}
+EXPORT_SYMBOL_GPL(xas_pause);
+
+/**
+ * xas_find() - Find the next present entry in the XArray.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ *
+ * If the xas has not yet been walked to an entry, return the entry
+ * which has an index >= xas.xa_index. If it has been walked, the entry
+ * currently being pointed at has been processed, and so we move to the
+ * next entry.
+ *
+ * If no entry is found and the array is smaller than @max, the iterator
+ * is set to the smallest index not yet in the array. This allows @xas
+ * to be immediately passed to xas_create().
+ *
+ * Return: The entry, if found, otherwise NULL.
+ */
+void *xas_find(struct xa_state *xas, unsigned long max)
+{
+ void *entry;
+
+ if (xas_error(xas))
+ return NULL;
+
+ if (!xas->xa_node) {
+ xas->xa_index = 1;
+ return set_bounds(xas);
+ } else if (xas_top(xas->xa_node)) {
+ entry = xas_load(xas);
+ if (entry || xas_not_node(xas->xa_node))
+ return entry;
+ }
+
+ xas_advance(xas);
+
+ while (xas->xa_node && (xas->xa_index <= max)) {
+ if (unlikely(xas->xa_offset == XA_CHUNK_SIZE)) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ continue;
+ }
+
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (xa_is_node(entry)) {
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = 0;
+ continue;
+ }
+ if (!xa_iter_skip(entry))
+ return entry;
+
+ xas_advance(xas);
+ }
+
+ if (!xas->xa_node)
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(xas_find);
+
+/**
+ * xas_find_tag() - Find the next tagged entry in the XArray.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ * @tag: Tag number to search for.
+ *
+ * If the xas has not yet been walked to an entry, return the tagged entry
+ * which has an index >= xas.xa_index. If it has been walked, the entry
+ * currently being pointed at has been processed, and so we move to the
+ * next tagged entry.
+ *
+ * If no tagged entry is found and the array is smaller than @max, @xas is
+ * set to the bounds state and xas->xa_index is set to the smallest index
+ * not yet in the array. This allows @xas to be immediately passed to
+ * xas_create().
+ *
+ * Return: The entry, if found, otherwise %NULL.
+ */
+void *xas_find_tag(struct xa_state *xas, unsigned long max, xa_tag_t tag)
+{
+ bool advance = true;
+ unsigned int offset;
+ void *entry;
+
+ if (xas_error(xas))
+ return NULL;
+
+ if (!xas->xa_node) {
+ xas->xa_index = 1;
+ goto out;
+ } else if (xas_top(xas->xa_node)) {
+ advance = false;
+ entry = xa_head(xas->xa);
+ if (xas->xa_index > max_index(entry))
+ goto out;
+ if (!xa_is_node(entry)) {
+ if (xa_tagged(xas->xa, tag)) {
+ xas->xa_node = NULL;
+ return entry;
+ }
+ xas->xa_index = 1;
+ goto out;
+ }
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = xas->xa_index >> xas->xa_node->shift;
+ }
+
+ while (xas->xa_index <= max) {
+ if (unlikely(xas->xa_offset == XA_CHUNK_SIZE)) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ break;
+ advance = false;
+ continue;
+ }
+
+ if (!advance) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (xa_is_sibling(entry)) {
+ xas->xa_offset = xa_to_sibling(entry);
+ xas_move_index(xas, xas->xa_offset);
+ }
+ }
+
+ offset = xas_find_chunk(xas, advance, tag);
+ if (offset > xas->xa_offset) {
+ advance = false;
+ xas_move_index(xas, offset);
+ xas->xa_offset = offset;
+ if (offset == XA_CHUNK_SIZE)
+ continue;
+ if (xas->xa_index > max)
+ break;
+ }
+
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+
+ out:
+ if (!xas->xa_node)
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(xas_find_tag);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -1114,6 +1310,84 @@ void xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
}
EXPORT_SYMBOL(xa_clear_tag);
+/**
+ * xa_find() - Search the XArray for an entry.
+ * @xa: XArray.
+ * @indexp: Pointer to an index.
+ * @max: Maximum index to search to.
+ * @filter: Selection criterion.
+ *
+ * Finds the entry in @xa which matches the @filter, and has the lowest
+ * index that is at least @indexp and no more than @max.
+ * If an entry is found, @indexp is updated to be the index of the entry.
+ * This function is protected by the RCU read lock, so it may not find
+ * entries which are being simultaneously added. It will not return an
+ * %XA_RETRY_ENTRY; if you need to see retry entries, use xas_find().
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ * Return: The entry, if found, otherwise NULL.
+ */
+void *xa_find(struct xarray *xa, unsigned long *indexp,
+ unsigned long max, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, *indexp);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ entry = xas_find_tag(&xas, max, filter);
+ else
+ entry = xas_find(&xas, max);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ if (entry)
+ *indexp = xas.xa_index;
+ return entry;
+}
+EXPORT_SYMBOL(xa_find);
+
+/**
+ * xa_find_after() - Search the XArray for a present entry.
+ * @xa: XArray.
+ * @indexp: Pointer to an index.
+ * @max: Maximum index to search to.
+ * @filter: Selection criterion.
+ *
+ * Finds the entry in @xa which matches the @filter and has the lowest
+ * index that is above @indexp and no more than @max.
+ * If an entry is found, @indexp is updated to be the index of the entry.
+ * This function is protected by the RCU read lock, so it may miss entries
+ * which are being simultaneously added. It will not return an
+ * %XA_RETRY_ENTRY; if you need to see retry entries, use xas_find().
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ * Return: The pointer, if found, otherwise NULL.
+ */
+void *xa_find_after(struct xarray *xa, unsigned long *indexp,
+ unsigned long max, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, *indexp + 1);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ entry = xas_find_tag(&xas, max, filter);
+ else
+ entry = xas_find(&xas, max);
+ if (*indexp >= xas.xa_index)
+ entry = xas_next_entry(&xas, max);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ if (entry)
+ *indexp = xas.xa_index;
+ return entry;
+}
+EXPORT_SYMBOL(xa_find_after);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index f151588d04a0..e9b4a4ed9bf5 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -244,6 +244,19 @@ unsigned long find_item(struct radix_tree_root *root, void *item)
return found;
}
+static LIST_HEAD(item_nodes);
+
+void item_update_node(struct xa_node *node)
+{
+ if (node->count) {
+ if (list_empty(&node->private_list))
+ list_add(&node->private_list, &item_nodes);
+ } else {
+ if (!list_empty(&node->private_list))
+ list_del_init(&node->private_list);
+ }
+}
+
static int verify_node(struct radix_tree_node *slot, unsigned int tag,
int tagged)
{
diff --git a/tools/testing/radix-tree/test.h b/tools/testing/radix-tree/test.h
index ffd162645c11..f97cacd1422d 100644
--- a/tools/testing/radix-tree/test.h
+++ b/tools/testing/radix-tree/test.h
@@ -30,6 +30,7 @@ void item_gang_check_present(struct radix_tree_root *root,
void item_full_scan(struct radix_tree_root *root, unsigned long start,
unsigned long nr, int chunk);
void item_kill_tree(struct radix_tree_root *root);
+void item_update_node(struct xa_node *node);
int tag_tagged_items(struct radix_tree_root *, pthread_mutex_t *,
unsigned long start, unsigned long end, unsigned batch,
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index d6a969d999d9..26b25be81656 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -49,6 +49,29 @@ void check_xa_tag(struct xarray *xa)
assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
}
+void check_xas_retry(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+
+ assert(xas_find(&xas, ULONG_MAX) == xa_mk_value(0));
+ xa_erase(xa, 1);
+ assert(xa_is_retry(xas_reload(&xas)));
+ assert(!xas_retry(&xas, NULL));
+ assert(!xas_retry(&xas, xa_mk_value(0)));
+ assert(xas_retry(&xas, XA_RETRY_ENTRY));
+ assert(xas.xa_node == XAS_RESTART);
+ assert(xas_next_entry(&xas, ULONG_MAX) == xa_mk_value(0));
+ assert(xas.xa_node == NULL);
+
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ assert(xa_is_internal(xas_reload(&xas)));
+ xas.xa_node = XAS_RESTART;
+ assert(xas_next_entry(&xas, ULONG_MAX) == xa_mk_value(0));
+}
+
void check_xa_load(struct xarray *xa)
{
unsigned long i, j;
@@ -142,6 +165,98 @@ void check_multi_store(struct xarray *xa)
}
}
+void check_multi_find(struct xarray *xa)
+{
+ unsigned long index;
+ xa_store_order(xa, 12, 2, xa_mk_value(12), GFP_KERNEL);
+ xa_store(xa, 16, xa_mk_value(16), GFP_KERNEL);
+
+ index = 0;
+ assert(xa_find(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(12));
+ assert(index == 12);
+ index = 13;
+ assert(xa_find(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(12));
+ assert(index >= 12 && index < 16);
+ assert(xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(16));
+ assert(index == 16);
+ xa_erase(xa, 12);
+ xa_erase(xa, 16);
+ assert(xa_empty(xa));
+}
+
+void check_find(struct xarray *xa)
+{
+ unsigned long i, j, k;
+
+ assert(xa_empty(xa));
+
+ for (i = 0; i < 100; i++) {
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ xa_set_tag(xa, i, XA_TAG_0);
+ for (j = 0; j < i; j++) {
+ xa_store(xa, j, xa_mk_value(j), GFP_KERNEL);
+ xa_set_tag(xa, j, XA_TAG_0);
+ for (k = 0; k < 100; k++) {
+ unsigned long index = k;
+ void *entry = xa_find(xa, &index, ULONG_MAX,
+ XA_PRESENT);
+ if (k <= j)
+ assert(index == j);
+ else if (k <= i)
+ assert(index == i);
+ else
+ assert(entry == NULL);
+
+ index = k;
+ entry = xa_find(xa, &index, ULONG_MAX,
+ XA_TAG_0);
+ if (k <= j)
+ assert(index == j);
+ else if (k <= i)
+ assert(index == i);
+ else
+ assert(entry == NULL);
+ }
+ xa_erase(xa, j);
+ }
+ xa_erase(xa, i);
+ }
+ assert(xa_empty(xa));
+ check_multi_find(xa);
+}
+
+void check_xas_delete(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *entry;
+ unsigned long i, j;
+
+ xas_set_update(&xas, item_update_node);
+ for (i = 0; i < 200; i++) {
+ for (j = i; j < 2 * i + 17; j++) {
+ xas_set(&xas, j);
+ do {
+ xas_store(&xas, xa_mk_value(j));
+ } while (xas_nomem(&xas, GFP_KERNEL));
+ }
+
+ xas_set(&xas, ULONG_MAX);
+ do {
+ xas_store(&xas, xa_mk_value(0));
+ } while (xas_nomem(&xas, GFP_KERNEL));
+ xas_store(&xas, NULL);
+
+ xas_set(&xas, 0);
+ j = i;
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ assert(entry == xa_mk_value(j));
+ xas_store(&xas, NULL);
+ j++;
+ }
+ assert(xa_empty(xa));
+ }
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
@@ -152,6 +267,9 @@ void xarray_checks(void)
check_xa_tag(&array);
item_kill_tree(&array);
+ check_xas_retry(&array);
+ item_kill_tree(&array);
+
check_xa_load(&array);
item_kill_tree(&array);
@@ -161,6 +279,10 @@ void xarray_checks(void)
check_cmpxchg(&array);
check_multi_store(&array);
item_kill_tree(&array);
+
+ check_find(&array);
+ check_xas_delete(&array);
+ item_kill_tree(&array);
}
int __weak main(void)
--
2.16.1
From: Matthew Wilcox <[email protected]>
Adding a spinlock to the radix_tree_root results in no change in structure
size on 64-bit x86, as the lock fits in the padding between the gfp_t and
the void *.
Initialising the spinlock requires a name for the benefit of lockdep,
so RADIX_TREE_INIT() now needs to know the name of the radix tree it's
initialising, and so do IDR_INIT() and IDA_INIT().
Also add the xa_lock() and xa_unlock() family of wrappers to make it
easier to use the lock. If we could rely on -fplan9-extensions in
the compiler, we could avoid all of this syntactic sugar, but that
wasn't added until gcc 4.6.
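For example (a hypothetical statically-defined tree; the work done under
the lock is elided):

        /* RADIX_TREE_INIT() now sees the name 'my_tree' for lockdep */
        static RADIX_TREE(my_tree, GFP_KERNEL);

        xa_lock(&my_tree);
        /* ... modify the tree while holding its embedded lock ... */
        xa_unlock(&my_tree);

        /* interrupt-safe variants are also provided */
        xa_lock_irq(&my_tree);
        xa_unlock_irq(&my_tree);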
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/gc.c | 2 +-
include/linux/idr.h | 19 ++++++++++---------
include/linux/radix-tree.h | 7 +++++--
include/linux/xarray.h | 24 ++++++++++++++++++++++++
kernel/pid.c | 2 +-
tools/include/linux/spinlock.h | 1 +
6 files changed, 42 insertions(+), 13 deletions(-)
create mode 100644 include/linux/xarray.h
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index aa720cc44509..7aa15134180e 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -1006,7 +1006,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
unsigned int init_segno = segno;
struct gc_inode_list gc_list = {
.ilist = LIST_HEAD_INIT(gc_list.ilist),
- .iroot = RADIX_TREE_INIT(GFP_NOFS),
+ .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
};
trace_f2fs_gc_begin(sbi->sb, sync, background,
diff --git a/include/linux/idr.h b/include/linux/idr.h
index 913c335054f0..e856f4e0ab35 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -32,27 +32,28 @@ struct idr {
#define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
(1 << (ROOT_TAG_SHIFT + IDR_FREE)))
-#define IDR_INIT_BASE(base) { \
- .idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER), \
+#define IDR_INIT_BASE(name, base) { \
+ .idr_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER), \
.idr_base = (base), \
.idr_next = 0, \
}
/**
* IDR_INIT() - Initialise an IDR.
+ * @name: Name of IDR.
*
* A freshly-initialised IDR contains no IDs.
*/
-#define IDR_INIT IDR_INIT_BASE(0)
+#define IDR_INIT(name) IDR_INIT_BASE(name, 0)
/**
- * DEFINE_IDR() - Define a statically-allocated IDR
- * @name: Name of IDR
+ * DEFINE_IDR() - Define a statically-allocated IDR.
+ * @name: Name of IDR.
*
* An IDR defined using this macro is ready for use with no additional
* initialisation required. It contains no IDs.
*/
-#define DEFINE_IDR(name) struct idr name = IDR_INIT
+#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)
/**
* idr_get_cursor - Return the current position of the cyclic allocator
@@ -219,10 +220,10 @@ struct ida {
struct radix_tree_root ida_rt;
};
-#define IDA_INIT { \
- .ida_rt = RADIX_TREE_INIT(IDR_RT_MARKER | GFP_NOWAIT), \
+#define IDA_INIT(name) { \
+ .ida_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER | GFP_NOWAIT), \
}
-#define DEFINE_IDA(name) struct ida name = IDA_INIT
+#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)
int ida_pre_get(struct ida *ida, gfp_t gfp_mask);
int ida_get_new_above(struct ida *ida, int starting_id, int *p_id);
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 6c4e2e716dac..34149e8b5f73 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -110,20 +110,23 @@ struct radix_tree_node {
#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
struct radix_tree_root {
+ spinlock_t xa_lock;
gfp_t gfp_mask;
struct radix_tree_node __rcu *rnode;
};
-#define RADIX_TREE_INIT(mask) { \
+#define RADIX_TREE_INIT(name, mask) { \
+ .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
.gfp_mask = (mask), \
.rnode = NULL, \
}
#define RADIX_TREE(name, mask) \
- struct radix_tree_root name = RADIX_TREE_INIT(mask)
+ struct radix_tree_root name = RADIX_TREE_INIT(name, mask)
#define INIT_RADIX_TREE(root, mask) \
do { \
+ spin_lock_init(&(root)->xa_lock); \
(root)->gfp_mask = (mask); \
(root)->rnode = NULL; \
} while (0)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
new file mode 100644
index 000000000000..2dfc8006fe64
--- /dev/null
+++ b/include/linux/xarray.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#ifndef _LINUX_XARRAY_H
+#define _LINUX_XARRAY_H
+/*
+ * eXtensible Arrays
+ * Copyright (c) 2017 Microsoft Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ */
+
+#include <linux/spinlock.h>
+
+#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
+#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
+#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
+#define xa_lock_bh(xa) spin_lock_bh(&(xa)->xa_lock)
+#define xa_unlock_bh(xa) spin_unlock_bh(&(xa)->xa_lock)
+#define xa_lock_irq(xa) spin_lock_irq(&(xa)->xa_lock)
+#define xa_unlock_irq(xa) spin_unlock_irq(&(xa)->xa_lock)
+#define xa_lock_irqsave(xa, flags) \
+ spin_lock_irqsave(&(xa)->xa_lock, flags)
+#define xa_unlock_irqrestore(xa, flags) \
+ spin_unlock_irqrestore(&(xa)->xa_lock, flags)
+
+#endif /* _LINUX_XARRAY_H */
diff --git a/kernel/pid.c b/kernel/pid.c
index ed6c343fe50d..157fe4b19971 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -70,7 +70,7 @@ int pid_max_max = PID_MAX_LIMIT;
*/
struct pid_namespace init_pid_ns = {
.kref = KREF_INIT(2),
- .idr = IDR_INIT,
+ .idr = IDR_INIT(init_pid_ns.idr),
.pid_allocated = PIDNS_ADDING,
.level = 0,
.child_reaper = &init_task,
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 4ed569fcb139..b21b586b9854 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -7,6 +7,7 @@
#define spinlock_t pthread_mutex_t
#define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
+#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
--
2.16.1
From: Matthew Wilcox <[email protected]>
The new xa_extract() function combines the functionality of radix_tree_gang_lookup() and
radix_tree_gang_lookup_tagged(). It extracts entries matching the
specified filter into a normal array.
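A rough sketch of how a caller might use it (the xarray 'xa' and the batch
size of 16 are illustrative):

        void *batch[16];
        unsigned int count;

        /* Copy up to 16 present entries with indices in [0, ULONG_MAX]. */
        count = xa_extract(&xa, batch, 0, ULONG_MAX, 16, XA_PRESENT);

        /* Or restrict the copy to entries carrying a particular tag. */
        count = xa_extract(&xa, batch, 0, ULONG_MAX, 16, XA_TAG_0);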
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 2 ++
lib/xarray.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 82 insertions(+)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index cf7966bfdd3e..85dd909586f0 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -227,6 +227,8 @@ void *xa_find(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
void *xa_find_after(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
+unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
+ unsigned long max, unsigned int n, xa_tag_t);
/**
* xa_init() - Initialise an empty XArray.
diff --git a/lib/xarray.c b/lib/xarray.c
index 267510e98a57..124bbfec66ae 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1388,6 +1388,86 @@ void *xa_find_after(struct xarray *xa, unsigned long *indexp,
}
EXPORT_SYMBOL(xa_find_after);
+static unsigned int xas_extract_present(struct xa_state *xas, void **dst,
+ unsigned long max, unsigned int n)
+{
+ void *entry;
+ unsigned int i = 0;
+
+ rcu_read_lock();
+ xas_for_each(xas, entry, max) {
+ if (xas_retry(xas, entry))
+ continue;
+ dst[i++] = entry;
+ if (i == n)
+ break;
+ }
+ rcu_read_unlock();
+
+ return i;
+}
+
+static unsigned int xas_extract_tag(struct xa_state *xas, void **dst,
+ unsigned long max, unsigned int n, xa_tag_t tag)
+{
+ void *entry;
+ unsigned int i = 0;
+
+ rcu_read_lock();
+ xas_for_each_tag(xas, entry, max, tag) {
+ if (xas_retry(xas, entry))
+ continue;
+ dst[i++] = entry;
+ if (i == n)
+ break;
+ }
+ rcu_read_unlock();
+
+ return i;
+}
+
+/**
+ * xa_extract() - Copy selected entries from the XArray into a normal array.
+ * @xa: The source XArray to copy from.
+ * @dst: The buffer to copy entries into.
+ * @start: The first index in the XArray eligible to be selected.
+ * @max: The last index in the XArray eligible to be selected.
+ * @n: The maximum number of entries to copy.
+ * @filter: Selection criterion.
+ *
+ * Copies up to @n entries that match @filter from the XArray. The
+ * copied entries will have indices between @start and @max, inclusive.
+ *
+ * The @filter may be an XArray tag value, in which case entries which are
+ * tagged with that tag will be copied. It may also be %XA_PRESENT, in
+ * which case non-NULL entries will be copied.
+ *
+ * The entries returned may not represent a snapshot of the XArray at a
+ * moment in time. For example, if another thread stores to index 5, then
+ * index 10, calling xa_extract() may return the old contents of index 5
+ * and the new contents of index 10. Indices not modified while this
+ * function is running will not be skipped.
+ *
+ * If you need stronger guarantees, holding the xa_lock across calls to this
+ * function will prevent concurrent modification.
+ *
+ * Context: Any context. Takes and releases the RCU lock.
+ * Return: The number of entries copied.
+ */
+unsigned int xa_extract(struct xarray *xa, void **dst, unsigned long start,
+ unsigned long max, unsigned int n, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, start);
+
+ if (!n)
+ return 0;
+
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ return xas_extract_tag(&xas, dst, max, n, filter);
+ return xas_extract_present(&xas, dst, max, n);
+}
+EXPORT_SYMBOL(xa_extract);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
--
2.16.1
From: Matthew Wilcox <[email protected]>
Instead of storing a pointer to the slot containing the canonical entry,
store the offset of the slot. This produces slightly more efficient code
(about 300 bytes smaller) and simplifies the implementation.
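To illustrate the new encoding (the offsets are arbitrary): an order-2 entry
whose canonical slot is 4 now stores xa_mk_sibling(4) in slots 5-7, so a
reader recovers the canonical slot with xa_to_sibling() instead of chasing a
pointer back into the same node:

        void *sib = xa_mk_sibling(4);                   /* internal value ((4 << 2) | 2) */
        unsigned long canonical = xa_to_sibling(sib);   /* == 4 */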
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 93 ++++++++++++++++++++++++++++++++++++++++++++++++++
lib/radix-tree.c | 66 +++++++++++------------------------
2 files changed, 112 insertions(+), 47 deletions(-)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index f61806fd8002..283beb5aac58 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -22,6 +22,12 @@
* x1: Value entry
*
* Attempting to store internal entries in the XArray is a bug.
+ *
+ * Most internal entries are pointers to the next node in the tree.
+ * The following internal entries have a special meaning:
+ *
+ * 0-62: Sibling entries
+ * 256: Retry entry
*/
#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
@@ -63,6 +69,42 @@ static inline bool xa_is_value(const void *entry)
return (unsigned long)entry & 1;
}
+/*
+ * xa_mk_internal() - Create an internal entry.
+ * @v: Value to turn into an internal entry.
+ *
+ * Context: Any context.
+ * Return: An XArray internal entry corresponding to this value.
+ */
+static inline void *xa_mk_internal(unsigned long v)
+{
+ return (void *)((v << 2) | 2);
+}
+
+/*
+ * xa_to_internal() - Extract the value from an internal entry.
+ * @entry: XArray entry.
+ *
+ * Context: Any context.
+ * Return: The value which was stored in the internal entry.
+ */
+static inline unsigned long xa_to_internal(const void *entry)
+{
+ return (unsigned long)entry >> 2;
+}
+
+/*
+ * xa_is_internal() - Is the entry an internal entry?
+ * @entry: XArray entry.
+ *
+ * Context: Any context.
+ * Return: %true if the entry is an internal entry.
+ */
+static inline bool xa_is_internal(const void *entry)
+{
+ return ((unsigned long)entry & 3) == 2;
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -75,4 +117,55 @@ static inline bool xa_is_value(const void *entry)
#define xa_unlock_irqrestore(xa, flags) \
spin_unlock_irqrestore(&(xa)->xa_lock, flags)
+/* Everything below here is the Advanced API. Proceed with caution. */
+
+/*
+ * The xarray is constructed out of a set of 'chunks' of pointers. Choosing
+ * the best chunk size requires some tradeoffs. A power of two recommends
+ * itself so that we can walk the tree based purely on shifts and masks.
+ * Generally, the larger the better; as the number of slots per level of the
+ * tree increases, the less tall the tree needs to be. But that needs to be
+ * balanced against the memory consumption of each node. On a 64-bit system,
+ * xa_node is currently 576 bytes, and we get 7 of them per 4kB page. If we
+ * doubled the number of slots per node, we'd get only 3 nodes per 4kB page.
+ */
+#ifndef XA_CHUNK_SHIFT
+#define XA_CHUNK_SHIFT (CONFIG_BASE_SMALL ? 4 : 6)
+#endif
+#define XA_CHUNK_SIZE (1UL << XA_CHUNK_SHIFT)
+#define XA_CHUNK_MASK (XA_CHUNK_SIZE - 1)
+
+/* Private */
+static inline bool xa_is_node(const void *entry)
+{
+ return xa_is_internal(entry) && (unsigned long)entry > 4096;
+}
+
+/* Private */
+static inline void *xa_mk_sibling(unsigned int offset)
+{
+ return xa_mk_internal(offset);
+}
+
+/* Private */
+static inline unsigned long xa_to_sibling(const void *entry)
+{
+ return xa_to_internal(entry);
+}
+
+/**
+ * xa_is_sibling() - Is the entry a sibling entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * Return: %true if the entry is a sibling entry.
+ */
+static inline bool xa_is_sibling(const void *entry)
+{
+ return IS_ENABLED(CONFIG_RADIX_TREE_MULTIORDER) &&
+ xa_is_internal(entry) &&
+ (entry < xa_mk_sibling(XA_CHUNK_SIZE - 1));
+}
+
+#define XA_RETRY_ENTRY xa_mk_internal(256)
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 3d7bacb2f8ba..02863c54810d 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -38,6 +38,7 @@
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>
+#include <linux/xarray.h>
/* Number of nodes in fully populated tree of given height */
@@ -98,24 +99,7 @@ static inline void *node_to_entry(void *ptr)
return (void *)((unsigned long)ptr | RADIX_TREE_INTERNAL_NODE);
}
-#define RADIX_TREE_RETRY node_to_entry(NULL)
-
-#ifdef CONFIG_RADIX_TREE_MULTIORDER
-/* Sibling slots point directly to another slot in the same node */
-static inline
-bool is_sibling_entry(const struct radix_tree_node *parent, void *node)
-{
- void __rcu **ptr = node;
- return (parent->slots <= ptr) &&
- (ptr < parent->slots + RADIX_TREE_MAP_SIZE);
-}
-#else
-static inline
-bool is_sibling_entry(const struct radix_tree_node *parent, void *node)
-{
- return false;
-}
-#endif
+#define RADIX_TREE_RETRY XA_RETRY_ENTRY
static inline unsigned long
get_slot_offset(const struct radix_tree_node *parent, void __rcu **slot)
@@ -129,16 +113,10 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,
unsigned int offset = (index >> parent->shift) & RADIX_TREE_MAP_MASK;
void __rcu **entry = rcu_dereference_raw(parent->slots[offset]);
-#ifdef CONFIG_RADIX_TREE_MULTIORDER
- if (radix_tree_is_internal_node(entry)) {
- if (is_sibling_entry(parent, entry)) {
- void __rcu **sibentry;
- sibentry = (void __rcu **) entry_to_node(entry);
- offset = get_slot_offset(parent, sibentry);
- entry = rcu_dereference_raw(*sibentry);
- }
+ if (xa_is_sibling(entry)) {
+ offset = xa_to_sibling(entry);
+ entry = rcu_dereference_raw(parent->slots[offset]);
}
-#endif
*nodep = (void *)entry;
return offset;
@@ -300,10 +278,10 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
} else if (!radix_tree_is_internal_node(entry)) {
pr_debug("radix entry %p offset %ld indices %lu-%lu parent %p\n",
entry, i, first, last, node);
- } else if (is_sibling_entry(node, entry)) {
+ } else if (xa_is_sibling(entry)) {
pr_debug("radix sblng %p offset %ld indices %lu-%lu parent %p val %p\n",
entry, i, first, last, node,
- *(void **)entry_to_node(entry));
+ node->slots[xa_to_sibling(entry)]);
} else {
dump_node(entry_to_node(entry), first);
}
@@ -873,8 +851,7 @@ static void radix_tree_free_nodes(struct radix_tree_node *node)
for (;;) {
void *entry = rcu_dereference_raw(child->slots[offset]);
- if (radix_tree_is_internal_node(entry) &&
- !is_sibling_entry(child, entry)) {
+ if (xa_is_node(entry)) {
child = entry_to_node(entry);
offset = 0;
continue;
@@ -896,7 +873,7 @@ static void radix_tree_free_nodes(struct radix_tree_node *node)
static inline int insert_entries(struct radix_tree_node *node,
void __rcu **slot, void *item, unsigned order, bool replace)
{
- struct radix_tree_node *child;
+ void *sibling;
unsigned i, n, tag, offset, tags = 0;
if (node) {
@@ -914,7 +891,7 @@ static inline int insert_entries(struct radix_tree_node *node,
offset = offset & ~(n - 1);
slot = &node->slots[offset];
}
- child = node_to_entry(slot);
+ sibling = xa_mk_sibling(offset);
for (i = 0; i < n; i++) {
if (slot[i]) {
@@ -931,7 +908,7 @@ static inline int insert_entries(struct radix_tree_node *node,
for (i = 0; i < n; i++) {
struct radix_tree_node *old = rcu_dereference_raw(slot[i]);
if (i) {
- rcu_assign_pointer(slot[i], child);
+ rcu_assign_pointer(slot[i], sibling);
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
if (tags & (1 << tag))
tag_clear(node, tag, offset + i);
@@ -941,9 +918,7 @@ static inline int insert_entries(struct radix_tree_node *node,
if (tags & (1 << tag))
tag_set(node, tag, offset);
}
- if (radix_tree_is_internal_node(old) &&
- !is_sibling_entry(node, old) &&
- (old != RADIX_TREE_RETRY))
+ if (xa_is_node(old))
radix_tree_free_nodes(old);
if (xa_is_value(old))
node->exceptional--;
@@ -1102,10 +1077,10 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
void __rcu **slot, int count, int exceptional)
{
#ifdef CONFIG_RADIX_TREE_MULTIORDER
- void *ptr = node_to_entry(slot);
- unsigned offset = get_slot_offset(node, slot) + 1;
+ unsigned offset = get_slot_offset(node, slot);
+ void *ptr = xa_mk_sibling(offset);
- while (offset < RADIX_TREE_MAP_SIZE) {
+ while (++offset < RADIX_TREE_MAP_SIZE) {
if (rcu_dereference_raw(node->slots[offset]) != ptr)
break;
if (count < 0) {
@@ -1113,7 +1088,6 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
node->count--;
}
node->exceptional += exceptional;
- offset++;
}
#endif
}
@@ -1312,8 +1286,7 @@ int radix_tree_split(struct radix_tree_root *root, unsigned long index,
tags |= 1 << tag;
for (end = offset + 1; end < RADIX_TREE_MAP_SIZE; end++) {
- if (!is_sibling_entry(parent,
- rcu_dereference_raw(parent->slots[end])))
+ if (!xa_is_sibling(rcu_dereference_raw(parent->slots[end])))
break;
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
if (tags & (1 << tag))
@@ -1609,11 +1582,9 @@ static void set_iter_tags(struct radix_tree_iter *iter,
static void __rcu **skip_siblings(struct radix_tree_node **nodep,
void __rcu **slot, struct radix_tree_iter *iter)
{
- void *sib = node_to_entry(slot - 1);
-
while (iter->index < iter->next_index) {
*nodep = rcu_dereference_raw(*slot);
- if (*nodep && *nodep != sib)
+ if (*nodep && !xa_is_sibling(*nodep))
return slot;
slot++;
iter->index = __radix_tree_iter_add(iter, 1);
@@ -1764,7 +1735,7 @@ void __rcu **radix_tree_next_chunk(const struct radix_tree_root *root,
while (++offset < RADIX_TREE_MAP_SIZE) {
void *slot = rcu_dereference_raw(
node->slots[offset]);
- if (is_sibling_entry(node, slot))
+ if (xa_is_sibling(slot))
continue;
if (slot)
break;
@@ -2283,6 +2254,7 @@ void __init radix_tree_init(void)
BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
BUILD_BUG_ON(GFP_ZONEMASK != (__force gfp_t)15);
+ BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
sizeof(struct radix_tree_node), 0,
SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
--
2.16.1
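To make the sibling-entry conversion above easier to review: the old code
marked the duplicate slots of a multi-order entry with a pointer back into
the parent node, so resolving one meant converting the entry back to a slot
address and asking the node for its offset.  The new code stores the
canonical offset in the entry itself, so the lookup becomes a single array
index.  Below is a rough, self-contained userspace sketch of that pattern;
the tagging helpers (mk_internal() and friends) are illustrative stand-ins,
not the kernel's actual encoding.

/* Standalone sketch of sibling-entry resolution; the encoding is illustrative. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define CHUNK_SIZE	64	/* stands in for RADIX_TREE_MAP_SIZE */

/* Tag the low bits so an internal entry can never alias a real pointer. */
static void *mk_internal(unsigned long v)	{ return (void *)((v << 2) | 2); }
static bool is_internal(const void *entry)	{ return ((unsigned long)entry & 3) == 2; }
static unsigned long to_internal(const void *entry) { return (unsigned long)entry >> 2; }

/* A sibling entry is an internal entry whose payload is a slot offset. */
static void *mk_sibling(unsigned int offset)	{ return mk_internal(offset); }
static unsigned int to_sibling(const void *entry) { return to_internal(entry); }
static bool is_sibling(const void *entry)
{
	return is_internal(entry) && to_internal(entry) < CHUNK_SIZE;
}

struct node {
	void *slots[CHUNK_SIZE];
};

/* Descend one level: a sibling redirects to the canonical slot in the same node. */
static void *descend(struct node *node, unsigned int offset)
{
	void *entry = node->slots[offset];

	if (is_sibling(entry)) {
		offset = to_sibling(entry);
		entry = node->slots[offset];
	}
	return entry;
}

int main(void)
{
	struct node node = { { NULL } };
	static int page;
	unsigned int i;

	/* A hypothetical order-2 entry occupies slots 8-11: slot 8 holds the
	 * real entry, slots 9-11 hold siblings that encode offset 8. */
	node.slots[8] = &page;
	for (i = 9; i < 12; i++)
		node.slots[i] = mk_sibling(8);

	for (i = 8; i < 12; i++)
		assert(descend(&node, i) == &page);

	printf("every index covered by the order-2 entry resolves to it\n");
	return 0;
}

The point is that resolving a sibling no longer needs pointer arithmetic
against the node; the offset travels inside the entry, which is what lets
radix_tree_descend() shrink to the two-line form in the hunk above.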
From: Matthew Wilcox <[email protected]>
Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root. Rename the address_space ->page_tree to ->pages,
since we don't really care that it's a tree. Take the opportunity to
rearrange the elements of address_space to pack them better on 64-bit,
and make the comments more useful.
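For anyone converting further callers, the transformation is purely
mechanical: spin_lock_irq(&mapping->tree_lock) becomes
xa_lock_irq(&mapping->pages), and the tree operations take &mapping->pages
instead of &mapping->page_tree.  The toy program below models only that
shape in plain userspace C: the xa_lock/xa_unlock macros here are mutex
stand-ins for the irq-safe spinlock wrappers, and struct address_space is
pared down to the two fields involved.

/* Userspace model of this patch's locking change: the lock that used to sit
 * next to the tree (tree_lock) now lives inside the tree root, and callers
 * lock through the mapping's ->pages member. */
#include <pthread.h>
#include <stdio.h>

/* Stand-in for struct radix_tree_root once it embeds xa_lock. */
struct pages_root {
	pthread_mutex_t xa_lock;	/* the kernel uses an irq-safe spinlock */
	void *head;
};

struct address_space {
	struct pages_root pages;	/* was: page_tree plus a separate tree_lock */
	unsigned long nrpages;		/* protected by pages.xa_lock */
};

/* Mutex stand-ins for the xa_lock_irq()/xa_unlock_irq() wrappers. */
#define xa_lock(xa)	pthread_mutex_lock(&(xa)->xa_lock)
#define xa_unlock(xa)	pthread_mutex_unlock(&(xa)->xa_lock)

static void account_one_page(struct address_space *mapping)
{
	xa_lock(&mapping->pages);	/* was: spin_lock_irq(&mapping->tree_lock) */
	mapping->nrpages++;
	xa_unlock(&mapping->pages);	/* was: spin_unlock_irq(&mapping->tree_lock) */
}

int main(void)
{
	struct address_space mapping = {
		.pages = { .xa_lock = PTHREAD_MUTEX_INITIALIZER },
	};

	account_one_page(&mapping);
	printf("nrpages = %lu\n", mapping.nrpages);
	return 0;
}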
Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/cgroup-v1/memory.txt | 2 +-
Documentation/vm/page_migration | 14 +--
arch/arm/include/asm/cacheflush.h | 6 +-
arch/nios2/include/asm/cacheflush.h | 6 +-
arch/parisc/include/asm/cacheflush.h | 6 +-
drivers/staging/lustre/lustre/llite/glimpse.c | 2 +-
drivers/staging/lustre/lustre/mdc/mdc_request.c | 8 +-
fs/afs/write.c | 9 +-
fs/btrfs/compression.c | 2 +-
fs/btrfs/extent_io.c | 16 +--
fs/btrfs/inode.c | 2 +-
fs/buffer.c | 13 ++-
fs/cifs/file.c | 9 +-
fs/dax.c | 123 ++++++++++++------------
fs/f2fs/data.c | 6 +-
fs/f2fs/dir.c | 6 +-
fs/f2fs/inline.c | 6 +-
fs/f2fs/node.c | 8 +-
fs/fs-writeback.c | 20 ++--
fs/inode.c | 11 +--
fs/nilfs2/btnode.c | 20 ++--
fs/nilfs2/page.c | 22 ++---
include/linux/backing-dev.h | 12 +--
include/linux/fs.h | 17 ++--
include/linux/mm.h | 2 +-
include/linux/pagemap.h | 4 +-
mm/filemap.c | 84 ++++++++--------
mm/huge_memory.c | 10 +-
mm/khugepaged.c | 49 +++++-----
mm/memcontrol.c | 4 +-
mm/migrate.c | 32 +++---
mm/page-writeback.c | 42 ++++----
mm/readahead.c | 2 +-
mm/rmap.c | 4 +-
mm/shmem.c | 60 ++++++------
mm/swap_state.c | 17 ++--
mm/truncate.c | 22 ++---
mm/vmscan.c | 12 +--
mm/workingset.c | 22 ++---
39 files changed, 344 insertions(+), 368 deletions(-)
diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
index a4af2e124e24..e8ed4c2c2e9c 100644
--- a/Documentation/cgroup-v1/memory.txt
+++ b/Documentation/cgroup-v1/memory.txt
@@ -262,7 +262,7 @@ When oom event notifier is registered, event will be delivered.
2.6 Locking
lock_page_cgroup()/unlock_page_cgroup() should not be called under
- mapping->tree_lock.
+ the mapping's xa_lock.
Other lock order is following:
PG_locked.
diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration
index 0478ae2ad44a..faf849596a85 100644
--- a/Documentation/vm/page_migration
+++ b/Documentation/vm/page_migration
@@ -90,7 +90,7 @@ Steps:
1. Lock the page to be migrated
-2. Insure that writeback is complete.
+2. Ensure that writeback is complete.
3. Lock the new page that we want to move to. It is locked so that accesses to
this (not yet uptodate) page immediately lock while the move is in progress.
@@ -100,8 +100,8 @@ Steps:
mapcount is not zero then we do not migrate the page. All user space
processes that attempt to access the page will now wait on the page lock.
-5. The radix tree lock is taken. This will cause all processes trying
- to access the page via the mapping to block on the radix tree spinlock.
+5. The address space xa_lock is taken. This will cause all processes trying
+ to access the page via the mapping to block on the spinlock.
6. The refcount of the page is examined and we back out if references remain
otherwise we know that we are the only one referencing this page.
@@ -114,12 +114,12 @@ Steps:
9. The radix tree is changed to point to the new page.
-10. The reference count of the old page is dropped because the radix tree
+10. The reference count of the old page is dropped because the address space
reference is gone. A reference to the new page is established because
- the new page is referenced to by the radix tree.
+ the new page is referenced by the address space.
-11. The radix tree lock is dropped. With that lookups in the mapping
- become possible again. Processes will move from spinning on the tree_lock
+11. The address space xa_lock is dropped. With that, lookups in the mapping
+ become possible again. Processes will move from spinning on the xa_lock
to sleeping on the locked new page.
12. The page contents are copied to the new page.
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 74504b154256..f4ead9a74b7d 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -318,10 +318,8 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);
-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
#define flush_icache_user_range(vma,page,addr,len) \
flush_dcache_page(page)
diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 55e383c173f7..7a6eda381964 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -46,9 +46,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
extern void flush_dcache_range(unsigned long start, unsigned long end);
extern void invalidate_dcache_range(unsigned long start, unsigned long end);
-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
#endif /* _ASM_NIOS2_CACHEFLUSH_H */
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 3742508cc534..b772dd320118 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -54,10 +54,8 @@ void invalidate_kernel_vmap_range(void *vaddr, int size);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *page);
-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
#define flush_icache_page(vma,page) do { \
flush_kernel_dcache_page(page); \
diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index c43ac574274c..5f2843da911c 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -69,7 +69,7 @@ blkcnt_t dirty_cnt(struct inode *inode)
void *results[1];
if (inode->i_mapping)
- cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->page_tree,
+ cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->pages,
results, 0, 1,
PAGECACHE_TAG_DIRTY);
if (cnt == 0 && atomic_read(&vob->vob_mmap_cnt) > 0)
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 03e55bca4ada..45dcf9f958d4 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -937,14 +937,14 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
struct page *page;
int found;
- spin_lock_irq(&mapping->tree_lock);
- found = radix_tree_gang_lookup(&mapping->page_tree,
+ xa_lock_irq(&mapping->pages);
+ found = radix_tree_gang_lookup(&mapping->pages,
(void **)&page, offset, 1);
if (found > 0 && !radix_tree_exceptional_entry(page)) {
struct lu_dirpage *dp;
get_page(page);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* In contrast to find_lock_page() we are sure that directory
* page cannot be truncated (while DLM lock is held) and,
@@ -992,7 +992,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
page = ERR_PTR(-EIO);
}
} else {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
page = NULL;
}
return page;
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 9370e2feb999..603d2ce48dbb 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -570,10 +570,11 @@ static int afs_writepages_region(struct address_space *mapping,
_debug("wback %lx", page->index);
- /* at this point we hold neither mapping->tree_lock nor lock on
- * the page itself: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled back from
- * swapper_space to tmpfs file mapping
+ /*
+ * at this point we hold neither the xa_lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
*/
ret = lock_page_killable(page);
if (ret < 0) {
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 07d049c0c20f..0e35aa6aa2f1 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -458,7 +458,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
break;
rcu_read_lock();
- page = radix_tree_lookup(&mapping->page_tree, pg_index);
+ page = radix_tree_lookup(&mapping->pages, pg_index);
rcu_read_unlock();
if (page && !radix_tree_exceptional_entry(page)) {
misses++;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index dfeb74a0be77..1f2739702518 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3958,11 +3958,11 @@ static int extent_write_cache_pages(struct address_space *mapping,
done_index = page->index;
/*
- * At this point we hold neither mapping->tree_lock nor
- * lock on the page itself: the page may be truncated or
- * invalidated (changing page->mapping to NULL), or even
- * swizzled back from swapper_space to tmpfs file
- * mapping
+ * At this point we hold neither the xa_lock nor
+ * the page lock: the page may be truncated or
+ * invalidated (changing page->mapping to NULL),
+ * or even swizzled back from swapper_space to
+ * tmpfs file mapping
*/
if (!trylock_page(page)) {
flush_write_bio(epd);
@@ -5169,13 +5169,13 @@ void clear_extent_buffer_dirty(struct extent_buffer *eb)
WARN_ON(!PagePrivate(page));
clear_page_dirty_for_io(page);
- spin_lock_irq(&page->mapping->tree_lock);
+ xa_lock_irq(&page->mapping->pages);
if (!PageDirty(page)) {
- radix_tree_tag_clear(&page->mapping->page_tree,
+ radix_tree_tag_clear(&page->mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&page->mapping->tree_lock);
+ xa_unlock_irq(&page->mapping->pages);
ClearPageError(page);
unlock_page(page);
}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 53ca025655fc..d0016c1c7b04 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7427,7 +7427,7 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
{
- struct radix_tree_root *root = &inode->i_mapping->page_tree;
+ struct radix_tree_root *root = &inode->i_mapping->pages;
bool found = false;
void **pagep = NULL;
struct page *page = NULL;
diff --git a/fs/buffer.c b/fs/buffer.c
index 0b487cdb7124..692ee249fb6a 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -185,10 +185,9 @@ EXPORT_SYMBOL(end_buffer_write_sync);
* we get exclusion from try_to_free_buffers with the blockdev mapping's
* private_lock.
*
- * Hack idea: for the blockdev mapping, i_bufferlist_lock contention
+ * Hack idea: for the blockdev mapping, private_lock contention
* may be quite high. This code could TryLock the page, and if that
- * succeeds, there is no need to take private_lock. (But if
- * private_lock is contended then so is mapping->tree_lock).
+ * succeeds, there is no need to take private_lock.
*/
static struct buffer_head *
__find_get_block_slow(struct block_device *bdev, sector_t block)
@@ -599,14 +598,14 @@ void __set_page_dirty(struct page *page, struct address_space *mapping,
{
unsigned long flags;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
if (page->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
+ radix_tree_tag_set(&mapping->pages,
page_index(page), PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
}
EXPORT_SYMBOL_GPL(__set_page_dirty);
@@ -1096,7 +1095,7 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* inode list.
*
* mark_buffer_dirty() is atomic. It takes bh->b_page->mapping->private_lock,
- * mapping->tree_lock and mapping->host->i_lock.
+ * the mapping's xa_lock and mapping->host->i_lock.
*/
void mark_buffer_dirty(struct buffer_head *bh)
{
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 7cee97b93a61..a6ace9ac4d94 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1987,11 +1987,10 @@ wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
for (i = 0; i < found_pages; i++) {
page = wdata->pages[i];
/*
- * At this point we hold neither mapping->tree_lock nor
- * lock on the page itself: the page may be truncated or
- * invalidated (changing page->mapping to NULL), or even
- * swizzled back from swapper_space to tmpfs file
- * mapping
+ * At this point we hold neither the xa_lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
*/
if (nr_pages == 0)
diff --git a/fs/dax.c b/fs/dax.c
index 0276df90e86c..cac580399ed4 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -159,11 +159,9 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
}
/*
- * We do not necessarily hold the mapping->tree_lock when we call this
- * function so it is possible that 'entry' is no longer a valid item in the
- * radix tree. This is okay because all we really need to do is to find the
- * correct waitqueue where tasks might be waiting for that old 'entry' and
- * wake them.
+ * @entry may no longer be the entry at the index in the mapping.
+ * The important information it conveys is whether the entry at
+ * this index used to be a PMD entry.
*/
static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
pgoff_t index, void *entry, bool wake_all)
@@ -175,7 +173,7 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
/*
* Checking for locked entry and prepare_to_wait_exclusive() happens
- * under mapping->tree_lock, ditto for entry handling in our callers.
+ * under xa_lock, ditto for entry handling in our callers.
* So at this point all tasks that could have seen our entry locked
* must be in the waitqueue and the following check will see them.
*/
@@ -184,41 +182,38 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
}
/*
- * Check whether the given slot is locked. The function must be called with
- * mapping->tree_lock held
+ * Check whether the given slot is locked. Must be called with xa_lock held.
*/
static inline int slot_locked(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
return entry & RADIX_DAX_ENTRY_LOCK;
}
/*
- * Mark the given slot is locked. The function must be called with
- * mapping->tree_lock held
+ * Mark the given slot as locked. Must be called with xa_lock held.
*/
static inline void *lock_slot(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
entry |= RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
+ radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
return (void *)entry;
}
/*
- * Mark the given slot is unlocked. The function must be called with
- * mapping->tree_lock held
+ * Mark the given slot as unlocked. Must be called with xa_lock held.
*/
static inline void *unlock_slot(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
entry &= ~(unsigned long)RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
+ radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
return (void *)entry;
}
@@ -229,7 +224,7 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
* put_locked_mapping_entry() when he locked the entry and now wants to
* unlock it.
*
- * The function must be called with mapping->tree_lock held.
+ * Must be called with xa_lock held.
*/
static void *get_unlocked_mapping_entry(struct address_space *mapping,
pgoff_t index, void ***slotp)
@@ -242,7 +237,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
ewait.wait.func = wake_exceptional_entry_func;
for (;;) {
- entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
+ entry = __radix_tree_lookup(&mapping->pages, index, NULL,
&slot);
if (!entry ||
WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
@@ -255,10 +250,10 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
schedule();
finish_wait(wq, &ewait.wait);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
}
@@ -267,15 +262,15 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
{
void *entry, **slot;
- spin_lock_irq(&mapping->tree_lock);
- entry = __radix_tree_lookup(&mapping->page_tree, index, NULL, &slot);
+ xa_lock_irq(&mapping->pages);
+ entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
!slot_locked(mapping, slot))) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return;
}
unlock_slot(mapping, slot);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
dax_wake_mapping_entry_waiter(mapping, index, entry, false);
}
@@ -332,7 +327,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
void *entry, **slot;
restart:
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);
if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
@@ -364,12 +359,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
if (pmd_downgrade) {
/*
* Make sure 'entry' remains valid while we drop
- * mapping->tree_lock.
+ * xa_lock.
*/
entry = lock_slot(mapping, slot);
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* Besides huge zero pages the only other thing that gets
* downgraded are empty entries which don't need to be
@@ -386,26 +381,26 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
put_locked_mapping_entry(mapping, index);
return ERR_PTR(err);
}
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
if (!entry) {
/*
- * We needed to drop the page_tree lock while calling
+ * We needed to drop the pages lock while calling
* radix_tree_preload() and we didn't have an entry to
* lock. See if another thread inserted an entry at
* our index during this time.
*/
- entry = __radix_tree_lookup(&mapping->page_tree, index,
+ entry = __radix_tree_lookup(&mapping->pages, index,
NULL, &slot);
if (entry) {
radix_tree_preload_end();
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
goto restart;
}
}
if (pmd_downgrade) {
- radix_tree_delete(&mapping->page_tree, index);
+ radix_tree_delete(&mapping->pages, index);
mapping->nrexceptional--;
dax_wake_mapping_entry_waiter(mapping, index, entry,
true);
@@ -413,11 +408,11 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
entry = dax_radix_locked_entry(0, size_flag | RADIX_DAX_EMPTY);
- err = __radix_tree_insert(&mapping->page_tree, index,
+ err = __radix_tree_insert(&mapping->pages, index,
dax_radix_order(entry), entry);
radix_tree_preload_end();
if (err) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* Our insertion of a DAX entry failed, most likely
* because we were inserting a PMD entry and it
@@ -430,12 +425,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
}
/* Good, we have inserted empty locked entry into the tree. */
mapping->nrexceptional++;
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}
entry = lock_slot(mapping, slot);
out_unlock:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}
@@ -444,22 +439,22 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
{
int ret = 0;
void *entry;
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, NULL);
if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
goto out;
if (!trunc &&
- (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
- radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE)))
+ (radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
+ radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE)))
goto out;
- radix_tree_delete(page_tree, index);
+ radix_tree_delete(pages, index);
mapping->nrexceptional--;
ret = 1;
out:
put_unlocked_mapping_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return ret;
}
/*
@@ -529,7 +524,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
void *entry, sector_t sector,
unsigned long flags, bool dirty)
{
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;
void *new_entry;
pgoff_t index = vmf->pgoff;
@@ -545,7 +540,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
unmap_mapping_pages(mapping, vmf->pgoff, 1, false);
}
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
new_entry = dax_radix_locked_entry(sector, flags);
if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
@@ -561,17 +556,17 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
void **slot;
void *ret;
- ret = __radix_tree_lookup(page_tree, index, &node, &slot);
+ ret = __radix_tree_lookup(pages, index, &node, &slot);
WARN_ON_ONCE(ret != entry);
- __radix_tree_replace(page_tree, node, slot,
+ __radix_tree_replace(pages, node, slot,
new_entry, NULL);
entry = new_entry;
}
if (dirty)
- radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);
+ radix_tree_tag_set(pages, index, PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}
@@ -661,7 +656,7 @@ static int dax_writeback_one(struct block_device *bdev,
struct dax_device *dax_dev, struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;
void *entry2, **slot, *kaddr;
long ret = 0, id;
sector_t sector;
@@ -676,7 +671,7 @@ static int dax_writeback_one(struct block_device *bdev,
if (WARN_ON(!radix_tree_exceptional_entry(entry)))
return -EIO;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
/* Entry got punched out / reallocated? */
if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
@@ -695,7 +690,7 @@ static int dax_writeback_one(struct block_device *bdev,
}
/* Another fsync thread may have already written back this entry */
- if (!radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE))
+ if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
goto put_unlocked;
/* Lock the entry to serialize with page faults */
entry = lock_slot(mapping, slot);
@@ -703,11 +698,11 @@ static int dax_writeback_one(struct block_device *bdev,
* We can clear the tag now but we have to be careful so that concurrent
* dax_writeback_one() calls for the same index cannot finish before we
* actually flush the caches. This is achieved as the calls will look
- * at the entry only under tree_lock and once they do that they will
+ * at the entry only under xa_lock and once they do that they will
* see the entry locked and wait for it to unlock.
*/
- radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_TOWRITE);
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_tag_clear(pages, index, PAGECACHE_TAG_TOWRITE);
+ xa_unlock_irq(&mapping->pages);
/*
* Even if dax_writeback_mapping_range() was given a wbc->range_start
@@ -725,7 +720,7 @@ static int dax_writeback_one(struct block_device *bdev,
goto dax_unlock;
/*
- * dax_direct_access() may sleep, so cannot hold tree_lock over
+ * dax_direct_access() may sleep, so cannot hold xa_lock over
* its invocation.
*/
ret = dax_direct_access(dax_dev, pgoff, size / PAGE_SIZE, &kaddr, &pfn);
@@ -745,9 +740,9 @@ static int dax_writeback_one(struct block_device *bdev,
* the pfn mappings are writeprotected and fault waits for mapping
* entry lock.
*/
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_clear(pages, index, PAGECACHE_TAG_DIRTY);
+ xa_unlock_irq(&mapping->pages);
trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
dax_unlock:
dax_read_unlock(id);
@@ -756,7 +751,7 @@ static int dax_writeback_one(struct block_device *bdev,
put_unlocked:
put_unlocked_mapping_entry(mapping, index, entry2);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return ret;
}
@@ -1524,21 +1519,21 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
pgoff_t index = vmf->pgoff;
int vmf_ret, error;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);
/* Did we race with someone splitting entry or so? */
if (!entry ||
(pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
(pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
put_unlocked_mapping_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
return VM_FAULT_NOPAGE;
}
- radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
+ radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
entry = lock_slot(mapping, slot);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
switch (pe_size) {
case PE_SIZE_PTE:
error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 7578ed1a85e0..4eee39befc67 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2381,12 +2381,12 @@ void f2fs_set_page_dirty_nobuffers(struct page *page)
SetPageDirty(page);
spin_unlock(&mapping->private_lock);
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
WARN_ON_ONCE(!PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
+ radix_tree_tag_set(&mapping->pages,
page_index(page), PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index f00b5ed8c011..0fd9695eddf6 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -741,10 +741,10 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
if (bit_pos == NR_DENTRY_IN_BLOCK &&
!truncate_hole(dir, page->index, page->index + 1)) {
- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree, page_index(page),
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
clear_page_dirty_for_io(page);
ClearPagePrivate(page);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 90e38d8ea688..7858b8e15f33 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -226,10 +226,10 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
kunmap_atomic(src_addr);
set_page_dirty(dn.inode_page);
- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree, page_index(page),
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
set_inode_flag(inode, FI_APPEND_WRITE);
set_inode_flag(inode, FI_DATA_EXIST);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 177c438e4a56..fba2644abdf0 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -91,11 +91,11 @@ static void clear_node_page_dirty(struct page *page)
unsigned int long flags;
if (PageDirty(page)) {
- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree,
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
clear_page_dirty_for_io(page);
dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
@@ -1140,7 +1140,7 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
f2fs_bug_on(sbi, check_nid_range(sbi, nid));
rcu_read_lock();
- apage = radix_tree_lookup(&NODE_MAPPING(sbi)->page_tree, nid);
+ apage = radix_tree_lookup(&NODE_MAPPING(sbi)->pages, nid);
rcu_read_unlock();
if (apage)
return;
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d4d04fee568a..d5c0e70dbfa8 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -347,9 +347,9 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
* By the time control reaches here, RCU grace period has passed
* since I_WB_SWITCH assertion and all wb stat update transactions
* between unlocked_inode_to_wb_begin/end() are guaranteed to be
- * synchronizing against mapping->tree_lock.
+ * synchronizing against xa_lock.
*
- * Grabbing old_wb->list_lock, inode->i_lock and mapping->tree_lock
+ * Grabbing old_wb->list_lock, inode->i_lock and xa_lock
* gives us exclusion against all wb related operations on @inode
* including IO list manipulations and stat updates.
*/
@@ -361,7 +361,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
}
spin_lock(&inode->i_lock);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
/*
* Once I_FREEING is visible under i_lock, the eviction path owns
@@ -373,22 +373,22 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
/*
* Count and transfer stats. Note that PAGECACHE_TAG_DIRTY points
* to possibly dirty pages while PAGECACHE_TAG_WRITEBACK points to
- * pages actually under underwriteback.
+ * pages actually under writeback.
*/
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
PAGECACHE_TAG_DIRTY) {
struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (likely(page) && PageDirty(page)) {
dec_wb_stat(old_wb, WB_RECLAIMABLE);
inc_wb_stat(new_wb, WB_RECLAIMABLE);
}
}
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
PAGECACHE_TAG_WRITEBACK) {
struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (likely(page)) {
WARN_ON_ONCE(!PageWriteback(page));
dec_wb_stat(old_wb, WB_WRITEBACK);
@@ -430,7 +430,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
*/
smp_store_release(&inode->i_state, inode->i_state & ~I_WB_SWITCH);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
spin_unlock(&inode->i_lock);
spin_unlock(&new_wb->list_lock);
spin_unlock(&old_wb->list_lock);
@@ -507,7 +507,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
/*
* In addition to synchronizing among switchers, I_WB_SWITCH tells
* the RCU protected stat update paths to grab the mapping's
- * tree_lock so that stat transfer can synchronize against them.
+ * xa_lock so that stat transfer can synchronize against them.
* Let's continue after I_WB_SWITCH is guaranteed to be visible.
*/
call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
diff --git a/fs/inode.c b/fs/inode.c
index ef362364d396..07e26909e24d 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -349,8 +349,7 @@ EXPORT_SYMBOL(inc_nlink);
void address_space_init_once(struct address_space *mapping)
{
memset(mapping, 0, sizeof(*mapping));
- INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC | __GFP_ACCOUNT);
- spin_lock_init(&mapping->tree_lock);
+ INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);
init_rwsem(&mapping->i_mmap_rwsem);
INIT_LIST_HEAD(&mapping->private_list);
spin_lock_init(&mapping->private_lock);
@@ -499,14 +498,14 @@ EXPORT_SYMBOL(__remove_inode_hash);
void clear_inode(struct inode *inode)
{
/*
- * We have to cycle tree_lock here because reclaim can be still in the
+ * We have to cycle the xa_lock here because reclaim can be in the
* process of removing the last page (in __delete_from_page_cache())
- * and we must not free mapping under it.
+ * and we must not free the mapping under it.
*/
- spin_lock_irq(&inode->i_data.tree_lock);
+ xa_lock_irq(&inode->i_data.pages);
BUG_ON(inode->i_data.nrpages);
BUG_ON(inode->i_data.nrexceptional);
- spin_unlock_irq(&inode->i_data.tree_lock);
+ xa_unlock_irq(&inode->i_data.pages);
BUG_ON(!list_empty(&inode->i_data.private_list));
BUG_ON(!(inode->i_state & I_FREEING));
BUG_ON(inode->i_state & I_CLEAR);
diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index c21e0b4454a6..9e2a00207436 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -193,9 +193,9 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
(unsigned long long)oldkey,
(unsigned long long)newkey);
- spin_lock_irq(&btnc->tree_lock);
- err = radix_tree_insert(&btnc->page_tree, newkey, obh->b_page);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_lock_irq(&btnc->pages);
+ err = radix_tree_insert(&btnc->pages, newkey, obh->b_page);
+ xa_unlock_irq(&btnc->pages);
/*
* Note: page->index will not change to newkey until
* nilfs_btnode_commit_change_key() will be called.
@@ -251,11 +251,11 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
(unsigned long long)newkey);
mark_buffer_dirty(obh);
- spin_lock_irq(&btnc->tree_lock);
- radix_tree_delete(&btnc->page_tree, oldkey);
- radix_tree_tag_set(&btnc->page_tree, newkey,
+ xa_lock_irq(&btnc->pages);
+ radix_tree_delete(&btnc->pages, oldkey);
+ radix_tree_tag_set(&btnc->pages, newkey,
PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_unlock_irq(&btnc->pages);
opage->index = obh->b_blocknr = newkey;
unlock_page(opage);
@@ -283,9 +283,9 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
return;
if (nbh == NULL) { /* blocksize == pagesize */
- spin_lock_irq(&btnc->tree_lock);
- radix_tree_delete(&btnc->page_tree, newkey);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_lock_irq(&btnc->pages);
+ radix_tree_delete(&btnc->pages, newkey);
+ xa_unlock_irq(&btnc->pages);
unlock_page(ctxt->bh->b_page);
} else
brelse(nbh);
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 68241512d7c1..1c6703efde9e 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -331,15 +331,15 @@ void nilfs_copy_back_pages(struct address_space *dmap,
struct page *page2;
/* move the page to the destination cache */
- spin_lock_irq(&smap->tree_lock);
- page2 = radix_tree_delete(&smap->page_tree, offset);
+ xa_lock_irq(&smap->pages);
+ page2 = radix_tree_delete(&smap->pages, offset);
WARN_ON(page2 != page);
smap->nrpages--;
- spin_unlock_irq(&smap->tree_lock);
+ xa_unlock_irq(&smap->pages);
- spin_lock_irq(&dmap->tree_lock);
- err = radix_tree_insert(&dmap->page_tree, offset, page);
+ xa_lock_irq(&dmap->pages);
+ err = radix_tree_insert(&dmap->pages, offset, page);
if (unlikely(err < 0)) {
WARN_ON(err == -EEXIST);
page->mapping = NULL;
@@ -348,11 +348,11 @@ void nilfs_copy_back_pages(struct address_space *dmap,
page->mapping = dmap;
dmap->nrpages++;
if (PageDirty(page))
- radix_tree_tag_set(&dmap->page_tree,
+ radix_tree_tag_set(&dmap->pages,
offset,
PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&dmap->tree_lock);
+ xa_unlock_irq(&dmap->pages);
}
unlock_page(page);
}
@@ -474,15 +474,15 @@ int __nilfs_clear_page_dirty(struct page *page)
struct address_space *mapping = page->mapping;
if (mapping) {
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
if (test_bit(PG_dirty, &page->flags)) {
- radix_tree_tag_clear(&mapping->page_tree,
+ radix_tree_tag_clear(&mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return clear_page_dirty_for_io(page);
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return 0;
}
return TestClearPageDirty(page);
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 3e4ce54d84ab..3df0d20e23f3 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -329,7 +329,7 @@ static inline bool inode_to_wb_is_valid(struct inode *inode)
* @inode: inode of interest
*
* Returns the wb @inode is currently associated with. The caller must be
- * holding either @inode->i_lock, @inode->i_mapping->tree_lock, or the
+ * holding either @inode->i_lock, @inode->i_mapping->pages.xa_lock, or the
* associated wb's list_lock.
*/
static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
@@ -337,7 +337,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
#ifdef CONFIG_LOCKDEP
WARN_ON_ONCE(debug_locks &&
(!lockdep_is_held(&inode->i_lock) &&
- !lockdep_is_held(&inode->i_mapping->tree_lock) &&
+ !lockdep_is_held(&inode->i_mapping->pages.xa_lock) &&
!lockdep_is_held(&inode->i_wb->list_lock)));
#endif
return inode->i_wb;
@@ -349,7 +349,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
* @lockedp: temp bool output param, to be passed to the end function
*
* The caller wants to access the wb associated with @inode but isn't
- * holding inode->i_lock, mapping->tree_lock or wb->list_lock. This
+ * holding inode->i_lock, mapping->pages.xa_lock or wb->list_lock. This
* function determines the wb associated with @inode and ensures that the
* association doesn't change until the transaction is finished with
* unlocked_inode_to_wb_end().
@@ -370,10 +370,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
*lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
if (unlikely(*lockedp))
- spin_lock_irq(&inode->i_mapping->tree_lock);
+ xa_lock_irq(&inode->i_mapping->pages);
/*
- * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
+ * Protected by either !I_WB_SWITCH + rcu_read_lock() or xa_lock.
* inode_to_wb() will bark. Deref directly.
*/
return inode->i_wb;
@@ -387,7 +387,7 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
{
if (unlikely(locked))
- spin_unlock_irq(&inode->i_mapping->tree_lock);
+ xa_unlock_irq(&inode->i_mapping->pages);
rcu_read_unlock();
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 2a815560fda0..e227f68e0418 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -13,6 +13,7 @@
#include <linux/list_lru.h>
#include <linux/llist.h>
#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/rbtree.h>
#include <linux/init.h>
#include <linux/pid.h>
@@ -390,23 +391,21 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
struct address_space {
struct inode *host; /* owner: inode, block_device */
- struct radix_tree_root page_tree; /* radix tree of all pages */
- spinlock_t tree_lock; /* and lock protecting it */
+ struct radix_tree_root pages; /* cached pages */
+ gfp_t gfp_mask; /* for allocating pages */
atomic_t i_mmap_writable;/* count VM_SHARED mappings */
struct rb_root_cached i_mmap; /* tree of private and shared mappings */
struct rw_semaphore i_mmap_rwsem; /* protect tree, count, list */
- /* Protected by tree_lock together with the radix tree */
+ /* Protected by pages.xa_lock */
unsigned long nrpages; /* number of total pages */
- /* number of shadow or DAX exceptional entries */
- unsigned long nrexceptional;
+ unsigned long nrexceptional; /* shadow or DAX entries */
pgoff_t writeback_index;/* writeback starts here */
const struct address_space_operations *a_ops; /* methods */
unsigned long flags; /* error bits */
+ errseq_t wb_err;
spinlock_t private_lock; /* for use by the address_space */
- gfp_t gfp_mask; /* implicit gfp mask for allocations */
- struct list_head private_list; /* for use by the address_space */
+ struct list_head private_list; /* ditto */
void *private_data; /* ditto */
- errseq_t wb_err;
} __attribute__((aligned(sizeof(long)))) __randomize_layout;
/*
* On most architectures that alignment is already the case; but
@@ -1986,7 +1985,7 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
*
* I_WB_SWITCH Cgroup bdi_writeback switching in progress. Used to
* synchronize competing switching instances and to tell
- * wb stat updates to grab mapping->tree_lock. See
+ * wb stat updates to grab mapping->pages.xa_lock. See
* inode_switch_wb_work_fn() for details.
*
* I_OVL_INUSE Used by overlayfs to get exclusive ownership on upper
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47b0fb0a6e41..aad22344d685 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -738,7 +738,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
* refcount. The each user mapping also has a reference to the page.
*
* The pagecache pages are stored in a per-mapping radix tree, which is
- * rooted at mapping->page_tree, and indexed by offset.
+ * rooted at mapping->pages, and indexed by offset.
* Where 2.4 and early 2.6 kernels kept dirty/clean pages in per-address_space
* lists, we instead now tag pages as dirty/writeback in the radix tree.
*
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34ce3ebf97d5..80a6149152d4 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -144,7 +144,7 @@ void release_pages(struct page **pages, int nr);
* 3. check the page is still in pagecache (if no, goto 1)
*
* Remove-side that cares about stability of _refcount (eg. reclaim) has the
- * following (with tree_lock held for write):
+ * following (with pages.xa_lock held):
* A. atomically check refcount is correct and set it to 0 (atomic_cmpxchg)
* B. remove page from pagecache
* C. free the page
@@ -157,7 +157,7 @@ void release_pages(struct page **pages, int nr);
*
* It is possible that between 1 and 2, the page is removed then the exact same
* page is inserted into the same position in pagecache. That's OK: the
- * old find_get_page using tree_lock could equally have run before or after
+ * old find_get_page using a lock could equally have run before or after
* such a re-insertion, depending on order that locks are granted.
*
* Lookups racing against pagecache insertion isn't a big problem: either 1
diff --git a/mm/filemap.c b/mm/filemap.c
index 693f62212a59..7588b7f1f479 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -66,7 +66,7 @@
* ->i_mmap_rwsem (truncate_pagecache)
* ->private_lock (__free_pte->__set_page_dirty_buffers)
* ->swap_lock (exclusive_swap_page, others)
- * ->mapping->tree_lock
+ * ->mapping->pages.xa_lock
*
* ->i_mutex
* ->i_mmap_rwsem (truncate->unmap_mapping_range)
@@ -74,7 +74,7 @@
* ->mmap_sem
* ->i_mmap_rwsem
* ->page_table_lock or pte_lock (various, mainly in memory.c)
- * ->mapping->tree_lock (arch-dependent flush_dcache_mmap_lock)
+ * ->mapping->pages.xa_lock (arch-dependent flush_dcache_mmap_lock)
*
* ->mmap_sem
* ->lock_page (access_process_vm)
@@ -84,7 +84,7 @@
*
* bdi->wb.list_lock
* sb_lock (fs/fs-writeback.c)
- * ->mapping->tree_lock (__sync_single_inode)
+ * ->mapping->pages.xa_lock (__sync_single_inode)
*
* ->i_mmap_rwsem
* ->anon_vma.lock (vma_adjust)
@@ -95,11 +95,11 @@
* ->page_table_lock or pte_lock
* ->swap_lock (try_to_unmap_one)
* ->private_lock (try_to_unmap_one)
- * ->tree_lock (try_to_unmap_one)
+ * ->pages.xa_lock (try_to_unmap_one)
* ->zone_lru_lock(zone) (follow_page->mark_page_accessed)
* ->zone_lru_lock(zone) (check_pte_range->isolate_lru_page)
* ->private_lock (page_remove_rmap->set_page_dirty)
- * ->tree_lock (page_remove_rmap->set_page_dirty)
+ * ->pages.xa_lock (page_remove_rmap->set_page_dirty)
* bdi.wb->list_lock (page_remove_rmap->set_page_dirty)
* ->inode->i_lock (page_remove_rmap->set_page_dirty)
* ->memcg->move_lock (page_remove_rmap->lock_page_memcg)
@@ -118,14 +118,15 @@ static int page_cache_tree_insert(struct address_space *mapping,
void **slot;
int error;
- error = __radix_tree_create(&mapping->page_tree, page->index, 0,
+ error = __radix_tree_create(&mapping->pages, page->index, 0,
&node, &slot);
if (error)
return error;
if (*slot) {
void *p;
- p = radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ p = radix_tree_deref_slot_protected(slot,
+ &mapping->pages.xa_lock);
if (!radix_tree_exceptional_entry(p))
return -EEXIST;
@@ -133,7 +134,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
if (shadowp)
*shadowp = p;
}
- __radix_tree_replace(&mapping->page_tree, node, slot, page,
+ __radix_tree_replace(&mapping->pages, node, slot, page,
workingset_lookup_update(mapping));
mapping->nrpages++;
return 0;
@@ -155,13 +156,13 @@ static void page_cache_tree_delete(struct address_space *mapping,
struct radix_tree_node *node;
void **slot;
- __radix_tree_lookup(&mapping->page_tree, page->index + i,
+ __radix_tree_lookup(&mapping->pages, page->index + i,
&node, &slot);
VM_BUG_ON_PAGE(!node && nr != 1, page);
- radix_tree_clear_tags(&mapping->page_tree, node, slot);
- __radix_tree_replace(&mapping->page_tree, node, slot, shadow,
+ radix_tree_clear_tags(&mapping->pages, node, slot);
+ __radix_tree_replace(&mapping->pages, node, slot, shadow,
workingset_lookup_update(mapping));
}
@@ -253,7 +254,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
/*
* Delete a page from the page cache and free it. Caller has to make
* sure the page is locked and that nobody else uses it - or that usage
- * is safe. The caller must hold the mapping's tree_lock.
+ * is safe. The caller must hold the xa_lock.
*/
void __delete_from_page_cache(struct page *page, void *shadow)
{
@@ -296,9 +297,9 @@ void delete_from_page_cache(struct page *page)
unsigned long flags;
BUG_ON(!PageLocked(page));
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
__delete_from_page_cache(page, NULL);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
page_cache_free_page(mapping, page);
}
@@ -309,14 +310,14 @@ EXPORT_SYMBOL(delete_from_page_cache);
* @mapping: the mapping to which pages belong
* @pvec: pagevec with pages to delete
*
- * The function walks over mapping->page_tree and removes pages passed in @pvec
- * from the radix tree. The function expects @pvec to be sorted by page index.
- * It tolerates holes in @pvec (radix tree entries at those indices are not
+ * The function walks over mapping->pages and removes pages passed in @pvec
+ * from the mapping. The function expects @pvec to be sorted by page index.
+ * It tolerates holes in @pvec (mapping entries at those indices are not
* modified). The function expects only THP head pages to be present in the
- * @pvec and takes care to delete all corresponding tail pages from the radix
- * tree as well.
+ * @pvec and takes care to delete all corresponding tail pages from the
+ * mapping as well.
*
- * The function expects mapping->tree_lock to be held.
+ * The function expects xa_lock to be held.
*/
static void
page_cache_tree_delete_batch(struct address_space *mapping,
@@ -330,11 +331,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
pgoff_t start;
start = pvec->pages[0]->index;
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (i >= pagevec_count(pvec) && !tail_pages)
break;
page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (radix_tree_exceptional_entry(page))
continue;
if (!tail_pages) {
@@ -357,8 +358,8 @@ page_cache_tree_delete_batch(struct address_space *mapping,
} else {
tail_pages--;
}
- radix_tree_clear_tags(&mapping->page_tree, iter.node, slot);
- __radix_tree_replace(&mapping->page_tree, iter.node, slot, NULL,
+ radix_tree_clear_tags(&mapping->pages, iter.node, slot);
+ __radix_tree_replace(&mapping->pages, iter.node, slot, NULL,
workingset_lookup_update(mapping));
total_pages++;
}
@@ -374,14 +375,14 @@ void delete_from_page_cache_batch(struct address_space *mapping,
if (!pagevec_count(pvec))
return;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
for (i = 0; i < pagevec_count(pvec); i++) {
trace_mm_filemap_delete_from_page_cache(pvec->pages[i]);
unaccount_page_cache_page(mapping, pvec->pages[i]);
}
page_cache_tree_delete_batch(mapping, pvec);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
for (i = 0; i < pagevec_count(pvec); i++)
page_cache_free_page(mapping, pvec->pages[i]);
@@ -798,7 +799,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
new->mapping = mapping;
new->index = offset;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
__delete_from_page_cache(old, NULL);
error = page_cache_tree_insert(mapping, new, NULL);
BUG_ON(error);
@@ -810,7 +811,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
__inc_node_page_state(new, NR_FILE_PAGES);
if (PageSwapBacked(new))
__inc_node_page_state(new, NR_SHMEM);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
mem_cgroup_migrate(old, new);
radix_tree_preload_end();
if (freepage)
@@ -852,7 +853,7 @@ static int __add_to_page_cache_locked(struct page *page,
page->mapping = mapping;
page->index = offset;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
error = page_cache_tree_insert(mapping, page, shadowp);
radix_tree_preload_end();
if (unlikely(error))
@@ -861,7 +862,7 @@ static int __add_to_page_cache_locked(struct page *page,
/* hugetlb pages do not participate in page cache accounting. */
if (!huge)
__inc_node_page_state(page, NR_FILE_PAGES);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_commit_charge(page, memcg, false, false);
trace_mm_filemap_add_to_page_cache(page);
@@ -869,7 +870,7 @@ static int __add_to_page_cache_locked(struct page *page,
err_insert:
page->mapping = NULL;
/* Leave page->index set: truncation relies upon it */
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
@@ -1353,7 +1354,7 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
for (i = 0; i < max_scan; i++) {
struct page *page;
- page = radix_tree_lookup(&mapping->page_tree, index);
+ page = radix_tree_lookup(&mapping->pages, index);
if (!page || radix_tree_exceptional_entry(page))
break;
index++;
@@ -1394,7 +1395,7 @@ pgoff_t page_cache_prev_hole(struct address_space *mapping,
for (i = 0; i < max_scan; i++) {
struct page *page;
- page = radix_tree_lookup(&mapping->page_tree, index);
+ page = radix_tree_lookup(&mapping->pages, index);
if (!page || radix_tree_exceptional_entry(page))
break;
index--;
@@ -1427,7 +1428,7 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
rcu_read_lock();
repeat:
page = NULL;
- pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
+ pagep = radix_tree_lookup_slot(&mapping->pages, offset);
if (pagep) {
page = radix_tree_deref_slot(pagep);
if (unlikely(!page))
@@ -1633,7 +1634,7 @@ unsigned find_get_entries(struct address_space *mapping,
return 0;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -1710,7 +1711,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
return 0;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, *start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
struct page *head, *page;
if (iter.index > end)
@@ -1795,7 +1796,7 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
return 0;
rcu_read_lock();
- radix_tree_for_each_contig(slot, &mapping->page_tree, &iter, index) {
+ radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -1875,8 +1876,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
return 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree,
- &iter, *index, tag) {
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
struct page *head, *page;
if (iter.index > end)
@@ -1969,8 +1969,7 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
return 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree,
- &iter, start, tag) {
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -2624,8 +2623,7 @@ void filemap_map_pages(struct vm_fault *vmf,
struct page *head, *page;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
- start_pgoff) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
if (iter.index > end_pgoff)
break;
repeat:
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 87ab9b8f56b5..4b60f55f1f8b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2450,7 +2450,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
} else {
/* Additional pin to radix tree */
page_ref_add(head, 2);
- spin_unlock(&head->mapping->tree_lock);
+ xa_unlock(&head->mapping->pages);
}
spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
@@ -2658,15 +2658,15 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
if (mapping) {
void **pslot;
- spin_lock(&mapping->tree_lock);
- pslot = radix_tree_lookup_slot(&mapping->page_tree,
+ xa_lock(&mapping->pages);
+ pslot = radix_tree_lookup_slot(&mapping->pages,
page_index(head));
/*
* Check if the head page is present in radix tree.
* We assume all tail are present too, if head is there.
*/
if (radix_tree_deref_slot_protected(pslot,
- &mapping->tree_lock) != head)
+ &mapping->pages.xa_lock) != head)
goto fail;
}
@@ -2700,7 +2700,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
}
spin_unlock(&pgdata->split_queue_lock);
fail: if (mapping)
- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
unfreeze_page(head);
ret = -EBUSY;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b7e2268dfc9a..5800093fe94a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1339,8 +1339,8 @@ static void collapse_shmem(struct mm_struct *mm,
*/
index = start;
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
int n = min(iter.index, end) - index;
/*
@@ -1353,7 +1353,7 @@ static void collapse_shmem(struct mm_struct *mm,
}
nr_none += n;
for (; index < min(iter.index, end); index++) {
- radix_tree_insert(&mapping->page_tree, index,
+ radix_tree_insert(&mapping->pages, index,
new_page + (index % HPAGE_PMD_NR));
}
@@ -1362,16 +1362,16 @@ static void collapse_shmem(struct mm_struct *mm,
break;
page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (radix_tree_exceptional_entry(page) || !PageUptodate(page)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
SGP_NOHUGE)) {
result = SCAN_FAIL;
goto tree_unlocked;
}
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
} else if (trylock_page(page)) {
get_page(page);
} else {
@@ -1380,7 +1380,7 @@ static void collapse_shmem(struct mm_struct *mm,
}
/*
- * The page must be locked, so we can drop the tree_lock
+ * The page must be locked, so we can drop the xa_lock
* without racing with truncate.
*/
VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -1391,7 +1391,7 @@ static void collapse_shmem(struct mm_struct *mm,
result = SCAN_TRUNCATED;
goto out_unlock;
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
if (isolate_lru_page(page)) {
result = SCAN_DEL_PAGE_LRU;
@@ -1401,11 +1401,11 @@ static void collapse_shmem(struct mm_struct *mm,
if (page_mapped(page))
unmap_mapping_pages(mapping, index, 1, false);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
- slot = radix_tree_lookup_slot(&mapping->page_tree, index);
+ slot = radix_tree_lookup_slot(&mapping->pages, index);
VM_BUG_ON_PAGE(page != radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock), page);
+ &mapping->pages.xa_lock), page);
VM_BUG_ON_PAGE(page_mapped(page), page);
/*
@@ -1426,14 +1426,14 @@ static void collapse_shmem(struct mm_struct *mm,
list_add_tail(&page->lru, &pagelist);
/* Finally, replace with the new page. */
- radix_tree_replace_slot(&mapping->page_tree, slot,
+ radix_tree_replace_slot(&mapping->pages, slot,
new_page + (index % HPAGE_PMD_NR));
slot = radix_tree_iter_resume(slot, &iter);
index++;
continue;
out_lru:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
putback_lru_page(page);
out_isolate_failed:
unlock_page(page);
@@ -1459,14 +1459,14 @@ static void collapse_shmem(struct mm_struct *mm,
}
for (; index < end; index++) {
- radix_tree_insert(&mapping->page_tree, index,
+ radix_tree_insert(&mapping->pages, index,
new_page + (index % HPAGE_PMD_NR));
}
nr_none += n;
}
tree_locked:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
tree_unlocked:
if (result == SCAN_SUCCEED) {
@@ -1515,9 +1515,8 @@ static void collapse_shmem(struct mm_struct *mm,
} else {
/* Something went wrong: rollback changes to the radix-tree */
shmem_uncharge(mapping->host, nr_none);
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
- start) {
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= end)
break;
page = list_first_entry_or_null(&pagelist,
@@ -1527,8 +1526,7 @@ static void collapse_shmem(struct mm_struct *mm,
break;
nr_none--;
/* Put holes back where they were */
- radix_tree_delete(&mapping->page_tree,
- iter.index);
+ radix_tree_delete(&mapping->pages, iter.index);
continue;
}
@@ -1537,16 +1535,15 @@ static void collapse_shmem(struct mm_struct *mm,
/* Unfreeze the page. */
list_del(&page->lru);
page_ref_unfreeze(page, 2);
- radix_tree_replace_slot(&mapping->page_tree,
- slot, page);
+ radix_tree_replace_slot(&mapping->pages, slot, page);
slot = radix_tree_iter_resume(slot, &iter);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
putback_lru_page(page);
unlock_page(page);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
VM_BUG_ON(nr_none);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/* Unfreeze new_page, caller would take care about freeing it */
page_ref_unfreeze(new_page, 1);
@@ -1574,7 +1571,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
swap = 0;
memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= start + HPAGE_PMD_NR)
break;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 670e99b68aa6..d89cb08ac39b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5967,9 +5967,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
/*
* Interrupts should be disabled here because the caller holds the
- * mapping->tree_lock lock which is taken with interrupts-off. It is
+ * mapping->pages xa_lock which is taken with interrupts-off. It is
* important here to have the interrupts disabled because it is the
- * only synchronisation we have for udpating the per-CPU variables.
+ * only synchronisation we have for updating the per-CPU variables.
*/
VM_BUG_ON(!irqs_disabled());
mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
diff --git a/mm/migrate.c b/mm/migrate.c
index 1e5525a25691..184bc1d0e187 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -466,20 +466,21 @@ int migrate_page_move_mapping(struct address_space *mapping,
oldzone = page_zone(page);
newzone = page_zone(newpage);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
- pslot = radix_tree_lookup_slot(&mapping->page_tree,
+ pslot = radix_tree_lookup_slot(&mapping->pages,
page_index(page));
expected_count += 1 + page_has_private(page);
if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_deref_slot_protected(pslot,
+ &mapping->pages.xa_lock) != page) {
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}
if (!page_ref_freeze(page, expected_count)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}
@@ -493,7 +494,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
if (mode == MIGRATE_ASYNC && head &&
!buffer_migrate_lock_buffers(head, mode)) {
page_ref_unfreeze(page, expected_count);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}
@@ -521,7 +522,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
SetPageDirty(newpage);
}
- radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
+ radix_tree_replace_slot(&mapping->pages, pslot, newpage);
/*
* Drop cache reference from old page by unfreezing
@@ -530,7 +531,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
*/
page_ref_unfreeze(page, expected_count - 1);
- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
/* Leave irq disabled to prevent preemption while updating stats */
/*
@@ -573,20 +574,19 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
int expected_count;
void **pslot;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
- pslot = radix_tree_lookup_slot(&mapping->page_tree,
- page_index(page));
+ pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
expected_count = 2 + page_has_private(page);
if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}
if (!page_ref_freeze(page, expected_count)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}
@@ -595,11 +595,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
get_page(newpage);
- radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
+ radix_tree_replace_slot(&mapping->pages, pslot, newpage);
page_ref_unfreeze(page, expected_count - 1);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return MIGRATEPAGE_SUCCESS;
}
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 586f31261c83..588ce729d199 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2099,7 +2099,7 @@ void __init page_writeback_init(void)
* so that it can tag pages faster than a dirtying process can create them).
*/
/*
- * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce tree_lock latency.
+ * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
*/
void tag_pages_for_writeback(struct address_space *mapping,
pgoff_t start, pgoff_t end)
@@ -2109,22 +2109,22 @@ void tag_pages_for_writeback(struct address_space *mapping,
struct radix_tree_iter iter;
void **slot;
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, start,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
PAGECACHE_TAG_DIRTY) {
if (iter.index > end)
break;
- radix_tree_iter_tag_set(&mapping->page_tree, &iter,
+ radix_tree_iter_tag_set(&mapping->pages, &iter,
PAGECACHE_TAG_TOWRITE);
tagged++;
if ((tagged % WRITEBACK_TAG_BATCH) != 0)
continue;
slot = radix_tree_iter_resume(slot, &iter);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
cond_resched();
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}
EXPORT_SYMBOL(tag_pages_for_writeback);
@@ -2467,13 +2467,13 @@ int __set_page_dirty_nobuffers(struct page *page)
return 1;
}
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
BUG_ON(page_mapping(page) != mapping);
WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree, page_index(page),
+ radix_tree_tag_set(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);
if (mapping->host) {
@@ -2718,11 +2718,10 @@ int test_clear_page_writeback(struct page *page)
struct backing_dev_info *bdi = inode_to_bdi(inode);
unsigned long flags;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
ret = TestClearPageWriteback(page);
if (ret) {
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi)) {
struct bdi_writeback *wb = inode_to_wb(inode);
@@ -2736,7 +2735,7 @@ int test_clear_page_writeback(struct page *page)
PAGECACHE_TAG_WRITEBACK))
sb_clear_inode_writeback(mapping->host);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
} else {
ret = TestClearPageWriteback(page);
}
@@ -2766,7 +2765,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
struct backing_dev_info *bdi = inode_to_bdi(inode);
unsigned long flags;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
ret = TestSetPageWriteback(page);
if (!ret) {
bool on_wblist;
@@ -2774,8 +2773,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
on_wblist = mapping_tagged(mapping,
PAGECACHE_TAG_WRITEBACK);
- radix_tree_tag_set(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_set(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi))
inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
@@ -2789,14 +2787,12 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
sb_mark_inode_writeback(mapping->host);
}
if (!PageDirty(page))
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
if (!keep_write)
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_TOWRITE);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
} else {
ret = TestSetPageWriteback(page);
}
@@ -2816,7 +2812,7 @@ EXPORT_SYMBOL(__test_set_page_writeback);
*/
int mapping_tagged(struct address_space *mapping, int tag)
{
- return radix_tree_tagged(&mapping->page_tree, tag);
+ return radix_tree_tagged(&mapping->pages, tag);
}
EXPORT_SYMBOL(mapping_tagged);
diff --git a/mm/readahead.c b/mm/readahead.c
index c4ca70239233..514188fd2489 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -175,7 +175,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
break;
rcu_read_lock();
- page = radix_tree_lookup(&mapping->page_tree, page_offset);
+ page = radix_tree_lookup(&mapping->pages, page_offset);
rcu_read_unlock();
if (page && !radix_tree_exceptional_entry(page))
continue;
diff --git a/mm/rmap.c b/mm/rmap.c
index 47db27f8049e..87c1ca0cf1a3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -32,11 +32,11 @@
* mmlist_lock (in mmput, drain_mmlist and others)
* mapping->private_lock (in __set_page_dirty_buffers)
* mem_cgroup_{begin,end}_page_stat (memcg->move_lock)
- * mapping->tree_lock (widely used)
+ * mapping->pages.xa_lock (widely used)
* inode->i_lock (in set_page_dirty's __mark_inode_dirty)
* bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
* sb_lock (within inode_lock in fs/fs-writeback.c)
- * mapping->tree_lock (widely used, in set_page_dirty,
+ * mapping->pages.xa_lock (widely used, in set_page_dirty,
* in arch-dependent flush_dcache_mmap_lock,
* within bdi.wb->list_lock in __sync_single_inode)
*
diff --git a/mm/shmem.c b/mm/shmem.c
index 1907688b75ee..b2fdc258853d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -332,12 +332,12 @@ static int shmem_radix_tree_replace(struct address_space *mapping,
VM_BUG_ON(!expected);
VM_BUG_ON(!replacement);
- item = __radix_tree_lookup(&mapping->page_tree, index, &node, &pslot);
+ item = __radix_tree_lookup(&mapping->pages, index, &node, &pslot);
if (!item)
return -ENOENT;
if (item != expected)
return -ENOENT;
- __radix_tree_replace(&mapping->page_tree, node, pslot,
+ __radix_tree_replace(&mapping->pages, node, pslot,
replacement, NULL);
return 0;
}
@@ -355,7 +355,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
void *item;
rcu_read_lock();
- item = radix_tree_lookup(&mapping->page_tree, index);
+ item = radix_tree_lookup(&mapping->pages, index);
rcu_read_unlock();
return item == swp_to_radix_entry(swap);
}
@@ -581,14 +581,14 @@ static int shmem_add_to_page_cache(struct page *page,
page->mapping = mapping;
page->index = index;
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
if (PageTransHuge(page)) {
void __rcu **results;
pgoff_t idx;
int i;
error = 0;
- if (radix_tree_gang_lookup_slot(&mapping->page_tree,
+ if (radix_tree_gang_lookup_slot(&mapping->pages,
&results, &idx, index, 1) &&
idx < index + HPAGE_PMD_NR) {
error = -EEXIST;
@@ -596,14 +596,14 @@ static int shmem_add_to_page_cache(struct page *page,
if (!error) {
for (i = 0; i < HPAGE_PMD_NR; i++) {
- error = radix_tree_insert(&mapping->page_tree,
+ error = radix_tree_insert(&mapping->pages,
index + i, page + i);
VM_BUG_ON(error);
}
count_vm_event(THP_FILE_ALLOC);
}
} else if (!expected) {
- error = radix_tree_insert(&mapping->page_tree, index, page);
+ error = radix_tree_insert(&mapping->pages, index, page);
} else {
error = shmem_radix_tree_replace(mapping, index, expected,
page);
@@ -615,10 +615,10 @@ static int shmem_add_to_page_cache(struct page *page,
__inc_node_page_state(page, NR_SHMEM_THPS);
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
} else {
page->mapping = NULL;
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
page_ref_sub(page, nr);
}
return error;
@@ -634,13 +634,13 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
VM_BUG_ON_PAGE(PageCompound(page), page);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
error = shmem_radix_tree_replace(mapping, page->index, page, radswap);
page->mapping = NULL;
mapping->nrpages--;
__dec_node_page_state(page, NR_FILE_PAGES);
__dec_node_page_state(page, NR_SHMEM);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
put_page(page);
BUG_ON(error);
}
@@ -653,9 +653,9 @@ static int shmem_free_swap(struct address_space *mapping,
{
void *old;
- spin_lock_irq(&mapping->tree_lock);
- old = radix_tree_delete_item(&mapping->page_tree, index, radswap);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ old = radix_tree_delete_item(&mapping->pages, index, radswap);
+ xa_unlock_irq(&mapping->pages);
if (old != radswap)
return -ENOENT;
free_swap_and_cache(radix_to_swp_entry(radswap));
@@ -666,7 +666,7 @@ static int shmem_free_swap(struct address_space *mapping,
* Determine (in bytes) how many of the shmem object's pages mapped by the
* given offsets are swapped out.
*
- * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
+ * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
* as long as the inode doesn't go away and racy results are not a problem.
*/
unsigned long shmem_partial_swap_usage(struct address_space *mapping,
@@ -679,7 +679,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= end)
break;
@@ -708,7 +708,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
* Determine (in bytes) how many of the shmem object's pages mapped by the
* given vma is swapped out.
*
- * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
+ * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
* as long as the inode doesn't go away and racy results are not a problem.
*/
unsigned long shmem_swap_usage(struct vm_area_struct *vma)
@@ -1123,7 +1123,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
int error = 0;
radswap = swp_to_radix_entry(swap);
- index = find_swap_entry(&mapping->page_tree, radswap);
+ index = find_swap_entry(&mapping->pages, radswap);
if (index == -1)
return -EAGAIN; /* tell shmem_unuse we found nothing */
@@ -1436,7 +1436,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
hindex = round_down(index, HPAGE_PMD_NR);
rcu_read_lock();
- if (radix_tree_gang_lookup_slot(&mapping->page_tree, &results, &idx,
+ if (radix_tree_gang_lookup_slot(&mapping->pages, &results, &idx,
hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
rcu_read_unlock();
return NULL;
@@ -1549,14 +1549,14 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
* Our caller will very soon move newpage out of swapcache, but it's
* a nice clean interface for us to replace oldpage by newpage there.
*/
- spin_lock_irq(&swap_mapping->tree_lock);
+ xa_lock_irq(&swap_mapping->pages);
error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
newpage);
if (!error) {
__inc_node_page_state(newpage, NR_FILE_PAGES);
__dec_node_page_state(oldpage, NR_FILE_PAGES);
}
- spin_unlock_irq(&swap_mapping->tree_lock);
+ xa_unlock_irq(&swap_mapping->pages);
if (unlikely(error)) {
/*
@@ -2622,7 +2622,7 @@ static void shmem_tag_pins(struct address_space *mapping)
start = 0;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
page = radix_tree_deref_slot(slot);
if (!page || radix_tree_exception(page)) {
if (radix_tree_deref_retry(page)) {
@@ -2630,10 +2630,10 @@ static void shmem_tag_pins(struct address_space *mapping)
continue;
}
} else if (page_count(page) - page_mapcount(page) > 1) {
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_set(&mapping->page_tree, iter.index,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_set(&mapping->pages, iter.index,
SHMEM_TAG_PINNED);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}
if (need_resched()) {
@@ -2665,7 +2665,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)
error = 0;
for (scan = 0; scan <= LAST_SCAN; scan++) {
- if (!radix_tree_tagged(&mapping->page_tree, SHMEM_TAG_PINNED))
+ if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
break;
if (!scan)
@@ -2675,7 +2675,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)
start = 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
start, SHMEM_TAG_PINNED) {
page = radix_tree_deref_slot(slot);
@@ -2701,10 +2701,10 @@ static int shmem_wait_for_pins(struct address_space *mapping)
error = -EBUSY;
}
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_clear(&mapping->page_tree,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_clear(&mapping->pages,
iter.index, SHMEM_TAG_PINNED);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
continue_resched:
if (need_resched()) {
slot = radix_tree_iter_resume(slot, &iter);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 39ae7cfad90f..3f95e8fc4cb2 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -124,10 +124,10 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
SetPageSwapCache(page);
address_space = swap_address_space(entry);
- spin_lock_irq(&address_space->tree_lock);
+ xa_lock_irq(&address_space->pages);
for (i = 0; i < nr; i++) {
set_page_private(page + i, entry.val + i);
- error = radix_tree_insert(&address_space->page_tree,
+ error = radix_tree_insert(&address_space->pages,
idx + i, page + i);
if (unlikely(error))
break;
@@ -145,13 +145,13 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
VM_BUG_ON(error == -EEXIST);
set_page_private(page + i, 0UL);
while (i--) {
- radix_tree_delete(&address_space->page_tree, idx + i);
+ radix_tree_delete(&address_space->pages, idx + i);
set_page_private(page + i, 0UL);
}
ClearPageSwapCache(page);
page_ref_sub(page, nr);
}
- spin_unlock_irq(&address_space->tree_lock);
+ xa_unlock_irq(&address_space->pages);
return error;
}
@@ -188,7 +188,7 @@ void __delete_from_swap_cache(struct page *page)
address_space = swap_address_space(entry);
idx = swp_offset(entry);
for (i = 0; i < nr; i++) {
- radix_tree_delete(&address_space->page_tree, idx + i);
+ radix_tree_delete(&address_space->pages, idx + i);
set_page_private(page + i, 0);
}
ClearPageSwapCache(page);
@@ -272,9 +272,9 @@ void delete_from_swap_cache(struct page *page)
entry.val = page_private(page);
address_space = swap_address_space(entry);
- spin_lock_irq(&address_space->tree_lock);
+ xa_lock_irq(&address_space->pages);
__delete_from_swap_cache(page);
- spin_unlock_irq(&address_space->tree_lock);
+ xa_unlock_irq(&address_space->pages);
put_swap_page(page, entry);
page_ref_sub(page, hpage_nr_pages(page));
@@ -612,12 +612,11 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
return -ENOMEM;
for (i = 0; i < nr; i++) {
space = spaces + i;
- INIT_RADIX_TREE(&space->page_tree, GFP_ATOMIC|__GFP_NOWARN);
+ INIT_RADIX_TREE(&space->pages, GFP_ATOMIC|__GFP_NOWARN);
atomic_set(&space->i_mmap_writable, 0);
space->a_ops = &swap_aops;
/* swap cache doesn't use writeback related tags */
mapping_set_no_writeback_tags(space);
- spin_lock_init(&space->tree_lock);
}
nr_swapper_spaces[type] = nr;
rcu_assign_pointer(swapper_spaces[type], spaces);
diff --git a/mm/truncate.c b/mm/truncate.c
index c34e2fd4f583..295a33a06fac 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -36,11 +36,11 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
struct radix_tree_node *node;
void **slot;
- if (!__radix_tree_lookup(&mapping->page_tree, index, &node, &slot))
+ if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
return;
if (*slot != entry)
return;
- __radix_tree_replace(&mapping->page_tree, node, slot, NULL,
+ __radix_tree_replace(&mapping->pages, node, slot, NULL,
workingset_update_node);
mapping->nrexceptional--;
}
@@ -48,9 +48,9 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
void *entry)
{
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
__clear_shadow_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}
/*
@@ -79,7 +79,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
dax = dax_mapping(mapping);
lock = !dax && indices[j] < end;
if (lock)
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
for (i = j; i < pagevec_count(pvec); i++) {
struct page *page = pvec->pages[i];
@@ -102,7 +102,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
}
if (lock)
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
pvec->nr = j;
}
@@ -518,8 +518,8 @@ void truncate_inode_pages_final(struct address_space *mapping)
* modification that does not see AS_EXITING is
* completed before starting the final truncate.
*/
- spin_lock_irq(&mapping->tree_lock);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ xa_unlock_irq(&mapping->pages);
truncate_inode_pages(mapping, 0);
}
@@ -627,13 +627,13 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
return 0;
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
if (PageDirty(page))
goto failed;
BUG_ON(page_has_private(page));
__delete_from_page_cache(page, NULL);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
if (mapping->a_ops->freepage)
mapping->a_ops->freepage(page);
@@ -641,7 +641,7 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
put_page(page); /* pagecache ref */
return 1;
failed:
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
return 0;
}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 444749669187..93f4b4634431 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -656,7 +656,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
BUG_ON(!PageLocked(page));
BUG_ON(mapping != page_mapping(page));
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
/*
* The non racy check for a busy page.
*
@@ -680,7 +680,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
* load is not satisfied before that of page->_refcount.
*
* Note that if SetPageDirty is always performed via set_page_dirty,
- * and thus under tree_lock, then this ordering is not required.
+ * and thus under xa_lock, then this ordering is not required.
*/
if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
refcount = 1 + HPAGE_PMD_NR;
@@ -698,7 +698,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
swp_entry_t swap = { .val = page_private(page) };
mem_cgroup_swapout(page, swap);
__delete_from_swap_cache(page);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
put_swap_page(page, swap);
} else {
void (*freepage)(struct page *);
@@ -719,13 +719,13 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
* only page cache pages found in these are zero pages
* covering holes, and because we don't want to mix DAX
* exceptional entries and shadow exceptional entries in the
- * same page_tree.
+ * same address_space.
*/
if (reclaimed && page_is_file_cache(page) &&
!mapping_exiting(mapping) && !dax_mapping(mapping))
shadow = workingset_eviction(mapping, page);
__delete_from_page_cache(page, shadow);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
if (freepage != NULL)
freepage(page);
@@ -734,7 +734,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
return 1;
cannot_free:
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
return 0;
}
diff --git a/mm/workingset.c b/mm/workingset.c
index b7d616a3bbbe..3cb3586181e6 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -202,7 +202,7 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
* @mapping: address space the page was backing
* @page: the page being evicted
*
- * Returns a shadow entry to be stored in @mapping->page_tree in place
+ * Returns a shadow entry to be stored in @mapping->pages in place
* of the evicted @page so that a later refault can be detected.
*/
void *workingset_eviction(struct address_space *mapping, struct page *page)
@@ -348,7 +348,7 @@ void workingset_update_node(struct radix_tree_node *node)
*
* Avoid acquiring the list_lru lock when the nodes are
* already where they should be. The list_empty() test is safe
- * as node->private_list is protected by &mapping->tree_lock.
+ * as node->private_list is protected by mapping->pages.xa_lock.
*/
if (node->count && node->count == node->exceptional) {
if (list_empty(&node->private_list))
@@ -366,7 +366,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
unsigned long nodes;
unsigned long cache;
- /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
+ /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
local_irq_disable();
nodes = list_lru_shrink_count(&shadow_nodes, sc);
local_irq_enable();
@@ -419,21 +419,21 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
/*
* Page cache insertions and deletions synchroneously maintain
- * the shadow node LRU under the mapping->tree_lock and the
+ * the shadow node LRU under the mapping->pages.xa_lock and the
* lru_lock. Because the page cache tree is emptied before
* the inode can be destroyed, holding the lru_lock pins any
* address_space that has radix tree nodes on the LRU.
*
- * We can then safely transition to the mapping->tree_lock to
+ * We can then safely transition to the mapping->pages.xa_lock to
* pin only the address_space of the particular node we want
* to reclaim, take the node off-LRU, and drop the lru_lock.
*/
node = container_of(item, struct radix_tree_node, private_list);
- mapping = container_of(node->root, struct address_space, page_tree);
+ mapping = container_of(node->root, struct address_space, pages);
/* Coming from the list, invert the lock order */
- if (!spin_trylock(&mapping->tree_lock)) {
+ if (!xa_trylock(&mapping->pages)) {
spin_unlock(lru_lock);
ret = LRU_RETRY;
goto out;
@@ -468,11 +468,11 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
if (WARN_ON_ONCE(node->exceptional))
goto out_invalid;
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
- __radix_tree_delete_node(&mapping->page_tree, node,
+ __radix_tree_delete_node(&mapping->pages, node,
workingset_lookup_update(mapping));
out_invalid:
- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
ret = LRU_REMOVED_RETRY;
out:
local_irq_enable();
@@ -487,7 +487,7 @@ static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
{
unsigned long ret;
- /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
+ /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
local_irq_disable();
ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
local_irq_enable();
@@ -503,7 +503,7 @@ static struct shrinker workingset_shadow_shrinker = {
/*
* Our list_lru->lock is IRQ-safe as it nests inside the IRQ-safe
- * mapping->tree_lock.
+ * mapping->pages.xa_lock.
*/
static struct lock_class_key shadow_nodes_key;
--
2.16.1
From: Matthew Wilcox <[email protected]>
Introduce xarray value entries to replace the radix tree exceptional
entry code. This is a slight change in encoding to allow the use of an
extra bit (we can now store BITS_PER_LONG - 1 bits in a value entry).
It is also a change in emphasis; exceptional entries are intimidating
and different. As the comment explains, you can choose to store values
or pointers in the XArray and they are both first-class citizens.
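
To make the new encoding concrete, here is a minimal user-space sketch of
what a value entry looks like. The demo_* names are invented for this
illustration and are not part of the kernel API; they mirror the
xa_mk_value()/xa_to_value()/xa_is_value() helpers this patch introduces,
assuming the decode direction is simply the inverse shift:

#include <assert.h>
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Encode an integer as a value entry: shift up by one, set bit 0. */
static void *demo_mk_value(unsigned long v)
{
	assert(v < (1UL << (BITS_PER_LONG - 1)));	/* BITS_PER_LONG - 1 usable bits */
	return (void *)((v << 1) | 1);
}

/* Decode a value entry back to the integer it stores. */
static unsigned long demo_to_value(const void *entry)
{
	return (unsigned long)entry >> 1;
}

/* Pointers stored in the array are word-aligned, so bit 0 is clear for them. */
static int demo_is_value(const void *entry)
{
	return (unsigned long)entry & 1;
}

int main(void)
{
	void *entry = demo_mk_value(42);

	printf("is_value=%d value=%lu\n", demo_is_value(entry),
	       demo_to_value(entry));	/* prints: is_value=1 value=42 */
	return 0;
}

Because the tag is a single low bit rather than the old two-bit exceptional
encoding, a value entry keeps BITS_PER_LONG - 1 bits of payload, which is
where the extra bit mentioned above comes from.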
Signed-off-by: Matthew Wilcox <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +-
arch/powerpc/include/asm/nohash/64/pgtable.h | 4 +-
drivers/gpu/drm/i915/i915_gem.c | 17 ++--
drivers/staging/lustre/lustre/mdc/mdc_request.c | 2 +-
fs/btrfs/compression.c | 2 +-
fs/btrfs/inode.c | 4 +-
fs/dax.c | 107 ++++++++++++------------
fs/proc/task_mmu.c | 2 +-
include/linux/fs.h | 48 +++++++----
include/linux/radix-tree.h | 36 ++------
include/linux/swapops.h | 19 ++---
include/linux/xarray.h | 54 ++++++++++++
lib/idr.c | 63 ++++++--------
lib/radix-tree.c | 21 ++---
mm/filemap.c | 10 +--
mm/khugepaged.c | 2 +-
mm/madvise.c | 2 +-
mm/memcontrol.c | 2 +-
mm/mincore.c | 2 +-
mm/readahead.c | 2 +-
mm/shmem.c | 10 +--
mm/swap.c | 2 +-
mm/truncate.c | 12 +--
mm/workingset.c | 12 ++-
tools/testing/radix-tree/idr-test.c | 6 +-
tools/testing/radix-tree/linux/radix-tree.h | 1 +
tools/testing/radix-tree/multiorder.c | 47 +++++------
tools/testing/radix-tree/test.c | 2 +-
28 files changed, 259 insertions(+), 236 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 51017726d495..39b01a143944 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -707,9 +707,7 @@ static inline bool pte_user(pte_t pte)
BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY); \
} while (0)
-/*
- * on pte we don't need handle RADIX_TREE_EXCEPTIONAL_SHIFT;
- */
+
#define SWP_TYPE_BITS 5
#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
& ((1UL << SWP_TYPE_BITS) - 1))
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index abddf5830ad5..f711773568d7 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -329,9 +329,7 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
*/ \
BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
} while (0)
-/*
- * on pte we don't need handle RADIX_TREE_EXCEPTIONAL_SHIFT;
- */
+
#define SWP_TYPE_BITS 5
#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
& ((1UL << SWP_TYPE_BITS) - 1))
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dd89abd2263d..796a0db11b5d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5676,7 +5676,8 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
count = __sg_page_count(sg);
while (idx + count <= n) {
- unsigned long exception, i;
+ void *entry;
+ unsigned long i;
int ret;
/* If we cannot allocate and insert this entry, or the
@@ -5691,12 +5692,9 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
if (ret && ret != -EEXIST)
goto scan;
- exception =
- RADIX_TREE_EXCEPTIONAL_ENTRY |
- idx << RADIX_TREE_EXCEPTIONAL_SHIFT;
+ entry = xa_mk_value(idx);
for (i = 1; i < count; i++) {
- ret = radix_tree_insert(&iter->radix, idx + i,
- (void *)exception);
+ ret = radix_tree_insert(&iter->radix, idx + i, entry);
if (ret && ret != -EEXIST)
goto scan;
}
@@ -5734,15 +5732,14 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
GEM_BUG_ON(!sg);
/* If this index is in the middle of multi-page sg entry,
- * the radixtree will contain an exceptional entry that points
+ * the radix tree will contain a value entry that points
* to the start of that range. We will return the pointer to
* the base page and the offset of this page within the
* sg entry's range.
*/
*offset = 0;
- if (unlikely(radix_tree_exception(sg))) {
- unsigned long base =
- (unsigned long)sg >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ if (unlikely(xa_is_value(sg))) {
+ unsigned long base = xa_to_value(sg);
sg = radix_tree_lookup(&iter->radix, base);
GEM_BUG_ON(!sg);
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 45dcf9f958d4..2ec79a6b17da 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -940,7 +940,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
xa_lock_irq(&mapping->pages);
found = radix_tree_gang_lookup(&mapping->pages,
(void **)&page, offset, 1);
- if (found > 0 && !radix_tree_exceptional_entry(page)) {
+ if (found > 0 && !xa_is_value(page)) {
struct lu_dirpage *dp;
get_page(page);
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 0e35aa6aa2f1..9fa8617c7344 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -460,7 +460,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
rcu_read_lock();
page = radix_tree_lookup(&mapping->pages, pg_index);
rcu_read_unlock();
- if (page && !radix_tree_exceptional_entry(page)) {
+ if (page && !xa_is_value(page)) {
misses++;
if (misses > 4)
break;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d0016c1c7b04..fbb4291fcc57 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7462,8 +7462,8 @@ bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
}
/*
* Otherwise, shmem/tmpfs must be storing a swap entry
- * here as an exceptional entry: so return it without
- * attempting to raise page count.
+ * here so return it without attempting to raise page
+ * count.
*/
page = NULL;
break; /* TODO: Is this relevant for this use case? */
diff --git a/fs/dax.c b/fs/dax.c
index cac580399ed4..61cb25c8b9fd 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -59,57 +59,57 @@ static int __init init_dax_wait_table(void)
fs_initcall(init_dax_wait_table);
/*
- * We use lowest available bit in exceptional entry for locking, one bit for
- * the entry size (PMD) and two more to tell us if the entry is a zero page or
- * an empty entry that is just used for locking. In total four special bits.
+ * DAX pagecache entries use XArray value entries so they can't be mistaken
+ * for pages. We use one bit for locking, one bit for the entry size (PMD)
+ * and two more to tell us if the entry is a zero page or an empty entry that
+ * is just used for locking. In total four special bits.
*
* If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
* and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
* block allocation.
*/
-#define RADIX_DAX_SHIFT (RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
-#define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
-#define RADIX_DAX_PMD (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
-#define RADIX_DAX_ZERO_PAGE (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
-#define RADIX_DAX_EMPTY (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
+#define DAX_SHIFT (4)
+#define DAX_ENTRY_LOCK (1UL << 0)
+#define DAX_PMD (1UL << 1)
+#define DAX_ZERO_PAGE (1UL << 2)
+#define DAX_EMPTY (1UL << 3)
static unsigned long dax_radix_sector(void *entry)
{
- return (unsigned long)entry >> RADIX_DAX_SHIFT;
+ return xa_to_value(entry) >> DAX_SHIFT;
}
static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
{
- return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
- ((unsigned long)sector << RADIX_DAX_SHIFT) |
- RADIX_DAX_ENTRY_LOCK);
+ return xa_mk_value(flags | ((unsigned long)sector << DAX_SHIFT) |
+ DAX_ENTRY_LOCK);
}
static unsigned int dax_radix_order(void *entry)
{
- if ((unsigned long)entry & RADIX_DAX_PMD)
+ if (xa_to_value(entry) & DAX_PMD)
return PMD_SHIFT - PAGE_SHIFT;
return 0;
}
static int dax_is_pmd_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_PMD;
+ return xa_to_value(entry) & DAX_PMD;
}
static int dax_is_pte_entry(void *entry)
{
- return !((unsigned long)entry & RADIX_DAX_PMD);
+ return !(xa_to_value(entry) & DAX_PMD);
}
static int dax_is_zero_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_ZERO_PAGE;
+ return xa_to_value(entry) & DAX_ZERO_PAGE;
}
static int dax_is_empty_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_EMPTY;
+ return xa_to_value(entry) & DAX_EMPTY;
}
/*
@@ -186,9 +186,9 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
*/
static inline int slot_locked(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
- return entry & RADIX_DAX_ENTRY_LOCK;
+ unsigned long entry = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ return entry & DAX_ENTRY_LOCK;
}
/*
@@ -196,12 +196,11 @@ static inline int slot_locked(struct address_space *mapping, void **slot)
*/
static inline void *lock_slot(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
-
- entry |= RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
- return (void *)entry;
+ unsigned long v = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ void *entry = xa_mk_value(v | DAX_ENTRY_LOCK);
+ radix_tree_replace_slot(&mapping->pages, slot, entry);
+ return entry;
}
/*
@@ -209,17 +208,16 @@ static inline void *lock_slot(struct address_space *mapping, void **slot)
*/
static inline void *unlock_slot(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
-
- entry &= ~(unsigned long)RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
- return (void *)entry;
+ unsigned long v = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ void *entry = xa_mk_value(v & ~DAX_ENTRY_LOCK);
+ radix_tree_replace_slot(&mapping->pages, slot, entry);
+ return entry;
}
/*
* Lookup entry in radix tree, wait for it to become unlocked if it is
- * exceptional entry and return it. The caller must call
+ * a DAX entry and return it. The caller must call
* put_unlocked_mapping_entry() when he decided not to lock the entry or
* put_locked_mapping_entry() when he locked the entry and now wants to
* unlock it.
@@ -240,7 +238,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
entry = __radix_tree_lookup(&mapping->pages, index, NULL,
&slot);
if (!entry ||
- WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
+ WARN_ON_ONCE(!xa_is_value(entry)) ||
!slot_locked(mapping, slot)) {
if (slotp)
*slotp = slot;
@@ -264,7 +262,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
xa_lock_irq(&mapping->pages);
entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
- if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
+ if (WARN_ON_ONCE(!entry || !xa_is_value(entry) ||
!slot_locked(mapping, slot))) {
xa_unlock_irq(&mapping->pages);
return;
@@ -295,12 +293,11 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
}
/*
- * Find radix tree entry at given index. If it points to an exceptional entry,
- * return it with the radix tree entry locked. If the radix tree doesn't
- * contain given index, create an empty exceptional entry for the index and
- * return with it locked.
+ * Find radix tree entry at given index. If it is a DAX entry, return it
+ * with the radix tree entry locked. If the radix tree doesn't contain the
+ * given index, create an empty entry for the index and return with it locked.
*
- * When requesting an entry with size RADIX_DAX_PMD, grab_mapping_entry() will
+ * When requesting an entry with size DAX_PMD, grab_mapping_entry() will
* either return that locked entry or will return an error. This error will
* happen if there are any 4k entries within the 2MiB range that we are
* requesting.
@@ -330,13 +327,13 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);
- if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
+ if (WARN_ON_ONCE(entry && !xa_is_value(entry))) {
entry = ERR_PTR(-EIO);
goto out_unlock;
}
if (entry) {
- if (size_flag & RADIX_DAX_PMD) {
+ if (size_flag & DAX_PMD) {
if (dax_is_pte_entry(entry)) {
put_unlocked_mapping_entry(mapping, index,
entry);
@@ -406,7 +403,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
true);
}
- entry = dax_radix_locked_entry(0, size_flag | RADIX_DAX_EMPTY);
+ entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);
err = __radix_tree_insert(&mapping->pages, index,
dax_radix_order(entry), entry);
@@ -443,7 +440,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, NULL);
- if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
goto out;
if (!trunc &&
(radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
@@ -458,8 +455,8 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
return ret;
}
/*
- * Delete exceptional DAX entry at @index from @mapping. Wait for radix tree
- * entry to get unlocked before deleting it.
+ * Delete DAX entry at @index from @mapping. Wait for it
+ * to be unlocked before deleting it.
*/
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
{
@@ -469,7 +466,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
* This gets called from truncate / punch_hole path. As such, the caller
* must hold locks protecting against concurrent modifications of the
* radix tree (usually fs-private i_mmap_sem for writing). Since the
- * caller has seen exceptional entry for this index, we better find it
+ * caller has seen a DAX entry for this index, we better find it
* at that index as well...
*/
WARN_ON_ONCE(!ret);
@@ -477,7 +474,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
}
/*
- * Invalidate exceptional DAX entry if it is clean.
+ * Invalidate DAX entry if it is clean.
*/
int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
pgoff_t index)
@@ -531,7 +528,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
if (dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
- if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_ZERO_PAGE)) {
+ if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
/* we are replacing a zero page with block mapping */
if (dax_is_pmd_entry(entry))
unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
@@ -668,13 +665,13 @@ static int dax_writeback_one(struct block_device *bdev,
* A page got tagged dirty in DAX mapping? Something is seriously
* wrong.
*/
- if (WARN_ON(!radix_tree_exceptional_entry(entry)))
+ if (WARN_ON(!xa_is_value(entry)))
return -EIO;
xa_lock_irq(&mapping->pages);
entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
/* Entry got punched out / reallocated? */
- if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
+ if (!entry2 || WARN_ON_ONCE(!xa_is_value(entry2)))
goto put_unlocked;
/*
* Entry got reallocated elsewhere? No need to writeback. We have to
@@ -880,7 +877,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
}
entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
- RADIX_DAX_ZERO_PAGE, false);
+ DAX_ZERO_PAGE, false);
if (IS_ERR(entry2)) {
ret = VM_FAULT_SIGBUS;
goto out;
@@ -1282,7 +1279,7 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
goto fallback;
ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
- RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE, false);
+ DAX_PMD | DAX_ZERO_PAGE, false);
if (IS_ERR(ret))
goto fallback;
@@ -1367,7 +1364,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* is already in the tree, for instance), it will return -EEXIST and
* we just fall back to 4k entries.
*/
- entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD);
+ entry = grab_mapping_entry(mapping, pgoff, DAX_PMD);
if (IS_ERR(entry))
goto fallback;
@@ -1406,7 +1403,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
entry = dax_insert_mapping_entry(mapping, vmf, entry,
dax_iomap_sector(&iomap, pos),
- RADIX_DAX_PMD, write && !sync);
+ DAX_PMD, write && !sync);
if (IS_ERR(entry))
goto finish_iomap;
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ec6d2983a5cb..43c8a3ff8ce7 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -558,7 +558,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
if (!page)
return;
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
mss->swap += PAGE_SIZE;
else
put_page(page);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e227f68e0418..b9ea6961947a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -389,23 +389,41 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata);
+/**
+ * struct address_space - Contents of a cachable, mappable object
+ *
+ * @host: Owner, either the inode or the block_device
+ * @pages: Cached pages
+ * @gfp_mask: Memory allocation flags to use for allocating pages
+ * @i_mmap_writable: count VM_SHARED mappings
+ * @i_mmap: tree of private and shared mappings
+ * @i_mmap_rwsem: Protects @i_mmap and @i_mmap_writable
+ * @nrpages: Number of total pages, protected by pages.xa_lock
+ * @nrexceptional: Shadow or DAX entries, protected by pages.xa_lock
+ * @writeback_index: writeback starts here
+ * @a_ops: methods
+ * @flags: Error bits and flags (AS_*)
+ * @wb_err: The most recent error which has occurred
+ * @private_lock: For use by the owner of the address_space
+ * @private_list: For use by the owner of the address space
+ * @private_data: For use by the owner of the address space
+ */
struct address_space {
- struct inode *host; /* owner: inode, block_device */
- struct radix_tree_root pages; /* cached pages */
- gfp_t gfp_mask; /* for allocating pages */
- atomic_t i_mmap_writable;/* count VM_SHARED mappings */
- struct rb_root_cached i_mmap; /* tree of private and shared mappings */
- struct rw_semaphore i_mmap_rwsem; /* protect tree, count, list */
- /* Protected by pages.xa_lock */
- unsigned long nrpages; /* number of total pages */
- unsigned long nrexceptional; /* shadow or DAX entries */
- pgoff_t writeback_index;/* writeback starts here */
- const struct address_space_operations *a_ops; /* methods */
- unsigned long flags; /* error bits */
+ struct inode *host;
+ struct radix_tree_root pages;
+ gfp_t gfp_mask;
+ atomic_t i_mmap_writable;
+ struct rb_root_cached i_mmap;
+ struct rw_semaphore i_mmap_rwsem;
+ unsigned long nrpages;
+ unsigned long nrexceptional;
+ pgoff_t writeback_index;
+ const struct address_space_operations *a_ops;
+ unsigned long flags;
errseq_t wb_err;
- spinlock_t private_lock; /* for use by the address_space */
- struct list_head private_list; /* ditto */
- void *private_data; /* ditto */
+ spinlock_t private_lock;
+ struct list_head private_list;
+ void *private_data;
} __attribute__((aligned(sizeof(long)))) __randomize_layout;
/*
* On most architectures that alignment is already the case; but
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 34149e8b5f73..87f35fe00e55 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -28,34 +28,26 @@
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>
+#include <linux/xarray.h>
/*
* The bottom two bits of the slot determine how the remaining bits in the
* slot are interpreted:
*
* 00 - data pointer
- * 01 - internal entry
- * 10 - exceptional entry
- * 11 - this bit combination is currently unused/reserved
+ * 10 - internal entry
+ * x1 - value entry
*
* The internal entry may be a pointer to the next level in the tree, a
* sibling entry, or an indicator that the entry in this slot has been moved
* to another location in the tree and the lookup should be restarted. While
* NULL fits the 'data pointer' pattern, it means that there is no entry in
* the tree for this index (no matter what level of the tree it is found at).
- * This means that you cannot store NULL in the tree as a value for the index.
+ * This means that storing a NULL entry in the tree is the same as deleting
+ * the entry from the tree.
*/
#define RADIX_TREE_ENTRY_MASK 3UL
-#define RADIX_TREE_INTERNAL_NODE 1UL
-
-/*
- * Most users of the radix tree store pointers but shmem/tmpfs stores swap
- * entries in the same tree. They are marked as exceptional entries to
- * distinguish them from pointers to struct page.
- * EXCEPTIONAL_ENTRY tests the bit, EXCEPTIONAL_SHIFT shifts content past it.
- */
-#define RADIX_TREE_EXCEPTIONAL_ENTRY 2
-#define RADIX_TREE_EXCEPTIONAL_SHIFT 2
+#define RADIX_TREE_INTERNAL_NODE 2UL
static inline bool radix_tree_is_internal_node(void *ptr)
{
@@ -83,11 +75,10 @@ static inline bool radix_tree_is_internal_node(void *ptr)
/*
* @count is the count of every non-NULL element in the ->slots array
- * whether that is an exceptional entry, a retry entry, a user pointer,
+ * whether that is a data entry, a retry entry, a user pointer,
* a sibling entry or a pointer to the next level of the tree.
* @exceptional is the count of every element in ->slots which is
- * either radix_tree_exceptional_entry() or is a sibling entry for an
- * exceptional entry.
+ * either a data entry or a sibling entry for data.
*/
struct radix_tree_node {
unsigned char shift; /* Bits remaining in each slot */
@@ -268,17 +259,6 @@ static inline int radix_tree_deref_retry(void *arg)
return unlikely(radix_tree_is_internal_node(arg));
}
-/**
- * radix_tree_exceptional_entry - radix_tree_deref_slot gave exceptional entry?
- * @arg: value returned by radix_tree_deref_slot
- * Returns: 0 if well-aligned pointer, non-0 if exceptional entry.
- */
-static inline int radix_tree_exceptional_entry(void *arg)
-{
- /* Not unlikely because radix_tree_exception often tested first */
- return (unsigned long)arg & RADIX_TREE_EXCEPTIONAL_ENTRY;
-}
-
/**
* radix_tree_exception - radix_tree_deref_slot returned either exception?
* @arg: value returned by radix_tree_deref_slot
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 1d3877c39a00..9c0eb4d4f444 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -17,9 +17,8 @@
*
* swp_entry_t's are *never* stored anywhere in their arch-dependent format.
*/
-#define SWP_TYPE_SHIFT(e) ((sizeof(e.val) * 8) - \
- (MAX_SWAPFILES_SHIFT + RADIX_TREE_EXCEPTIONAL_SHIFT))
-#define SWP_OFFSET_MASK(e) ((1UL << SWP_TYPE_SHIFT(e)) - 1)
+#define SWP_TYPE_SHIFT (BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
+#define SWP_OFFSET_MASK ((1UL << SWP_TYPE_SHIFT) - 1)
/*
* Store a type+offset into a swp_entry_t in an arch-independent format
@@ -28,8 +27,7 @@ static inline swp_entry_t swp_entry(unsigned long type, pgoff_t offset)
{
swp_entry_t ret;
- ret.val = (type << SWP_TYPE_SHIFT(ret)) |
- (offset & SWP_OFFSET_MASK(ret));
+ ret.val = (type << SWP_TYPE_SHIFT) | (offset & SWP_OFFSET_MASK);
return ret;
}
@@ -39,7 +37,7 @@ static inline swp_entry_t swp_entry(unsigned long type, pgoff_t offset)
*/
static inline unsigned swp_type(swp_entry_t entry)
{
- return (entry.val >> SWP_TYPE_SHIFT(entry));
+ return (entry.val >> SWP_TYPE_SHIFT);
}
/*
@@ -48,7 +46,7 @@ static inline unsigned swp_type(swp_entry_t entry)
*/
static inline pgoff_t swp_offset(swp_entry_t entry)
{
- return entry.val & SWP_OFFSET_MASK(entry);
+ return entry.val & SWP_OFFSET_MASK;
}
#ifdef CONFIG_MMU
@@ -89,16 +87,13 @@ static inline swp_entry_t radix_to_swp_entry(void *arg)
{
swp_entry_t entry;
- entry.val = (unsigned long)arg >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ entry.val = xa_to_value(arg);
return entry;
}
static inline void *swp_to_radix_entry(swp_entry_t entry)
{
- unsigned long value;
-
- value = entry.val << RADIX_TREE_EXCEPTIONAL_SHIFT;
- return (void *)(value | RADIX_TREE_EXCEPTIONAL_ENTRY);
+ return xa_mk_value(entry.val);
}
#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 2dfc8006fe64..f61806fd8002 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -5,9 +5,63 @@
* eXtensible Arrays
* Copyright (c) 2017 Microsoft Corporation
* Author: Matthew Wilcox <[email protected]>
+ *
+ * See Documentation/core-api/xarray.rst for how to use the XArray.
*/
+#include <linux/bug.h>
#include <linux/spinlock.h>
+#include <linux/types.h>
+
+/*
+ * The bottom two bits of the entry determine how the XArray interprets
+ * the contents:
+ *
+ * 00: Pointer entry
+ * 10: Internal entry
+ * x1: Value entry
+ *
+ * Attempting to store internal entries in the XArray is a bug.
+ */
+
+#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
+
+/**
+ * xa_mk_value() - Create an XArray entry from an integer.
+ * @v: Value to store in XArray.
+ *
+ * Context: Any context.
+ * Return: An entry suitable for storing in the XArray.
+ */
+static inline void *xa_mk_value(unsigned long v)
+{
+ WARN_ON((long)v < 0);
+ return (void *)((v << 1) | 1);
+}
+
+/**
+ * xa_to_value() - Get value stored in an XArray entry.
+ * @entry: XArray entry.
+ *
+ * Context: Any context.
+ * Return: The value stored in the XArray entry.
+ */
+static inline unsigned long xa_to_value(const void *entry)
+{
+ return (unsigned long)entry >> 1;
+}
+
+/**
+ * xa_is_value() - Determine if an entry is a value.
+ * @entry: XArray entry.
+ *
+ * Context: Any context.
+ * Return: True if the entry is a value, false if it is a pointer.
+ */
+static inline bool xa_is_value(const void *entry)
+{
+ return (unsigned long)entry & 1;
+}
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
diff --git a/lib/idr.c b/lib/idr.c
index c98d77fcf393..756e82c66a30 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -4,6 +4,7 @@
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
+#include <linux/xarray.h>
DEFINE_PER_CPU(struct ida_bitmap *, ida_bitmap);
static DEFINE_SPINLOCK(simple_ida_lock);
@@ -347,11 +348,8 @@ EXPORT_SYMBOL(idr_replace);
* by the number of bits in the leaf bitmap before doing a radix tree lookup.
*
* As an optimisation, if there are only a few low bits set in any given
- * leaf, instead of allocating a 128-byte bitmap, we use the 'exceptional
- * entry' functionality of the radix tree to store BITS_PER_LONG - 2 bits
- * directly in the entry. By being really tricksy, we could store
- * BITS_PER_LONG - 1 bits, but there're diminishing returns after optimising
- * for 0-3 allocated IDs.
+ * leaf, instead of allocating a 128-byte bitmap, we store the bits
+ * directly in the entry.
*
* We allow the radix tree 'exceptional' count to get out of date. Nothing
* in the IDA nor the radix tree code checks it. If it becomes important
@@ -393,12 +391,11 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
struct radix_tree_iter iter;
struct ida_bitmap *bitmap;
unsigned long index;
- unsigned bit, ebit;
+ unsigned bit;
int new;
index = start / IDA_BITMAP_BITS;
bit = start % IDA_BITMAP_BITS;
- ebit = bit + RADIX_TREE_EXCEPTIONAL_SHIFT;
slot = radix_tree_iter_init(&iter, index);
for (;;) {
@@ -413,26 +410,25 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
return PTR_ERR(slot);
}
}
- if (iter.index > index) {
+ if (iter.index > index)
bit = 0;
- ebit = RADIX_TREE_EXCEPTIONAL_SHIFT;
- }
new = iter.index * IDA_BITMAP_BITS;
bitmap = rcu_dereference_raw(*slot);
- if (radix_tree_exception(bitmap)) {
- unsigned long tmp = (unsigned long)bitmap;
- ebit = find_next_zero_bit(&tmp, BITS_PER_LONG, ebit);
- if (ebit < BITS_PER_LONG) {
- tmp |= 1UL << ebit;
- rcu_assign_pointer(*slot, (void *)tmp);
- *id = new + ebit - RADIX_TREE_EXCEPTIONAL_SHIFT;
+ if (xa_is_value(bitmap)) {
+ unsigned long tmp = xa_to_value(bitmap);
+ int vbit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE,
+ bit);
+ if (vbit < BITS_PER_XA_VALUE) {
+ tmp |= 1UL << vbit;
+ rcu_assign_pointer(*slot, xa_mk_value(tmp));
+ *id = new + vbit;
return 0;
}
bitmap = this_cpu_xchg(ida_bitmap, NULL);
if (!bitmap)
return -EAGAIN;
memset(bitmap, 0, sizeof(*bitmap));
- bitmap->bitmap[0] = tmp >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ bitmap->bitmap[0] = tmp;
rcu_assign_pointer(*slot, bitmap);
}
@@ -453,19 +449,15 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
new += bit;
if (new < 0)
return -ENOSPC;
- if (ebit < BITS_PER_LONG) {
- bitmap = (void *)((1UL << ebit) |
- RADIX_TREE_EXCEPTIONAL_ENTRY);
- radix_tree_iter_replace(root, &iter, slot,
- bitmap);
- *id = new;
- return 0;
+ if (bit < BITS_PER_XA_VALUE) {
+ bitmap = xa_mk_value(1UL << bit);
+ } else {
+ bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (!bitmap)
+ return -EAGAIN;
+ memset(bitmap, 0, sizeof(*bitmap));
+ __set_bit(bit, bitmap->bitmap);
}
- bitmap = this_cpu_xchg(ida_bitmap, NULL);
- if (!bitmap)
- return -EAGAIN;
- memset(bitmap, 0, sizeof(*bitmap));
- __set_bit(bit, bitmap->bitmap);
radix_tree_iter_replace(root, &iter, slot, bitmap);
}
@@ -496,9 +488,9 @@ void ida_remove(struct ida *ida, int id)
goto err;
bitmap = rcu_dereference_raw(*slot);
- if (radix_tree_exception(bitmap)) {
+ if (xa_is_value(bitmap)) {
btmp = (unsigned long *)slot;
- offset += RADIX_TREE_EXCEPTIONAL_SHIFT;
+ offset += 1; /* Intimate knowledge of the xa_data encoding */
if (offset >= BITS_PER_LONG)
goto err;
} else {
@@ -509,9 +501,8 @@ void ida_remove(struct ida *ida, int id)
__clear_bit(offset, btmp);
radix_tree_iter_tag_set(&ida->ida_rt, &iter, IDR_FREE);
- if (radix_tree_exception(bitmap)) {
- if (rcu_dereference_raw(*slot) ==
- (void *)RADIX_TREE_EXCEPTIONAL_ENTRY)
+ if (xa_is_value(bitmap)) {
+ if (xa_to_value(rcu_dereference_raw(*slot)) == 0)
radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
} else if (bitmap_empty(btmp, IDA_BITMAP_BITS)) {
kfree(bitmap);
@@ -539,7 +530,7 @@ void ida_destroy(struct ida *ida)
radix_tree_for_each_slot(slot, &ida->ida_rt, &iter, 0) {
struct ida_bitmap *bitmap = rcu_dereference_raw(*slot);
- if (!radix_tree_exception(bitmap))
+ if (!xa_is_value(bitmap))
kfree(bitmap);
radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
}
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 66732e2f9606..3d7bacb2f8ba 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -340,14 +340,12 @@ static void dump_ida_node(void *entry, unsigned long index)
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++)
dump_ida_node(node->slots[i],
index | (i << node->shift));
- } else if (radix_tree_exceptional_entry(entry)) {
+ } else if (xa_is_value(entry)) {
pr_debug("ida excp: %p offset %d indices %lu-%lu data %lx\n",
entry, (int)(index & RADIX_TREE_MAP_MASK),
index * IDA_BITMAP_BITS,
- index * IDA_BITMAP_BITS + BITS_PER_LONG -
- RADIX_TREE_EXCEPTIONAL_SHIFT,
- (unsigned long)entry >>
- RADIX_TREE_EXCEPTIONAL_SHIFT);
+ index * IDA_BITMAP_BITS + BITS_PER_XA_VALUE,
+ xa_to_value(entry));
} else {
struct ida_bitmap *bitmap = entry;
@@ -656,7 +654,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
BUG_ON(shift > BITS_PER_LONG);
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
- } else if (radix_tree_exceptional_entry(entry)) {
+ } else if (xa_is_value(entry)) {
/* Moving an exceptional root->rnode to a node */
node->exceptional = 1;
}
@@ -947,12 +945,12 @@ static inline int insert_entries(struct radix_tree_node *node,
!is_sibling_entry(node, old) &&
(old != RADIX_TREE_RETRY))
radix_tree_free_nodes(old);
- if (radix_tree_exceptional_entry(old))
+ if (xa_is_value(old))
node->exceptional--;
}
if (node) {
node->count += n;
- if (radix_tree_exceptional_entry(item))
+ if (xa_is_value(item))
node->exceptional += n;
}
return n;
@@ -966,7 +964,7 @@ static inline int insert_entries(struct radix_tree_node *node,
rcu_assign_pointer(*slot, item);
if (node) {
node->count++;
- if (radix_tree_exceptional_entry(item))
+ if (xa_is_value(item))
node->exceptional++;
}
return 1;
@@ -1183,8 +1181,7 @@ void __radix_tree_replace(struct radix_tree_root *root,
radix_tree_update_node_t update_node)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = !!radix_tree_exceptional_entry(item) -
- !!radix_tree_exceptional_entry(old);
+ int exceptional = !!xa_is_value(item) - !!xa_is_value(old);
int count = calculate_count(root, node, slot, item, old);
/*
@@ -1987,7 +1984,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
struct radix_tree_node *node, void __rcu **slot)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = radix_tree_exceptional_entry(old) ? -1 : 0;
+ int exceptional = xa_is_value(old) ? -1 : 0;
unsigned offset = get_slot_offset(node, slot);
int tag;
diff --git a/mm/filemap.c b/mm/filemap.c
index 7588b7f1f479..92f344f0f9ce 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -127,7 +127,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
p = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (!radix_tree_exceptional_entry(p))
+ if (!xa_is_value(p))
return -EEXIST;
mapping->nrexceptional--;
@@ -336,7 +336,7 @@ page_cache_tree_delete_batch(struct address_space *mapping,
break;
page = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;
if (!tail_pages) {
/*
@@ -1355,7 +1355,7 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
struct page *page;
page = radix_tree_lookup(&mapping->pages, index);
- if (!page || radix_tree_exceptional_entry(page))
+ if (!page || xa_is_value(page))
break;
index++;
if (index == 0)
@@ -1396,7 +1396,7 @@ pgoff_t page_cache_prev_hole(struct address_space *mapping,
struct page *page;
page = radix_tree_lookup(&mapping->pages, index);
- if (!page || radix_tree_exceptional_entry(page))
+ if (!page || xa_is_value(page))
break;
index--;
if (index == ULONG_MAX)
@@ -1539,7 +1539,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
repeat:
page = find_get_entry(mapping, offset);
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
page = NULL;
if (!page)
goto no_page;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5800093fe94a..70e10c1f3127 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1363,7 +1363,7 @@ static void collapse_shmem(struct mm_struct *mm,
page = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (radix_tree_exceptional_entry(page) || !PageUptodate(page)) {
+ if (xa_is_value(page) || !PageUptodate(page)) {
xa_unlock_irq(&mapping->pages);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
diff --git a/mm/madvise.c b/mm/madvise.c
index 4d3c922ea1a1..faec5437d0e3 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -251,7 +251,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
page = find_get_entry(mapping, index);
- if (!radix_tree_exceptional_entry(page)) {
+ if (!xa_is_value(page)) {
if (page)
put_page(page);
continue;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d89cb08ac39b..70d3f4df43d0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4447,7 +4447,7 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
/* shmem/tmpfs may report page out on swap: account for that too. */
if (shmem_mapping(mapping)) {
page = find_get_entry(mapping, pgoff);
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swp_entry_t swp = radix_to_swp_entry(page);
if (do_memsw_account())
*entry = swp;
diff --git a/mm/mincore.c b/mm/mincore.c
index fc37afe226e6..4985965aa20a 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -66,7 +66,7 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
* shmem/tmpfs may return swap: account for swapcache
* page too.
*/
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swp_entry_t swp = radix_to_swp_entry(page);
page = find_get_page(swap_address_space(swp),
swp_offset(swp));
diff --git a/mm/readahead.c b/mm/readahead.c
index 514188fd2489..4851f002605f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -177,7 +177,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
rcu_read_lock();
page = radix_tree_lookup(&mapping->pages, page_offset);
rcu_read_unlock();
- if (page && !radix_tree_exceptional_entry(page))
+ if (page && !xa_is_value(page))
continue;
page = __page_cache_alloc(gfp_mask);
diff --git a/mm/shmem.c b/mm/shmem.c
index b2fdc258853d..2616f2d3be95 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -690,7 +690,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
continue;
}
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
swapped++;
if (need_resched()) {
@@ -805,7 +805,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (index >= end)
break;
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (unfalloc)
continue;
nr_swaps_freed += !shmem_free_swap(mapping,
@@ -902,7 +902,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (index >= end)
break;
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (unfalloc)
continue;
if (shmem_free_swap(mapping, index, page)) {
@@ -1614,7 +1614,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
repeat:
swap.val = 0;
page = find_lock_entry(mapping, index);
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swap = radix_to_swp_entry(page);
page = NULL;
}
@@ -2547,7 +2547,7 @@ static pgoff_t shmem_seek_hole_data(struct address_space *mapping,
index = indices[i];
}
page = pvec.pages[i];
- if (page && !radix_tree_exceptional_entry(page)) {
+ if (page && !xa_is_value(page)) {
if (!PageUptodate(page))
page = NULL;
}
diff --git a/mm/swap.c b/mm/swap.c
index 567a7b96e41d..f62c5c7198f2 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -953,7 +953,7 @@ void pagevec_remove_exceptionals(struct pagevec *pvec)
for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
struct page *page = pvec->pages[i];
- if (!radix_tree_exceptional_entry(page))
+ if (!xa_is_value(page))
pvec->pages[j++] = page;
}
pvec->nr = j;
diff --git a/mm/truncate.c b/mm/truncate.c
index 295a33a06fac..3fe1e7461684 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -70,7 +70,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
return;
for (j = 0; j < pagevec_count(pvec); j++)
- if (radix_tree_exceptional_entry(pvec->pages[j]))
+ if (xa_is_value(pvec->pages[j]))
break;
if (j == pagevec_count(pvec))
@@ -85,7 +85,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
struct page *page = pvec->pages[i];
pgoff_t index = indices[i];
- if (!radix_tree_exceptional_entry(page)) {
+ if (!xa_is_value(page)) {
pvec->pages[j++] = page;
continue;
}
@@ -347,7 +347,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
if (index >= end)
break;
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;
if (!trylock_page(page))
@@ -442,7 +442,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
break;
}
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;
lock_page(page);
@@ -561,7 +561,7 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
if (index > end)
break;
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
invalidate_exceptional_entry(mapping, index,
page);
continue;
@@ -692,7 +692,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
if (index > end)
break;
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (!invalidate_exceptional_entry2(mapping,
index, page))
ret = -EBUSY;
diff --git a/mm/workingset.c b/mm/workingset.c
index 3cb3586181e6..3afeb84720f4 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -155,8 +155,8 @@
* refault distance will immediately activate the refaulting page.
*/
-#define EVICTION_SHIFT (RADIX_TREE_EXCEPTIONAL_ENTRY + \
- NODES_SHIFT + \
+#define EVICTION_SHIFT ((BITS_PER_LONG - BITS_PER_XA_VALUE) + \
+ NODES_SHIFT + \
MEM_CGROUP_ID_SHIFT)
#define EVICTION_MASK (~0UL >> EVICTION_SHIFT)
@@ -175,18 +175,16 @@ static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction)
eviction >>= bucket_order;
eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
- eviction = (eviction << RADIX_TREE_EXCEPTIONAL_SHIFT);
- return (void *)(eviction | RADIX_TREE_EXCEPTIONAL_ENTRY);
+ return xa_mk_value(eviction);
}
static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
unsigned long *evictionp)
{
- unsigned long entry = (unsigned long)shadow;
+ unsigned long entry = xa_to_value(shadow);
int memcgid, nid;
- entry >>= RADIX_TREE_EXCEPTIONAL_SHIFT;
nid = entry & ((1UL << NODES_SHIFT) - 1);
entry >>= NODES_SHIFT;
memcgid = entry & ((1UL << MEM_CGROUP_ID_SHIFT) - 1);
@@ -453,7 +451,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
goto out_invalid;
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
if (node->slots[i]) {
- if (WARN_ON_ONCE(!radix_tree_exceptional_entry(node->slots[i])))
+ if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
goto out_invalid;
if (WARN_ON_ONCE(!node->exceptional))
goto out_invalid;
diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
index 44ef9eba5a7a..2b68969ae6dc 100644
--- a/tools/testing/radix-tree/idr-test.c
+++ b/tools/testing/radix-tree/idr-test.c
@@ -19,7 +19,7 @@
#include "test.h"
-#define DUMMY_PTR ((void *)0x12)
+#define DUMMY_PTR ((void *)0x10)
int item_idr_free(int id, void *p, void *data)
{
@@ -341,11 +341,11 @@ void ida_check_conv(void)
for (i = 0; i < 1000000; i++) {
int err = ida_get_new(&ida, &id);
if (err == -EAGAIN) {
- assert((i % IDA_BITMAP_BITS) == (BITS_PER_LONG - 2));
+ assert((i % IDA_BITMAP_BITS) == (BITS_PER_LONG - 1));
assert(ida_pre_get(&ida, GFP_KERNEL));
err = ida_get_new(&ida, &id);
} else {
- assert((i % IDA_BITMAP_BITS) != (BITS_PER_LONG - 2));
+ assert((i % IDA_BITMAP_BITS) != (BITS_PER_LONG - 1));
}
assert(!err);
assert(id == i);
diff --git a/tools/testing/radix-tree/linux/radix-tree.h b/tools/testing/radix-tree/linux/radix-tree.h
index 24f13d27a8da..de3f655caca3 100644
--- a/tools/testing/radix-tree/linux/radix-tree.h
+++ b/tools/testing/radix-tree/linux/radix-tree.h
@@ -4,6 +4,7 @@
#include "generated/map-shift.h"
#include "../../../../include/linux/radix-tree.h"
+#include <linux/xarray.h>
extern int kmalloc_verbose;
extern int test_verbose;
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 59245b3d587c..684e76f79f4a 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -38,12 +38,11 @@ static void __multiorder_tag_test(int index, int order)
/*
* Verify we get collisions for covered indices. We try and fail to
- * insert an exceptional entry so we don't leak memory via
+ * insert a data entry so we don't leak memory via
* item_insert_order().
*/
for_each_index(i, base, order) {
- err = __radix_tree_insert(&tree, i, order,
- (void *)(0xA0 | RADIX_TREE_EXCEPTIONAL_ENTRY));
+ err = __radix_tree_insert(&tree, i, order, xa_mk_value(0xA0));
assert(err == -EEXIST);
}
@@ -379,8 +378,8 @@ static void multiorder_join1(unsigned long index,
}
/*
- * Check that the accounting of exceptional entries is handled correctly
- * by joining an exceptional entry to a normal pointer.
+ * Check that the accounting of inline data entries is handled correctly
+ * by joining a data entry to a normal pointer.
*/
static void multiorder_join2(unsigned order1, unsigned order2)
{
@@ -390,9 +389,9 @@ static void multiorder_join2(unsigned order1, unsigned order2)
void *item2;
item_insert_order(&tree, 0, order2);
- radix_tree_insert(&tree, 1 << order2, (void *)0x12UL);
+ radix_tree_insert(&tree, 1 << order2, xa_mk_value(5));
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
- assert(item2 == (void *)0x12UL);
+ assert(item2 == xa_mk_value(5));
assert(node->exceptional == 1);
item2 = radix_tree_lookup(&tree, 0);
@@ -406,7 +405,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
}
/*
- * This test revealed an accounting bug for exceptional entries at one point.
+ * This test revealed an accounting bug for inline data entries at one point.
* Nodes were being freed back into the pool with an elevated exception count
* by radix_tree_join() and then radix_tree_split() was failing to zero the
* count of exceptional entries.
@@ -420,16 +419,16 @@ static void multiorder_join3(unsigned int order)
unsigned long i;
for (i = 0; i < (1 << order); i++) {
- radix_tree_insert(&tree, i, (void *)0x12UL);
+ radix_tree_insert(&tree, i, xa_mk_value(5));
}
- radix_tree_join(&tree, 0, order, (void *)0x16UL);
+ radix_tree_join(&tree, 0, order, xa_mk_value(7));
rcu_barrier();
radix_tree_split(&tree, 0, 0);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
- radix_tree_iter_replace(&tree, &iter, slot, (void *)0x12UL);
+ radix_tree_iter_replace(&tree, &iter, slot, xa_mk_value(5));
}
__radix_tree_lookup(&tree, 0, &node, NULL);
@@ -516,10 +515,10 @@ static void __multiorder_split2(int old_order, int new_order)
struct radix_tree_node *node;
void *item;
- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));
item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);
radix_tree_split(&tree, 0, new_order);
@@ -529,7 +528,7 @@ static void __multiorder_split2(int old_order, int new_order)
}
item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item != (void *)0x12);
+ assert(item != xa_mk_value(5));
assert(node->exceptional == 0);
item_kill_tree(&tree);
@@ -543,40 +542,40 @@ static void __multiorder_split3(int old_order, int new_order)
struct radix_tree_node *node;
void *item;
- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));
item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);
radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
- radix_tree_iter_replace(&tree, &iter, slot, (void *)0x16);
+ radix_tree_iter_replace(&tree, &iter, slot, xa_mk_value(7));
}
item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x16);
+ assert(item == xa_mk_value(7));
assert(node->exceptional > 0);
item_kill_tree(&tree);
- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));
item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);
radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
if (iter.index == (1 << new_order))
radix_tree_iter_replace(&tree, &iter, slot,
- (void *)0x16);
+ xa_mk_value(7));
else
radix_tree_iter_replace(&tree, &iter, slot, NULL);
}
item = __radix_tree_lookup(&tree, 1 << new_order, &node, NULL);
- assert(item == (void *)0x16);
+ assert(item == xa_mk_value(7));
assert(node->count == node->exceptional);
do {
node = node->parent;
@@ -609,13 +608,13 @@ static void multiorder_account(void)
item_insert_order(&tree, 0, 5);
- __radix_tree_insert(&tree, 1 << 5, 5, (void *)0x12);
+ __radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 0, &node, NULL);
assert(node->count == node->exceptional * 2);
radix_tree_delete(&tree, 1 << 5);
assert(node->exceptional == 0);
- __radix_tree_insert(&tree, 1 << 5, 5, (void *)0x12);
+ __radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 1 << 5, &node, &slot);
assert(node->count == node->exceptional * 2);
__radix_tree_replace(&tree, node, slot, NULL, NULL);
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 5978ab1f403d..0d69c49177c6 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -276,7 +276,7 @@ void item_kill_tree(struct radix_tree_root *root)
int nfound;
radix_tree_for_each_slot(slot, root, &iter, 0) {
- if (radix_tree_exceptional_entry(*slot))
+ if (xa_is_value(*slot))
radix_tree_delete(root, iter.index);
}
--
2.16.1
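[A minimal sketch, not part of the series, of how the value-entry helpers
added above compose; it assumes only the definitions introduced by this
patch in include/linux/xarray.h and include/linux/swapops.h, and the
constants are arbitrary:]

	void *entry = xa_mk_value(0x1234);	/* stored as (0x1234 << 1) | 1 */

	BUG_ON(!xa_is_value(entry));		/* bottom bit set => value entry */
	BUG_ON(xa_to_value(entry) != 0x1234);	/* decode recovers the payload */

	/* Swap entries in the page cache round-trip through the same encoding */
	swp_entry_t swp = swp_entry(1, 0x100);
	void *slot = swp_to_radix_entry(swp);	/* == xa_mk_value(swp.val) */
	BUG_ON(radix_to_swp_entry(slot).val != swp.val);

Only bit 0 of the entry is consumed, so BITS_PER_XA_VALUE is
BITS_PER_LONG - 1, one bit more payload than the old
RADIX_TREE_EXCEPTIONAL_SHIFT encoding allowed.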
From: Matthew Wilcox <[email protected]>
Don't open-code accesses to data structure internals.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/fscache/cookie.c | 2 +-
fs/fscache/object.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index ff84258132bb..e9054e0c1a49 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
@@ -608,7 +608,7 @@ void __fscache_relinquish_cookie(struct fscache_cookie *cookie, bool retire)
/* Clear pointers back to the netfs */
cookie->netfs_data = NULL;
cookie->def = NULL;
- BUG_ON(cookie->stores.rnode);
+ BUG_ON(!radix_tree_empty(&cookie->stores));
if (cookie->parent) {
ASSERTCMP(atomic_read(&cookie->parent->usage), >, 0);
diff --git a/fs/fscache/object.c b/fs/fscache/object.c
index 7a182c87f378..aa0e71f02c33 100644
--- a/fs/fscache/object.c
+++ b/fs/fscache/object.c
@@ -956,7 +956,7 @@ static const struct fscache_state *_fscache_invalidate_object(struct fscache_obj
* retire the object instead.
*/
if (!fscache_use_cookie(object)) {
- ASSERT(object->cookie->stores.rnode == NULL);
+ ASSERT(radix_tree_empty(&object->cookie->stores));
set_bit(FSCACHE_OBJECT_RETIRED, &object->flags);
_leave(" [no cookie]");
return transit_to(KILL_OBJECT);
--
2.16.1
From: Matthew Wilcox <[email protected]>
In order to test the memory allocation failure paths, the radix tree
test suite fails allocations if __GFP_NOWARN is set. That happens to work
for the radix tree implementation, but the semantics we really want
are to fail allocations which are not GFP_KERNEL. Do this by failing
allocations which don't have the __GFP_DIRECT_RECLAIM bit set.
Signed-off-by: Matthew Wilcox <[email protected]>
---
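[Illustrative only, not part of the change: with this stub, allocations
behave as below for a hypothetical cache pointer 'cachep', because
GFP_KERNEL includes __GFP_DIRECT_RECLAIM while GFP_NOWAIT and GFP_ATOMIC
do not:]

	void *node;

	node = kmem_cache_alloc(cachep, GFP_KERNEL);	/* allocation proceeds */
	node = kmem_cache_alloc(cachep, GFP_NOWAIT);	/* returns NULL */
	node = kmem_cache_alloc(cachep, GFP_ATOMIC);	/* returns NULL */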
tools/testing/radix-tree/linux.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/radix-tree/linux.c b/tools/testing/radix-tree/linux.c
index 6903ccf35595..f7f3caed3650 100644
--- a/tools/testing/radix-tree/linux.c
+++ b/tools/testing/radix-tree/linux.c
@@ -29,7 +29,7 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, int flags)
{
struct radix_tree_node *node;
- if (flags & __GFP_NOWARN)
+ if (!(flags & __GFP_DIRECT_RECLAIM))
return NULL;
pthread_mutex_lock(&cachep->lock);
--
2.16.1
From: Matthew Wilcox <[email protected]>
Unicore doesn't walk the VMA tree in its flush_dcache_page()
implementation, so it has no need to take the tree_lock.
Signed-off-by: Matthew Wilcox <[email protected]>
---
arch/unicore32/include/asm/cacheflush.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/unicore32/include/asm/cacheflush.h b/arch/unicore32/include/asm/cacheflush.h
index a5e08e2d5d6d..1d9132b66039 100644
--- a/arch/unicore32/include/asm/cacheflush.h
+++ b/arch/unicore32/include/asm/cacheflush.h
@@ -170,10 +170,8 @@ extern void flush_cache_page(struct vm_area_struct *vma,
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *);
-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) do { } while (0)
+#define flush_dcache_mmap_unlock(mapping) do { } while (0)
#define flush_icache_user_range(vma, page, addr, len) \
flush_dcache_page(page)
--
2.16.1
From: Matthew Wilcox <[email protected]>
XFS currently contains a copy-and-paste of __set_page_dirty(). Export
it from buffer.c instead.
Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/buffer.c | 3 ++-
fs/xfs/xfs_aops.c | 15 ++-------------
include/linux/mm.h | 1 +
3 files changed, 5 insertions(+), 14 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 9a73924db22f..0b487cdb7124 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -594,7 +594,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
*
* The caller must hold lock_page_memcg().
*/
-static void __set_page_dirty(struct page *page, struct address_space *mapping,
+void __set_page_dirty(struct page *page, struct address_space *mapping,
int warn)
{
unsigned long flags;
@@ -608,6 +608,7 @@ static void __set_page_dirty(struct page *page, struct address_space *mapping,
}
spin_unlock_irqrestore(&mapping->tree_lock, flags);
}
+EXPORT_SYMBOL_GPL(__set_page_dirty);
/*
* Add a page to the dirty page list.
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 9c6a830da0ee..31f2c4895a46 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -1472,19 +1472,8 @@ xfs_vm_set_page_dirty(
newly_dirty = !TestSetPageDirty(page);
spin_unlock(&mapping->private_lock);
- if (newly_dirty) {
- /* sigh - __set_page_dirty() is static, so copy it here, too */
- unsigned long flags;
-
- spin_lock_irqsave(&mapping->tree_lock, flags);
- if (page->mapping) { /* Race with truncate? */
- WARN_ON_ONCE(!PageUptodate(page));
- account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
- page_index(page), PAGECACHE_TAG_DIRTY);
- }
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
- }
+ if (newly_dirty)
+ __set_page_dirty(page, mapping, 1);
unlock_page_memcg(page);
if (newly_dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad06d42adb1a..47b0fb0a6e41 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1454,6 +1454,7 @@ extern int try_to_release_page(struct page * page, gfp_t gfp_mask);
extern void do_invalidatepage(struct page *page, unsigned int offset,
unsigned int length);
+void __set_page_dirty(struct page *, struct address_space *, int warn);
int __set_page_dirty_nobuffers(struct page *page);
int __set_page_dirty_no_writeback(struct page *page);
int redirty_page_for_writepage(struct writeback_control *wbc,
--
2.16.1
From: Matthew Wilcox <[email protected]>
This is a simple rename, except that xa_ail becomes ail_head.
Signed-off-by: Matthew Wilcox <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
---
fs/xfs/xfs_buf_item.c | 10 ++--
fs/xfs/xfs_dquot.c | 4 +-
fs/xfs/xfs_dquot_item.c | 11 ++--
fs/xfs/xfs_inode_item.c | 22 +++----
fs/xfs/xfs_log.c | 6 +-
fs/xfs/xfs_log_recover.c | 80 ++++++++++++-------------
fs/xfs/xfs_trans.c | 18 +++---
fs/xfs/xfs_trans_ail.c | 152 +++++++++++++++++++++++------------------------
fs/xfs/xfs_trans_buf.c | 4 +-
fs/xfs/xfs_trans_priv.h | 42 ++++++-------
10 files changed, 175 insertions(+), 174 deletions(-)
diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
index 270ddb4d2313..82ad270e390e 100644
--- a/fs/xfs/xfs_buf_item.c
+++ b/fs/xfs/xfs_buf_item.c
@@ -460,7 +460,7 @@ xfs_buf_item_unpin(
list_del_init(&bp->b_li_list);
bp->b_iodone = NULL;
} else {
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
xfs_trans_ail_delete(ailp, lip, SHUTDOWN_LOG_IO_ERROR);
xfs_buf_item_relse(bp);
ASSERT(bp->b_log_item == NULL);
@@ -1057,12 +1057,12 @@ xfs_buf_do_callbacks_fail(
lip = list_first_entry(&bp->b_li_list, struct xfs_log_item,
li_bio_list);
ailp = lip->li_ailp;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
list_for_each_entry(lip, &bp->b_li_list, li_bio_list) {
if (lip->li_ops->iop_error)
lip->li_ops->iop_error(lip, bp);
}
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
static bool
@@ -1226,7 +1226,7 @@ xfs_buf_iodone(
*
* Either way, AIL is useless if we're forcing a shutdown.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
xfs_trans_ail_delete(ailp, lip, SHUTDOWN_CORRUPT_INCORE);
xfs_buf_item_free(BUF_ITEM(lip));
}
@@ -1246,7 +1246,7 @@ xfs_buf_resubmit_failed_buffers(
/*
* Clear XFS_LI_FAILED flag from all items before resubmit
*
- * XFS_LI_FAILED set/clear is protected by xa_lock, caller this
+ * XFS_LI_FAILED set/clear is protected by ail_lock, caller this
* function already have it acquired
*/
list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
index 43572f8a1b8e..c4041b676b7b 100644
--- a/fs/xfs/xfs_dquot.c
+++ b/fs/xfs/xfs_dquot.c
@@ -920,7 +920,7 @@ xfs_qm_dqflush_done(
(lip->li_flags & XFS_LI_FAILED))) {
/* xfs_trans_ail_delete() drops the AIL lock. */
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
if (lip->li_lsn == qip->qli_flush_lsn) {
xfs_trans_ail_delete(ailp, lip, SHUTDOWN_CORRUPT_INCORE);
} else {
@@ -930,7 +930,7 @@ xfs_qm_dqflush_done(
*/
if (lip->li_flags & XFS_LI_FAILED)
xfs_clear_li_failed(lip);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
}
diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c
index 96eaa6933709..4b331e354da7 100644
--- a/fs/xfs/xfs_dquot_item.c
+++ b/fs/xfs/xfs_dquot_item.c
@@ -157,8 +157,9 @@ xfs_dquot_item_error(
STATIC uint
xfs_qm_dquot_logitem_push(
struct xfs_log_item *lip,
- struct list_head *buffer_list) __releases(&lip->li_ailp->xa_lock)
- __acquires(&lip->li_ailp->xa_lock)
+ struct list_head *buffer_list)
+ __releases(&lip->li_ailp->ail_lock)
+ __acquires(&lip->li_ailp->ail_lock)
{
struct xfs_dquot *dqp = DQUOT_ITEM(lip)->qli_dquot;
struct xfs_buf *bp = lip->li_buf;
@@ -205,7 +206,7 @@ xfs_qm_dquot_logitem_push(
goto out_unlock;
}
- spin_unlock(&lip->li_ailp->xa_lock);
+ spin_unlock(&lip->li_ailp->ail_lock);
error = xfs_qm_dqflush(dqp, &bp);
if (error) {
@@ -217,7 +218,7 @@ xfs_qm_dquot_logitem_push(
xfs_buf_relse(bp);
}
- spin_lock(&lip->li_ailp->xa_lock);
+ spin_lock(&lip->li_ailp->ail_lock);
out_unlock:
xfs_dqunlock(dqp);
return rval;
@@ -400,7 +401,7 @@ xfs_qm_qoffend_logitem_committed(
* Delete the qoff-start logitem from the AIL.
* xfs_trans_ail_delete() drops the AIL lock.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
xfs_trans_ail_delete(ailp, &qfs->qql_item, SHUTDOWN_LOG_IO_ERROR);
kmem_free(qfs->qql_item.li_lv_shadow);
diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
index d5037f060d6f..7666eae8844f 100644
--- a/fs/xfs/xfs_inode_item.c
+++ b/fs/xfs/xfs_inode_item.c
@@ -502,8 +502,8 @@ STATIC uint
xfs_inode_item_push(
struct xfs_log_item *lip,
struct list_head *buffer_list)
- __releases(&lip->li_ailp->xa_lock)
- __acquires(&lip->li_ailp->xa_lock)
+ __releases(&lip->li_ailp->ail_lock)
+ __acquires(&lip->li_ailp->ail_lock)
{
struct xfs_inode_log_item *iip = INODE_ITEM(lip);
struct xfs_inode *ip = iip->ili_inode;
@@ -562,7 +562,7 @@ xfs_inode_item_push(
ASSERT(iip->ili_fields != 0 || XFS_FORCED_SHUTDOWN(ip->i_mount));
ASSERT(iip->ili_logged == 0 || XFS_FORCED_SHUTDOWN(ip->i_mount));
- spin_unlock(&lip->li_ailp->xa_lock);
+ spin_unlock(&lip->li_ailp->ail_lock);
error = xfs_iflush(ip, &bp);
if (!error) {
@@ -571,7 +571,7 @@ xfs_inode_item_push(
xfs_buf_relse(bp);
}
- spin_lock(&lip->li_ailp->xa_lock);
+ spin_lock(&lip->li_ailp->ail_lock);
out_unlock:
xfs_iunlock(ip, XFS_ILOCK_SHARED);
return rval;
@@ -759,7 +759,7 @@ xfs_iflush_done(
bool mlip_changed = false;
/* this is an opencoded batch version of xfs_trans_ail_delete */
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
list_for_each_entry(blip, &tmp, li_bio_list) {
if (INODE_ITEM(blip)->ili_logged &&
blip->li_lsn == INODE_ITEM(blip)->ili_flush_lsn)
@@ -770,15 +770,15 @@ xfs_iflush_done(
}
if (mlip_changed) {
- if (!XFS_FORCED_SHUTDOWN(ailp->xa_mount))
- xlog_assign_tail_lsn_locked(ailp->xa_mount);
- if (list_empty(&ailp->xa_ail))
- wake_up_all(&ailp->xa_empty);
+ if (!XFS_FORCED_SHUTDOWN(ailp->ail_mount))
+ xlog_assign_tail_lsn_locked(ailp->ail_mount);
+ if (list_empty(&ailp->ail_head))
+ wake_up_all(&ailp->ail_empty);
}
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
if (mlip_changed)
- xfs_log_space_wake(ailp->xa_mount);
+ xfs_log_space_wake(ailp->ail_mount);
}
/*
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 3e5ba1ecc080..2f529fd7bf67 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1149,7 +1149,7 @@ xlog_assign_tail_lsn_locked(
struct xfs_log_item *lip;
xfs_lsn_t tail_lsn;
- assert_spin_locked(&mp->m_ail->xa_lock);
+ assert_spin_locked(&mp->m_ail->ail_lock);
/*
* To make sure we always have a valid LSN for the log tail we keep
@@ -1172,9 +1172,9 @@ xlog_assign_tail_lsn(
{
xfs_lsn_t tail_lsn;
- spin_lock(&mp->m_ail->xa_lock);
+ spin_lock(&mp->m_ail->ail_lock);
tail_lsn = xlog_assign_tail_lsn_locked(mp);
- spin_unlock(&mp->m_ail->xa_lock);
+ spin_unlock(&mp->m_ail->ail_lock);
return tail_lsn;
}
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 00240c9ee72e..8d79838da2f6 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -3434,7 +3434,7 @@ xlog_recover_efi_pass2(
}
atomic_set(&efip->efi_next_extent, efi_formatp->efi_nextents);
- spin_lock(&log->l_ailp->xa_lock);
+ spin_lock(&log->l_ailp->ail_lock);
/*
* The EFI has two references. One for the EFD and one for EFI to ensure
* it makes it into the AIL. Insert the EFI into the AIL directly and
@@ -3477,7 +3477,7 @@ xlog_recover_efd_pass2(
* Search for the EFI with the id in the EFD format structure in the
* AIL.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) {
if (lip->li_type == XFS_LI_EFI) {
@@ -3487,9 +3487,9 @@ xlog_recover_efd_pass2(
* Drop the EFD reference to the EFI. This
* removes the EFI from the AIL and frees it.
*/
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_efi_release(efip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
break;
}
}
@@ -3497,7 +3497,7 @@ xlog_recover_efd_pass2(
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return 0;
}
@@ -3530,7 +3530,7 @@ xlog_recover_rui_pass2(
}
atomic_set(&ruip->rui_next_extent, rui_formatp->rui_nextents);
- spin_lock(&log->l_ailp->xa_lock);
+ spin_lock(&log->l_ailp->ail_lock);
/*
* The RUI has two references. One for the RUD and one for RUI to ensure
* it makes it into the AIL. Insert the RUI into the AIL directly and
@@ -3570,7 +3570,7 @@ xlog_recover_rud_pass2(
* Search for the RUI with the id in the RUD format structure in the
* AIL.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) {
if (lip->li_type == XFS_LI_RUI) {
@@ -3580,9 +3580,9 @@ xlog_recover_rud_pass2(
* Drop the RUD reference to the RUI. This
* removes the RUI from the AIL and frees it.
*/
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_rui_release(ruip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
break;
}
}
@@ -3590,7 +3590,7 @@ xlog_recover_rud_pass2(
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return 0;
}
@@ -3646,7 +3646,7 @@ xlog_recover_cui_pass2(
}
atomic_set(&cuip->cui_next_extent, cui_formatp->cui_nextents);
- spin_lock(&log->l_ailp->xa_lock);
+ spin_lock(&log->l_ailp->ail_lock);
/*
* The CUI has two references. One for the CUD and one for CUI to ensure
* it makes it into the AIL. Insert the CUI into the AIL directly and
@@ -3687,7 +3687,7 @@ xlog_recover_cud_pass2(
* Search for the CUI with the id in the CUD format structure in the
* AIL.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) {
if (lip->li_type == XFS_LI_CUI) {
@@ -3697,9 +3697,9 @@ xlog_recover_cud_pass2(
* Drop the CUD reference to the CUI. This
* removes the CUI from the AIL and frees it.
*/
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_cui_release(cuip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
break;
}
}
@@ -3707,7 +3707,7 @@ xlog_recover_cud_pass2(
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return 0;
}
@@ -3765,7 +3765,7 @@ xlog_recover_bui_pass2(
}
atomic_set(&buip->bui_next_extent, bui_formatp->bui_nextents);
- spin_lock(&log->l_ailp->xa_lock);
+ spin_lock(&log->l_ailp->ail_lock);
/*
* The RUI has two references. One for the RUD and one for RUI to ensure
* it makes it into the AIL. Insert the RUI into the AIL directly and
@@ -3806,7 +3806,7 @@ xlog_recover_bud_pass2(
* Search for the BUI with the id in the BUD format structure in the
* AIL.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) {
if (lip->li_type == XFS_LI_BUI) {
@@ -3816,9 +3816,9 @@ xlog_recover_bud_pass2(
* Drop the BUD reference to the BUI. This
* removes the BUI from the AIL and frees it.
*/
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_bui_release(buip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
break;
}
}
@@ -3826,7 +3826,7 @@ xlog_recover_bud_pass2(
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return 0;
}
@@ -4659,9 +4659,9 @@ xlog_recover_process_efi(
if (test_bit(XFS_EFI_RECOVERED, &efip->efi_flags))
return 0;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
error = xfs_efi_recover(mp, efip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
return error;
}
@@ -4677,9 +4677,9 @@ xlog_recover_cancel_efi(
efip = container_of(lip, struct xfs_efi_log_item, efi_item);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_efi_release(efip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
}
/* Recover the RUI if necessary. */
@@ -4699,9 +4699,9 @@ xlog_recover_process_rui(
if (test_bit(XFS_RUI_RECOVERED, &ruip->rui_flags))
return 0;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
error = xfs_rui_recover(mp, ruip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
return error;
}
@@ -4717,9 +4717,9 @@ xlog_recover_cancel_rui(
ruip = container_of(lip, struct xfs_rui_log_item, rui_item);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_rui_release(ruip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
}
/* Recover the CUI if necessary. */
@@ -4740,9 +4740,9 @@ xlog_recover_process_cui(
if (test_bit(XFS_CUI_RECOVERED, &cuip->cui_flags))
return 0;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
error = xfs_cui_recover(mp, cuip, dfops);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
return error;
}
@@ -4758,9 +4758,9 @@ xlog_recover_cancel_cui(
cuip = container_of(lip, struct xfs_cui_log_item, cui_item);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_cui_release(cuip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
}
/* Recover the BUI if necessary. */
@@ -4781,9 +4781,9 @@ xlog_recover_process_bui(
if (test_bit(XFS_BUI_RECOVERED, &buip->bui_flags))
return 0;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
error = xfs_bui_recover(mp, buip, dfops);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
return error;
}
@@ -4799,9 +4799,9 @@ xlog_recover_cancel_bui(
buip = container_of(lip, struct xfs_bui_log_item, bui_item);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
xfs_bui_release(buip);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
}
/* Is this log item a deferred action intent? */
@@ -4889,7 +4889,7 @@ xlog_recover_process_intents(
#endif
ailp = log->l_ailp;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
#if defined(DEBUG) || defined(XFS_WARN)
last_lsn = xlog_assign_lsn(log->l_curr_cycle, log->l_curr_block);
@@ -4943,7 +4943,7 @@ xlog_recover_process_intents(
}
out:
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
if (error)
xfs_defer_cancel(&dfops);
else
@@ -4966,7 +4966,7 @@ xlog_recover_cancel_intents(
struct xfs_ail *ailp;
ailp = log->l_ailp;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) {
/*
@@ -5000,7 +5000,7 @@ xlog_recover_cancel_intents(
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return error;
}
diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
index 86f92df32c42..ec6b01834705 100644
--- a/fs/xfs/xfs_trans.c
+++ b/fs/xfs/xfs_trans.c
@@ -803,8 +803,8 @@ xfs_log_item_batch_insert(
{
int i;
- spin_lock(&ailp->xa_lock);
- /* xfs_trans_ail_update_bulk drops ailp->xa_lock */
+ spin_lock(&ailp->ail_lock);
+ /* xfs_trans_ail_update_bulk drops ailp->ail_lock */
xfs_trans_ail_update_bulk(ailp, cur, log_items, nr_items, commit_lsn);
for (i = 0; i < nr_items; i++) {
@@ -847,9 +847,9 @@ xfs_trans_committed_bulk(
struct xfs_ail_cursor cur;
int i = 0;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
xfs_trans_ail_cursor_last(ailp, &cur, commit_lsn);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
/* unpin all the log items */
for (lv = log_vector; lv; lv = lv->lv_next ) {
@@ -869,7 +869,7 @@ xfs_trans_committed_bulk(
* object into the AIL as we are in a shutdown situation.
*/
if (aborted) {
- ASSERT(XFS_FORCED_SHUTDOWN(ailp->xa_mount));
+ ASSERT(XFS_FORCED_SHUTDOWN(ailp->ail_mount));
lip->li_ops->iop_unpin(lip, 1);
continue;
}
@@ -883,11 +883,11 @@ xfs_trans_committed_bulk(
* not affect the AIL cursor the bulk insert path is
* using.
*/
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
if (XFS_LSN_CMP(item_lsn, lip->li_lsn) > 0)
xfs_trans_ail_update(ailp, lip, item_lsn);
else
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
lip->li_ops->iop_unpin(lip, 0);
continue;
}
@@ -905,9 +905,9 @@ xfs_trans_committed_bulk(
if (i)
xfs_log_item_batch_insert(ailp, &cur, log_items, i, commit_lsn);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
/*
diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
index cef89f7127d3..d4a2445215e6 100644
--- a/fs/xfs/xfs_trans_ail.c
+++ b/fs/xfs/xfs_trans_ail.c
@@ -40,7 +40,7 @@ xfs_ail_check(
{
xfs_log_item_t *prev_lip;
- if (list_empty(&ailp->xa_ail))
+ if (list_empty(&ailp->ail_head))
return;
/*
@@ -48,11 +48,11 @@ xfs_ail_check(
*/
ASSERT((lip->li_flags & XFS_LI_IN_AIL) != 0);
prev_lip = list_entry(lip->li_ail.prev, xfs_log_item_t, li_ail);
- if (&prev_lip->li_ail != &ailp->xa_ail)
+ if (&prev_lip->li_ail != &ailp->ail_head)
ASSERT(XFS_LSN_CMP(prev_lip->li_lsn, lip->li_lsn) <= 0);
prev_lip = list_entry(lip->li_ail.next, xfs_log_item_t, li_ail);
- if (&prev_lip->li_ail != &ailp->xa_ail)
+ if (&prev_lip->li_ail != &ailp->ail_head)
ASSERT(XFS_LSN_CMP(prev_lip->li_lsn, lip->li_lsn) >= 0);
@@ -69,10 +69,10 @@ static xfs_log_item_t *
xfs_ail_max(
struct xfs_ail *ailp)
{
- if (list_empty(&ailp->xa_ail))
+ if (list_empty(&ailp->ail_head))
return NULL;
- return list_entry(ailp->xa_ail.prev, xfs_log_item_t, li_ail);
+ return list_entry(ailp->ail_head.prev, xfs_log_item_t, li_ail);
}
/*
@@ -84,7 +84,7 @@ xfs_ail_next(
struct xfs_ail *ailp,
xfs_log_item_t *lip)
{
- if (lip->li_ail.next == &ailp->xa_ail)
+ if (lip->li_ail.next == &ailp->ail_head)
return NULL;
return list_first_entry(&lip->li_ail, xfs_log_item_t, li_ail);
@@ -105,11 +105,11 @@ xfs_ail_min_lsn(
xfs_lsn_t lsn = 0;
xfs_log_item_t *lip;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_ail_min(ailp);
if (lip)
lsn = lip->li_lsn;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return lsn;
}
@@ -124,11 +124,11 @@ xfs_ail_max_lsn(
xfs_lsn_t lsn = 0;
xfs_log_item_t *lip;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
lip = xfs_ail_max(ailp);
if (lip)
lsn = lip->li_lsn;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
return lsn;
}
@@ -146,7 +146,7 @@ xfs_trans_ail_cursor_init(
struct xfs_ail_cursor *cur)
{
cur->item = NULL;
- list_add_tail(&cur->list, &ailp->xa_cursors);
+ list_add_tail(&cur->list, &ailp->ail_cursors);
}
/*
@@ -194,7 +194,7 @@ xfs_trans_ail_cursor_clear(
{
struct xfs_ail_cursor *cur;
- list_for_each_entry(cur, &ailp->xa_cursors, list) {
+ list_for_each_entry(cur, &ailp->ail_cursors, list) {
if (cur->item == lip)
cur->item = (struct xfs_log_item *)
((uintptr_t)cur->item | 1);
@@ -222,7 +222,7 @@ xfs_trans_ail_cursor_first(
goto out;
}
- list_for_each_entry(lip, &ailp->xa_ail, li_ail) {
+ list_for_each_entry(lip, &ailp->ail_head, li_ail) {
if (XFS_LSN_CMP(lip->li_lsn, lsn) >= 0)
goto out;
}
@@ -241,7 +241,7 @@ __xfs_trans_ail_cursor_last(
{
xfs_log_item_t *lip;
- list_for_each_entry_reverse(lip, &ailp->xa_ail, li_ail) {
+ list_for_each_entry_reverse(lip, &ailp->ail_head, li_ail) {
if (XFS_LSN_CMP(lip->li_lsn, lsn) <= 0)
return lip;
}
@@ -310,7 +310,7 @@ xfs_ail_splice(
if (lip)
list_splice(list, &lip->li_ail);
else
- list_splice(list, &ailp->xa_ail);
+ list_splice(list, &ailp->ail_head);
}
/*
@@ -335,17 +335,17 @@ xfsaild_push_item(
* If log item pinning is enabled, skip the push and track the item as
* pinned. This can help induce head-behind-tail conditions.
*/
- if (XFS_TEST_ERROR(false, ailp->xa_mount, XFS_ERRTAG_LOG_ITEM_PIN))
+ if (XFS_TEST_ERROR(false, ailp->ail_mount, XFS_ERRTAG_LOG_ITEM_PIN))
return XFS_ITEM_PINNED;
- return lip->li_ops->iop_push(lip, &ailp->xa_buf_list);
+ return lip->li_ops->iop_push(lip, &ailp->ail_buf_list);
}
static long
xfsaild_push(
struct xfs_ail *ailp)
{
- xfs_mount_t *mp = ailp->xa_mount;
+ xfs_mount_t *mp = ailp->ail_mount;
struct xfs_ail_cursor cur;
xfs_log_item_t *lip;
xfs_lsn_t lsn;
@@ -360,30 +360,30 @@ xfsaild_push(
* buffers the last time we ran, force the log first and wait for it
* before pushing again.
*/
- if (ailp->xa_log_flush && ailp->xa_last_pushed_lsn == 0 &&
- (!list_empty_careful(&ailp->xa_buf_list) ||
+ if (ailp->ail_log_flush && ailp->ail_last_pushed_lsn == 0 &&
+ (!list_empty_careful(&ailp->ail_buf_list) ||
xfs_ail_min_lsn(ailp))) {
- ailp->xa_log_flush = 0;
+ ailp->ail_log_flush = 0;
XFS_STATS_INC(mp, xs_push_ail_flush);
xfs_log_force(mp, XFS_LOG_SYNC);
}
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
- /* barrier matches the xa_target update in xfs_ail_push() */
+ /* barrier matches the ail_target update in xfs_ail_push() */
smp_rmb();
- target = ailp->xa_target;
- ailp->xa_target_prev = target;
+ target = ailp->ail_target;
+ ailp->ail_target_prev = target;
- lip = xfs_trans_ail_cursor_first(ailp, &cur, ailp->xa_last_pushed_lsn);
+ lip = xfs_trans_ail_cursor_first(ailp, &cur, ailp->ail_last_pushed_lsn);
if (!lip) {
/*
* If the AIL is empty or our push has reached the end we are
* done now.
*/
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
goto out_done;
}
@@ -404,7 +404,7 @@ xfsaild_push(
XFS_STATS_INC(mp, xs_push_ail_success);
trace_xfs_ail_push(lip);
- ailp->xa_last_pushed_lsn = lsn;
+ ailp->ail_last_pushed_lsn = lsn;
break;
case XFS_ITEM_FLUSHING:
@@ -423,7 +423,7 @@ xfsaild_push(
trace_xfs_ail_flushing(lip);
flushing++;
- ailp->xa_last_pushed_lsn = lsn;
+ ailp->ail_last_pushed_lsn = lsn;
break;
case XFS_ITEM_PINNED:
@@ -431,7 +431,7 @@ xfsaild_push(
trace_xfs_ail_pinned(lip);
stuck++;
- ailp->xa_log_flush++;
+ ailp->ail_log_flush++;
break;
case XFS_ITEM_LOCKED:
XFS_STATS_INC(mp, xs_push_ail_locked);
@@ -468,10 +468,10 @@ xfsaild_push(
lsn = lip->li_lsn;
}
xfs_trans_ail_cursor_done(&cur);
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
- if (xfs_buf_delwri_submit_nowait(&ailp->xa_buf_list))
- ailp->xa_log_flush++;
+ if (xfs_buf_delwri_submit_nowait(&ailp->ail_buf_list))
+ ailp->ail_log_flush++;
if (!count || XFS_LSN_CMP(lsn, target) >= 0) {
out_done:
@@ -481,7 +481,7 @@ xfsaild_push(
* AIL before we start the next scan from the start of the AIL.
*/
tout = 50;
- ailp->xa_last_pushed_lsn = 0;
+ ailp->ail_last_pushed_lsn = 0;
} else if (((stuck + flushing) * 100) / count > 90) {
/*
* Either there is a lot of contention on the AIL or we are
@@ -494,7 +494,7 @@ xfsaild_push(
* the restart to issue a log force to unpin the stuck items.
*/
tout = 20;
- ailp->xa_last_pushed_lsn = 0;
+ ailp->ail_last_pushed_lsn = 0;
} else {
/*
* Assume we have more work to do in a short while.
@@ -536,26 +536,26 @@ xfsaild(
break;
}
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
/*
* Idle if the AIL is empty and we are not racing with a target
* update. We check the AIL after we set the task to a sleep
- * state to guarantee that we either catch an xa_target update
+ * state to guarantee that we either catch an ail_target update
* or that a wake_up resets the state to TASK_RUNNING.
* Otherwise, we run the risk of sleeping indefinitely.
*
- * The barrier matches the xa_target update in xfs_ail_push().
+ * The barrier matches the ail_target update in xfs_ail_push().
*/
smp_rmb();
if (!xfs_ail_min(ailp) &&
- ailp->xa_target == ailp->xa_target_prev) {
- spin_unlock(&ailp->xa_lock);
+ ailp->ail_target == ailp->ail_target_prev) {
+ spin_unlock(&ailp->ail_lock);
freezable_schedule();
tout = 0;
continue;
}
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
if (tout)
freezable_schedule_timeout(msecs_to_jiffies(tout));
@@ -592,8 +592,8 @@ xfs_ail_push(
xfs_log_item_t *lip;
lip = xfs_ail_min(ailp);
- if (!lip || XFS_FORCED_SHUTDOWN(ailp->xa_mount) ||
- XFS_LSN_CMP(threshold_lsn, ailp->xa_target) <= 0)
+ if (!lip || XFS_FORCED_SHUTDOWN(ailp->ail_mount) ||
+ XFS_LSN_CMP(threshold_lsn, ailp->ail_target) <= 0)
return;
/*
@@ -601,10 +601,10 @@ xfs_ail_push(
* the XFS_AIL_PUSHING_BIT.
*/
smp_wmb();
- xfs_trans_ail_copy_lsn(ailp, &ailp->xa_target, &threshold_lsn);
+ xfs_trans_ail_copy_lsn(ailp, &ailp->ail_target, &threshold_lsn);
smp_wmb();
- wake_up_process(ailp->xa_task);
+ wake_up_process(ailp->ail_task);
}
/*
@@ -630,18 +630,18 @@ xfs_ail_push_all_sync(
struct xfs_log_item *lip;
DEFINE_WAIT(wait);
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
while ((lip = xfs_ail_max(ailp)) != NULL) {
- prepare_to_wait(&ailp->xa_empty, &wait, TASK_UNINTERRUPTIBLE);
- ailp->xa_target = lip->li_lsn;
- wake_up_process(ailp->xa_task);
- spin_unlock(&ailp->xa_lock);
+ prepare_to_wait(&ailp->ail_empty, &wait, TASK_UNINTERRUPTIBLE);
+ ailp->ail_target = lip->li_lsn;
+ wake_up_process(ailp->ail_task);
+ spin_unlock(&ailp->ail_lock);
schedule();
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
}
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
- finish_wait(&ailp->xa_empty, &wait);
+ finish_wait(&ailp->ail_empty, &wait);
}
/*
@@ -672,7 +672,7 @@ xfs_trans_ail_update_bulk(
struct xfs_ail_cursor *cur,
struct xfs_log_item **log_items,
int nr_items,
- xfs_lsn_t lsn) __releases(ailp->xa_lock)
+ xfs_lsn_t lsn) __releases(ailp->ail_lock)
{
xfs_log_item_t *mlip;
int mlip_changed = 0;
@@ -705,13 +705,13 @@ xfs_trans_ail_update_bulk(
xfs_ail_splice(ailp, cur, &tmp, lsn);
if (mlip_changed) {
- if (!XFS_FORCED_SHUTDOWN(ailp->xa_mount))
- xlog_assign_tail_lsn_locked(ailp->xa_mount);
- spin_unlock(&ailp->xa_lock);
+ if (!XFS_FORCED_SHUTDOWN(ailp->ail_mount))
+ xlog_assign_tail_lsn_locked(ailp->ail_mount);
+ spin_unlock(&ailp->ail_lock);
- xfs_log_space_wake(ailp->xa_mount);
+ xfs_log_space_wake(ailp->ail_mount);
} else {
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
}
@@ -756,13 +756,13 @@ void
xfs_trans_ail_delete(
struct xfs_ail *ailp,
struct xfs_log_item *lip,
- int shutdown_type) __releases(ailp->xa_lock)
+ int shutdown_type) __releases(ailp->ail_lock)
{
- struct xfs_mount *mp = ailp->xa_mount;
+ struct xfs_mount *mp = ailp->ail_mount;
bool mlip_changed;
if (!(lip->li_flags & XFS_LI_IN_AIL)) {
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
if (!XFS_FORCED_SHUTDOWN(mp)) {
xfs_alert_tag(mp, XFS_PTAG_AILDELETE,
"%s: attempting to delete a log item that is not in the AIL",
@@ -776,13 +776,13 @@ xfs_trans_ail_delete(
if (mlip_changed) {
if (!XFS_FORCED_SHUTDOWN(mp))
xlog_assign_tail_lsn_locked(mp);
- if (list_empty(&ailp->xa_ail))
- wake_up_all(&ailp->xa_empty);
+ if (list_empty(&ailp->ail_head))
+ wake_up_all(&ailp->ail_empty);
}
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
if (mlip_changed)
- xfs_log_space_wake(ailp->xa_mount);
+ xfs_log_space_wake(ailp->ail_mount);
}
int
@@ -795,16 +795,16 @@ xfs_trans_ail_init(
if (!ailp)
return -ENOMEM;
- ailp->xa_mount = mp;
- INIT_LIST_HEAD(&ailp->xa_ail);
- INIT_LIST_HEAD(&ailp->xa_cursors);
- spin_lock_init(&ailp->xa_lock);
- INIT_LIST_HEAD(&ailp->xa_buf_list);
- init_waitqueue_head(&ailp->xa_empty);
+ ailp->ail_mount = mp;
+ INIT_LIST_HEAD(&ailp->ail_head);
+ INIT_LIST_HEAD(&ailp->ail_cursors);
+ spin_lock_init(&ailp->ail_lock);
+ INIT_LIST_HEAD(&ailp->ail_buf_list);
+ init_waitqueue_head(&ailp->ail_empty);
- ailp->xa_task = kthread_run(xfsaild, ailp, "xfsaild/%s",
- ailp->xa_mount->m_fsname);
- if (IS_ERR(ailp->xa_task))
+ ailp->ail_task = kthread_run(xfsaild, ailp, "xfsaild/%s",
+ ailp->ail_mount->m_fsname);
+ if (IS_ERR(ailp->ail_task))
goto out_free_ailp;
mp->m_ail = ailp;
@@ -821,6 +821,6 @@ xfs_trans_ail_destroy(
{
struct xfs_ail *ailp = mp->m_ail;
- kthread_stop(ailp->xa_task);
+ kthread_stop(ailp->ail_task);
kmem_free(ailp);
}
diff --git a/fs/xfs/xfs_trans_buf.c b/fs/xfs/xfs_trans_buf.c
index 653ce379d36b..a5d9dfc45d98 100644
--- a/fs/xfs/xfs_trans_buf.c
+++ b/fs/xfs/xfs_trans_buf.c
@@ -431,8 +431,8 @@ xfs_trans_brelse(
* If the fs has shutdown and we dropped the last reference, it may fall
* on us to release a (possibly dirty) bli if it never made it to the
* AIL (e.g., the aborted unpin already happened and didn't release it
- * due to our reference). Since we're already shutdown and need xa_lock,
- * just force remove from the AIL and release the bli here.
+ * due to our reference). Since we're already shutdown and need
+ * ail_lock, just force remove from the AIL and release the bli here.
*/
if (XFS_FORCED_SHUTDOWN(tp->t_mountp) && freed) {
xfs_trans_ail_remove(&bip->bli_item, SHUTDOWN_LOG_IO_ERROR);
diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h
index b317a3644c00..be24b0c8a332 100644
--- a/fs/xfs/xfs_trans_priv.h
+++ b/fs/xfs/xfs_trans_priv.h
@@ -65,17 +65,17 @@ struct xfs_ail_cursor {
* Eventually we need to drive the locking in here as well.
*/
struct xfs_ail {
- struct xfs_mount *xa_mount;
- struct task_struct *xa_task;
- struct list_head xa_ail;
- xfs_lsn_t xa_target;
- xfs_lsn_t xa_target_prev;
- struct list_head xa_cursors;
- spinlock_t xa_lock;
- xfs_lsn_t xa_last_pushed_lsn;
- int xa_log_flush;
- struct list_head xa_buf_list;
- wait_queue_head_t xa_empty;
+ struct xfs_mount *ail_mount;
+ struct task_struct *ail_task;
+ struct list_head ail_head;
+ xfs_lsn_t ail_target;
+ xfs_lsn_t ail_target_prev;
+ struct list_head ail_cursors;
+ spinlock_t ail_lock;
+ xfs_lsn_t ail_last_pushed_lsn;
+ int ail_log_flush;
+ struct list_head ail_buf_list;
+ wait_queue_head_t ail_empty;
};
/*
@@ -84,7 +84,7 @@ struct xfs_ail {
void xfs_trans_ail_update_bulk(struct xfs_ail *ailp,
struct xfs_ail_cursor *cur,
struct xfs_log_item **log_items, int nr_items,
- xfs_lsn_t lsn) __releases(ailp->xa_lock);
+ xfs_lsn_t lsn) __releases(ailp->ail_lock);
/*
* Return a pointer to the first item in the AIL. If the AIL is empty, then
* return NULL.
@@ -93,7 +93,7 @@ static inline struct xfs_log_item *
xfs_ail_min(
struct xfs_ail *ailp)
{
- return list_first_entry_or_null(&ailp->xa_ail, struct xfs_log_item,
+ return list_first_entry_or_null(&ailp->ail_head, struct xfs_log_item,
li_ail);
}
@@ -101,14 +101,14 @@ static inline void
xfs_trans_ail_update(
struct xfs_ail *ailp,
struct xfs_log_item *lip,
- xfs_lsn_t lsn) __releases(ailp->xa_lock)
+ xfs_lsn_t lsn) __releases(ailp->ail_lock)
{
xfs_trans_ail_update_bulk(ailp, NULL, &lip, 1, lsn);
}
bool xfs_ail_delete_one(struct xfs_ail *ailp, struct xfs_log_item *lip);
void xfs_trans_ail_delete(struct xfs_ail *ailp, struct xfs_log_item *lip,
- int shutdown_type) __releases(ailp->xa_lock);
+ int shutdown_type) __releases(ailp->ail_lock);
static inline void
xfs_trans_ail_remove(
@@ -117,12 +117,12 @@ xfs_trans_ail_remove(
{
struct xfs_ail *ailp = lip->li_ailp;
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
/* xfs_trans_ail_delete() drops the AIL lock */
if (lip->li_flags & XFS_LI_IN_AIL)
xfs_trans_ail_delete(ailp, lip, shutdown_type);
else
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
void xfs_ail_push(struct xfs_ail *, xfs_lsn_t);
@@ -149,9 +149,9 @@ xfs_trans_ail_copy_lsn(
xfs_lsn_t *src)
{
ASSERT(sizeof(xfs_lsn_t) == 8); /* don't lock if it shrinks */
- spin_lock(&ailp->xa_lock);
+ spin_lock(&ailp->ail_lock);
*dst = *src;
- spin_unlock(&ailp->xa_lock);
+ spin_unlock(&ailp->ail_lock);
}
#else
static inline void
@@ -172,7 +172,7 @@ xfs_clear_li_failed(
struct xfs_buf *bp = lip->li_buf;
ASSERT(lip->li_flags & XFS_LI_IN_AIL);
- lockdep_assert_held(&lip->li_ailp->xa_lock);
+ lockdep_assert_held(&lip->li_ailp->ail_lock);
if (lip->li_flags & XFS_LI_FAILED) {
lip->li_flags &= ~XFS_LI_FAILED;
@@ -186,7 +186,7 @@ xfs_set_li_failed(
struct xfs_log_item *lip,
struct xfs_buf *bp)
{
- lockdep_assert_held(&lip->li_ailp->xa_lock);
+ lockdep_assert_held(&lip->li_ailp->ail_lock);
if (!(lip->li_flags & XFS_LI_FAILED)) {
xfs_buf_hold(bp);
--
2.16.1
From: Matthew Wilcox <[email protected]>
None of these four bits may be used for slab allocations, so we can
use them for flags as long as we mask them off before passing them
to the slab allocator. Move the IDR flag from the top bits to the
bottom bits.
Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/idr.h | 3 ++-
include/linux/radix-tree.h | 7 ++++---
lib/radix-tree.c | 3 ++-
tools/testing/radix-tree/linux/gfp.h | 1 +
4 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/linux/idr.h b/include/linux/idr.h
index 7d6a6313f0ab..913c335054f0 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -29,7 +29,8 @@ struct idr {
#define IDR_FREE 0
/* Set the IDR flag and the IDR_FREE tag */
-#define IDR_RT_MARKER ((__force gfp_t)(3 << __GFP_BITS_SHIFT))
+#define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
+ (1 << (ROOT_TAG_SHIFT + IDR_FREE)))
#define IDR_INIT_BASE(base) { \
.idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER), \
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index fc55ff31eca7..6c4e2e716dac 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -104,9 +104,10 @@ struct radix_tree_node {
unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
};
-/* The top bits of gfp_mask are used to store the root tags and the IDR flag */
-#define ROOT_IS_IDR ((__force gfp_t)(1 << __GFP_BITS_SHIFT))
-#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT + 1)
+/* The IDR tag is stored in the low bits of the GFP flags */
+#define ROOT_IS_IDR ((__force gfp_t)4)
+/* The top bits of gfp_mask are used to store the root tags */
+#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
struct radix_tree_root {
gfp_t gfp_mask;
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 0a7ae3288a24..66732e2f9606 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -146,7 +146,7 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,
static inline gfp_t root_gfp_mask(const struct radix_tree_root *root)
{
- return root->gfp_mask & __GFP_BITS_MASK;
+ return root->gfp_mask & ((__GFP_BITS_MASK >> 4) << 4);
}
static inline void tag_set(struct radix_tree_node *node, unsigned int tag,
@@ -2285,6 +2285,7 @@ void __init radix_tree_init(void)
int ret;
BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
+ BUILD_BUG_ON(GFP_ZONEMASK != (__force gfp_t)15);
radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
sizeof(struct radix_tree_node), 0,
SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
diff --git a/tools/testing/radix-tree/linux/gfp.h b/tools/testing/radix-tree/linux/gfp.h
index e9fff59dfd8a..a72007d9818b 100644
--- a/tools/testing/radix-tree/linux/gfp.h
+++ b/tools/testing/radix-tree/linux/gfp.h
@@ -18,6 +18,7 @@
#define __GFP_RECLAIM (__GFP_DIRECT_RECLAIM|__GFP_KSWAPD_RECLAIM)
+#define GFP_ZONEMASK 0x0fu
#define GFP_ATOMIC (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)
--
2.16.1
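A brief aside, not part of the patch above: a self-contained sketch of the idea in the changelog -- keep private flags in the low four bits of a gfp_t and strip them before the value reaches the allocator. Every name below is made up for illustration, and it is plain userspace C rather than kernel code.

#include <stdio.h>

typedef unsigned int gfp_t;

#define ZONE_MASK	((gfp_t)0x0f)	/* low four bits: never meaningful for slab */
#define ROOT_FLAG_IDR	((gfp_t)0x04)	/* hypothetical private flag stored there */
#define GFP_ALLOC_BITS	((gfp_t)0x14c0)	/* hypothetical "real" allocation flags */

/* What would be handed to the allocator: the stored value with the low nibble cleared. */
static gfp_t alloc_mask(gfp_t stored)
{
	return stored & ~ZONE_MASK;
}

int main(void)
{
	gfp_t stored = GFP_ALLOC_BITS | ROOT_FLAG_IDR;

	printf("stored in root : %#x\n", stored);
	printf("passed to slab : %#x\n", alloc_mask(stored));
	printf("root is an IDR : %d\n", !!(stored & ROOT_FLAG_IDR));
	return 0;
}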
Matthew,
On Mon, Feb 19, 2018 at 8:45 PM, Matthew Wilcox <[email protected]> wrote:
> From: Matthew Wilcox <[email protected]>
>
> This first function in the XArray API brings with it a lot of support
> infrastructure. The advanced API is based around the xa_state which is
> a more capable version of the radix_tree_iter.
>
> As the test-suite demonstrates, it is possible to use the xarray and
> radix tree APIs on the same data structure.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
<snip>
> --- /dev/null
> +++ b/tools/testing/radix-tree/xarray-test.c
> @@ -0,0 +1,56 @@
> +/*
> + * xarray-test.c: Test the XArray API
> + * Copyright (c) 2017 Microsoft Corporation <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + */
Do you mind using SPDX tags per [1] rather than this fine but long legalese?
Unless you are a legalese lover of course.
You will also get bonus karma points if you can spread the word within
your group!
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/license-rules.rst
--
Cordially
Philippe Ombredanne
On Tue, Feb 20, 2018 at 08:34:06AM +0100, Philippe Ombredanne wrote:
> > +++ b/tools/testing/radix-tree/xarray-test.c
> > @@ -0,0 +1,56 @@
> > +/*
> > + * xarray-test.c: Test the XArray API
> > + * Copyright (c) 2017 Microsoft Corporation <[email protected]>
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> > + * more details.
> > + */
>
> Do you mind using SPDX tags per [1] rather than this fine but long legalese?
> Unless you are a legalese lover of course.
Argh, missed that one.
I'm more concerned with the documentation license, though. I didn't
get a response from you to the email I sent Feb 12, Subject: License
documentation.
On Mon, 2018-02-19 at 11:44 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> None of these four bits may be used for slab allocations, so we can
> use them for flags as long as we mask them off before passing them
> to the slab allocator. Move the IDR flag from the top bits to the
> bottom bits.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> include/linux/idr.h | 3 ++-
> include/linux/radix-tree.h | 7 ++++---
> lib/radix-tree.c | 3 ++-
> tools/testing/radix-tree/linux/gfp.h | 1 +
> 4 files changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/idr.h b/include/linux/idr.h
> index 7d6a6313f0ab..913c335054f0 100644
> --- a/include/linux/idr.h
> +++ b/include/linux/idr.h
> @@ -29,7 +29,8 @@ struct idr {
> #define IDR_FREE 0
>
> /* Set the IDR flag and the IDR_FREE tag */
> -#define IDR_RT_MARKER ((__force gfp_t)(3 << __GFP_BITS_SHIFT))
> +#define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
> + (1 << (ROOT_TAG_SHIFT + IDR_FREE)))
>
> #define IDR_INIT_BASE(base) { \
> .idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER), \
> diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
> index fc55ff31eca7..6c4e2e716dac 100644
> --- a/include/linux/radix-tree.h
> +++ b/include/linux/radix-tree.h
> @@ -104,9 +104,10 @@ struct radix_tree_node {
> unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
> };
>
> -/* The top bits of gfp_mask are used to store the root tags and the IDR flag */
> -#define ROOT_IS_IDR ((__force gfp_t)(1 << __GFP_BITS_SHIFT))
> -#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT + 1)
> +/* The IDR tag is stored in the low bits of the GFP flags */
> +#define ROOT_IS_IDR ((__force gfp_t)4)
> +/* The top bits of gfp_mask are used to store the root tags */
> +#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
>
> struct radix_tree_root {
> gfp_t gfp_mask;
> diff --git a/lib/radix-tree.c b/lib/radix-tree.c
> index 0a7ae3288a24..66732e2f9606 100644
> --- a/lib/radix-tree.c
> +++ b/lib/radix-tree.c
> @@ -146,7 +146,7 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,
>
> static inline gfp_t root_gfp_mask(const struct radix_tree_root *root)
> {
> - return root->gfp_mask & __GFP_BITS_MASK;
> + return root->gfp_mask & ((__GFP_BITS_MASK >> 4) << 4);
Maybe phrase this in terms of a constant like GFP_ZONEMASK here? Would
this be more appropriate?
root->gfp_mask & (__GFP_BITS_MASK & ~GFP_ZONEMASK);
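Either spelling clears the same low four bits once GFP_ZONEMASK is pinned to 0xf
by the BUILD_BUG_ON below; just to spell it out:

	(__GFP_BITS_MASK >> 4) << 4	/* shift the low nibble away and back */
	__GFP_BITS_MASK & ~GFP_ZONEMASK	/* clear the zone bits by name */

but the named constant documents *why* those bits are free to reuse.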
> }
> static inline void tag_set(struct radix_tree_node *node, unsigned int tag,
> @@ -2285,6 +2285,7 @@ void __init radix_tree_init(void)
> int ret;
>
> BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
> + BUILD_BUG_ON(GFP_ZONEMASK != (__force gfp_t)15);
> radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
> sizeof(struct radix_tree_node), 0,
> SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
> diff --git a/tools/testing/radix-tree/linux/gfp.h b/tools/testing/radix-tree/linux/gfp.h
> index e9fff59dfd8a..a72007d9818b 100644
> --- a/tools/testing/radix-tree/linux/gfp.h
> +++ b/tools/testing/radix-tree/linux/gfp.h
> @@ -18,6 +18,7 @@
>
> #define __GFP_RECLAIM (__GFP_DIRECT_RECLAIM|__GFP_KSWAPD_RECLAIM)
>
> +#define GFP_ZONEMASK 0x0fu
> #define GFP_ATOMIC (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
> #define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
> #define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)
--
Jeff Layton <[email protected]>
On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> XFS currently contains a copy-and-paste of __set_page_dirty(). Export
> it from buffer.c instead.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> fs/buffer.c | 3 ++-
> fs/xfs/xfs_aops.c | 15 ++-------------
> include/linux/mm.h | 1 +
> 3 files changed, 5 insertions(+), 14 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 9a73924db22f..0b487cdb7124 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -594,7 +594,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
> *
> * The caller must hold lock_page_memcg().
> */
> -static void __set_page_dirty(struct page *page, struct address_space *mapping,
> +void __set_page_dirty(struct page *page, struct address_space *mapping,
> int warn)
> {
> unsigned long flags;
> @@ -608,6 +608,7 @@ static void __set_page_dirty(struct page *page, struct address_space *mapping,
> }
> spin_unlock_irqrestore(&mapping->tree_lock, flags);
> }
> +EXPORT_SYMBOL_GPL(__set_page_dirty);
>
> /*
> * Add a page to the dirty page list.
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 9c6a830da0ee..31f2c4895a46 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -1472,19 +1472,8 @@ xfs_vm_set_page_dirty(
> newly_dirty = !TestSetPageDirty(page);
> spin_unlock(&mapping->private_lock);
>
> - if (newly_dirty) {
> - /* sigh - __set_page_dirty() is static, so copy it here, too */
> - unsigned long flags;
> -
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> - if (page->mapping) { /* Race with truncate? */
> - WARN_ON_ONCE(!PageUptodate(page));
> - account_page_dirtied(page, mapping);
> - radix_tree_tag_set(&mapping->page_tree,
> - page_index(page), PAGECACHE_TAG_DIRTY);
> - }
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> - }
> + if (newly_dirty)
> + __set_page_dirty(page, mapping, 1);
> unlock_page_memcg(page);
> if (newly_dirty)
> __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ad06d42adb1a..47b0fb0a6e41 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1454,6 +1454,7 @@ extern int try_to_release_page(struct page * page, gfp_t gfp_mask);
> extern void do_invalidatepage(struct page *page, unsigned int offset,
> unsigned int length);
>
> +void __set_page_dirty(struct page *, struct address_space *, int warn);
> int __set_page_dirty_nobuffers(struct page *page);
> int __set_page_dirty_no_writeback(struct page *page);
> int redirty_page_for_writepage(struct writeback_control *wbc,
Acked-by: Jeff Layton <[email protected]>
On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> Don't open-code accesses to data structure internals.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> fs/fscache/cookie.c | 2 +-
> fs/fscache/object.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
> index ff84258132bb..e9054e0c1a49 100644
> --- a/fs/fscache/cookie.c
> +++ b/fs/fscache/cookie.c
> @@ -608,7 +608,7 @@ void __fscache_relinquish_cookie(struct fscache_cookie *cookie, bool retire)
> /* Clear pointers back to the netfs */
> cookie->netfs_data = NULL;
> cookie->def = NULL;
> - BUG_ON(cookie->stores.rnode);
> + BUG_ON(!radix_tree_empty(&cookie->stores));
>
> if (cookie->parent) {
> ASSERTCMP(atomic_read(&cookie->parent->usage), >, 0);
> diff --git a/fs/fscache/object.c b/fs/fscache/object.c
> index 7a182c87f378..aa0e71f02c33 100644
> --- a/fs/fscache/object.c
> +++ b/fs/fscache/object.c
> @@ -956,7 +956,7 @@ static const struct fscache_state *_fscache_invalidate_object(struct fscache_obj
> * retire the object instead.
> */
> if (!fscache_use_cookie(object)) {
> - ASSERT(object->cookie->stores.rnode == NULL);
> + ASSERT(radix_tree_empty(&object->cookie->stores));
> set_bit(FSCACHE_OBJECT_RETIRED, &object->flags);
> _leave(" [no cookie]");
> return transit_to(KILL_OBJECT);
Reviewed-by: Jeff Layton <[email protected]>
On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> This results in no change in structure size on 64-bit x86 as it fits in
> the padding between the gfp_t and the void *.
>
> Initialising the spinlock requires a name for the benefit of lockdep,
> so RADIX_TREE_INIT() now needs to know the name of the radix tree it's
> initialising, and so do IDR_INIT() and IDA_INIT().
>
> Also add the xa_lock() and xa_unlock() family of wrappers to make it
> easier to use the lock. If we could rely on -fplan9-extensions in
> the compiler, we could avoid all of this syntactic sugar, but that
> wasn't added until gcc 4.6.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> fs/f2fs/gc.c | 2 +-
> include/linux/idr.h | 19 ++++++++++---------
> include/linux/radix-tree.h | 7 +++++--
> include/linux/xarray.h | 24 ++++++++++++++++++++++++
> kernel/pid.c | 2 +-
> tools/include/linux/spinlock.h | 1 +
> 6 files changed, 42 insertions(+), 13 deletions(-)
> create mode 100644 include/linux/xarray.h
>
> diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
> index aa720cc44509..7aa15134180e 100644
> --- a/fs/f2fs/gc.c
> +++ b/fs/f2fs/gc.c
> @@ -1006,7 +1006,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
> unsigned int init_segno = segno;
> struct gc_inode_list gc_list = {
> .ilist = LIST_HEAD_INIT(gc_list.ilist),
> - .iroot = RADIX_TREE_INIT(GFP_NOFS),
> + .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
> };
>
> trace_f2fs_gc_begin(sbi->sb, sync, background,
> diff --git a/include/linux/idr.h b/include/linux/idr.h
> index 913c335054f0..e856f4e0ab35 100644
> --- a/include/linux/idr.h
> +++ b/include/linux/idr.h
> @@ -32,27 +32,28 @@ struct idr {
> #define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
> (1 << (ROOT_TAG_SHIFT + IDR_FREE)))
>
> -#define IDR_INIT_BASE(base) { \
> - .idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER), \
> +#define IDR_INIT_BASE(name, base) { \
> + .idr_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER), \
> .idr_base = (base), \
> .idr_next = 0, \
> }
>
> /**
> * IDR_INIT() - Initialise an IDR.
> + * @name: Name of IDR.
> *
> * A freshly-initialised IDR contains no IDs.
> */
> -#define IDR_INIT IDR_INIT_BASE(0)
> +#define IDR_INIT(name) IDR_INIT_BASE(name, 0)
>
> /**
> - * DEFINE_IDR() - Define a statically-allocated IDR
> - * @name: Name of IDR
> + * DEFINE_IDR() - Define a statically-allocated IDR.
> + * @name: Name of IDR.
> *
> * An IDR defined using this macro is ready for use with no additional
> * initialisation required. It contains no IDs.
> */
> -#define DEFINE_IDR(name) struct idr name = IDR_INIT
> +#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)
>
> /**
> * idr_get_cursor - Return the current position of the cyclic allocator
> @@ -219,10 +220,10 @@ struct ida {
> struct radix_tree_root ida_rt;
> };
>
> -#define IDA_INIT { \
> - .ida_rt = RADIX_TREE_INIT(IDR_RT_MARKER | GFP_NOWAIT), \
> +#define IDA_INIT(name) { \
> + .ida_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER | GFP_NOWAIT), \
> }
> -#define DEFINE_IDA(name) struct ida name = IDA_INIT
> +#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)
>
> int ida_pre_get(struct ida *ida, gfp_t gfp_mask);
> int ida_get_new_above(struct ida *ida, int starting_id, int *p_id);
> diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
> index 6c4e2e716dac..34149e8b5f73 100644
> --- a/include/linux/radix-tree.h
> +++ b/include/linux/radix-tree.h
> @@ -110,20 +110,23 @@ struct radix_tree_node {
> #define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
>
> struct radix_tree_root {
> + spinlock_t xa_lock;
> gfp_t gfp_mask;
> struct radix_tree_node __rcu *rnode;
> };
>
> -#define RADIX_TREE_INIT(mask) { \
> +#define RADIX_TREE_INIT(name, mask) { \
> + .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
> .gfp_mask = (mask), \
> .rnode = NULL, \
> }
>
> #define RADIX_TREE(name, mask) \
> - struct radix_tree_root name = RADIX_TREE_INIT(mask)
> + struct radix_tree_root name = RADIX_TREE_INIT(name, mask)
>
> #define INIT_RADIX_TREE(root, mask) \
> do { \
> + spin_lock_init(&(root)->xa_lock); \
> (root)->gfp_mask = (mask); \
> (root)->rnode = NULL; \
> } while (0)
> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
> new file mode 100644
> index 000000000000..2dfc8006fe64
> --- /dev/null
> +++ b/include/linux/xarray.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#ifndef _LINUX_XARRAY_H
> +#define _LINUX_XARRAY_H
> +/*
> + * eXtensible Arrays
> + * Copyright (c) 2017 Microsoft Corporation
> + * Author: Matthew Wilcox <[email protected]>
> + */
> +
> +#include <linux/spinlock.h>
> +
> +#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
> +#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
> +#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
> +#define xa_lock_bh(xa) spin_lock_bh(&(xa)->xa_lock)
> +#define xa_unlock_bh(xa) spin_unlock_bh(&(xa)->xa_lock)
> +#define xa_lock_irq(xa) spin_lock_irq(&(xa)->xa_lock)
> +#define xa_unlock_irq(xa) spin_unlock_irq(&(xa)->xa_lock)
> +#define xa_lock_irqsave(xa, flags) \
> + spin_lock_irqsave(&(xa)->xa_lock, flags)
> +#define xa_unlock_irqrestore(xa, flags) \
> + spin_unlock_irqrestore(&(xa)->xa_lock, flags)
> +
> +#endif /* _LINUX_XARRAY_H */
> diff --git a/kernel/pid.c b/kernel/pid.c
> index ed6c343fe50d..157fe4b19971 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -70,7 +70,7 @@ int pid_max_max = PID_MAX_LIMIT;
> */
> struct pid_namespace init_pid_ns = {
> .kref = KREF_INIT(2),
> - .idr = IDR_INIT,
> + .idr = IDR_INIT(init_pid_ns.idr),
> .pid_allocated = PIDNS_ADDING,
> .level = 0,
> .child_reaper = &init_task,
> diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
> index 4ed569fcb139..b21b586b9854 100644
> --- a/tools/include/linux/spinlock.h
> +++ b/tools/include/linux/spinlock.h
> @@ -7,6 +7,7 @@
>
> #define spinlock_t pthread_mutex_t
> #define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
> +#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
>
> #define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
> #define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
Looks sane enough.
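FWIW the nice part for callers is that the lock now travels with the root, so a
minimal (made-up) user needs nothing beyond the wrappers quoted above:

	struct radix_tree_root pages;
	unsigned long flags;

	INIT_RADIX_TREE(&pages, GFP_NOFS);

	xa_lock_irqsave(&pages, flags);
	/* ... insert / tag / delete entries under the embedded lock ... */
	xa_unlock_irqrestore(&pages, flags);

rather than declaring and initialising a separate spinlock next to every tree.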
Reviewed-by: Jeff Layton <[email protected]>
On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> Remove the address_space ->tree_lock and use the xa_lock newly added to
> the radix_tree_root. Rename the address_space ->page_tree to ->pages,
> since we don't really care that it's a tree. Take the opportunity to
> rearrange the elements of address_space to pack them better on 64-bit,
> and make the comments more useful.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> Documentation/cgroup-v1/memory.txt | 2 +-
> Documentation/vm/page_migration | 14 +--
> arch/arm/include/asm/cacheflush.h | 6 +-
> arch/nios2/include/asm/cacheflush.h | 6 +-
> arch/parisc/include/asm/cacheflush.h | 6 +-
> drivers/staging/lustre/lustre/llite/glimpse.c | 2 +-
> drivers/staging/lustre/lustre/mdc/mdc_request.c | 8 +-
> fs/afs/write.c | 9 +-
> fs/btrfs/compression.c | 2 +-
> fs/btrfs/extent_io.c | 16 +--
> fs/btrfs/inode.c | 2 +-
> fs/buffer.c | 13 ++-
> fs/cifs/file.c | 9 +-
> fs/dax.c | 123 ++++++++++++------------
> fs/f2fs/data.c | 6 +-
> fs/f2fs/dir.c | 6 +-
> fs/f2fs/inline.c | 6 +-
> fs/f2fs/node.c | 8 +-
> fs/fs-writeback.c | 20 ++--
> fs/inode.c | 11 +--
> fs/nilfs2/btnode.c | 20 ++--
> fs/nilfs2/page.c | 22 ++---
> include/linux/backing-dev.h | 12 +--
> include/linux/fs.h | 17 ++--
> include/linux/mm.h | 2 +-
> include/linux/pagemap.h | 4 +-
> mm/filemap.c | 84 ++++++++--------
> mm/huge_memory.c | 10 +-
> mm/khugepaged.c | 49 +++++-----
> mm/memcontrol.c | 4 +-
> mm/migrate.c | 32 +++---
> mm/page-writeback.c | 42 ++++----
> mm/readahead.c | 2 +-
> mm/rmap.c | 4 +-
> mm/shmem.c | 60 ++++++------
> mm/swap_state.c | 17 ++--
> mm/truncate.c | 22 ++---
> mm/vmscan.c | 12 +--
> mm/workingset.c | 22 ++---
> 39 files changed, 344 insertions(+), 368 deletions(-)
>
> diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
> index a4af2e124e24..e8ed4c2c2e9c 100644
> --- a/Documentation/cgroup-v1/memory.txt
> +++ b/Documentation/cgroup-v1/memory.txt
> @@ -262,7 +262,7 @@ When oom event notifier is registered, event will be delivered.
> 2.6 Locking
>
> lock_page_cgroup()/unlock_page_cgroup() should not be called under
> - mapping->tree_lock.
> + the mapping's xa_lock.
>
> Other lock order is following:
> PG_locked.
> diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration
> index 0478ae2ad44a..faf849596a85 100644
> --- a/Documentation/vm/page_migration
> +++ b/Documentation/vm/page_migration
> @@ -90,7 +90,7 @@ Steps:
>
> 1. Lock the page to be migrated
>
> -2. Insure that writeback is complete.
> +2. Ensure that writeback is complete.
>
> 3. Lock the new page that we want to move to. It is locked so that accesses to
> this (not yet uptodate) page immediately lock while the move is in progress.
> @@ -100,8 +100,8 @@ Steps:
> mapcount is not zero then we do not migrate the page. All user space
> processes that attempt to access the page will now wait on the page lock.
>
> -5. The radix tree lock is taken. This will cause all processes trying
> - to access the page via the mapping to block on the radix tree spinlock.
> +5. The address space xa_lock is taken. This will cause all processes trying
> + to access the page via the mapping to block on the spinlock.
>
> 6. The refcount of the page is examined and we back out if references remain
> otherwise we know that we are the only one referencing this page.
> @@ -114,12 +114,12 @@ Steps:
>
> 9. The radix tree is changed to point to the new page.
>
> -10. The reference count of the old page is dropped because the radix tree
> +10. The reference count of the old page is dropped because the address space
> reference is gone. A reference to the new page is established because
> - the new page is referenced to by the radix tree.
> + the new page is referenced by the address space.
>
> -11. The radix tree lock is dropped. With that lookups in the mapping
> - become possible again. Processes will move from spinning on the tree_lock
> +11. The address space xa_lock is dropped. With that lookups in the mapping
> + become possible again. Processes will move from spinning on the xa_lock
> to sleeping on the locked new page.
>
> 12. The page contents are copied to the new page.
> diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
> index 74504b154256..f4ead9a74b7d 100644
> --- a/arch/arm/include/asm/cacheflush.h
> +++ b/arch/arm/include/asm/cacheflush.h
> @@ -318,10 +318,8 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
> #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
> extern void flush_kernel_dcache_page(struct page *);
>
> -#define flush_dcache_mmap_lock(mapping) \
> - spin_lock_irq(&(mapping)->tree_lock)
> -#define flush_dcache_mmap_unlock(mapping) \
> - spin_unlock_irq(&(mapping)->tree_lock)
> +#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
> +#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
>
> #define flush_icache_user_range(vma,page,addr,len) \
> flush_dcache_page(page)
> diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
> index 55e383c173f7..7a6eda381964 100644
> --- a/arch/nios2/include/asm/cacheflush.h
> +++ b/arch/nios2/include/asm/cacheflush.h
> @@ -46,9 +46,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> extern void flush_dcache_range(unsigned long start, unsigned long end);
> extern void invalidate_dcache_range(unsigned long start, unsigned long end);
>
> -#define flush_dcache_mmap_lock(mapping) \
> - spin_lock_irq(&(mapping)->tree_lock)
> -#define flush_dcache_mmap_unlock(mapping) \
> - spin_unlock_irq(&(mapping)->tree_lock)
> +#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
> +#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
>
> #endif /* _ASM_NIOS2_CACHEFLUSH_H */
> diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
> index 3742508cc534..b772dd320118 100644
> --- a/arch/parisc/include/asm/cacheflush.h
> +++ b/arch/parisc/include/asm/cacheflush.h
> @@ -54,10 +54,8 @@ void invalidate_kernel_vmap_range(void *vaddr, int size);
> #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> extern void flush_dcache_page(struct page *page);
>
> -#define flush_dcache_mmap_lock(mapping) \
> - spin_lock_irq(&(mapping)->tree_lock)
> -#define flush_dcache_mmap_unlock(mapping) \
> - spin_unlock_irq(&(mapping)->tree_lock)
> +#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
> +#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)
>
> #define flush_icache_page(vma,page) do { \
> flush_kernel_dcache_page(page); \
> diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
> index c43ac574274c..5f2843da911c 100644
> --- a/drivers/staging/lustre/lustre/llite/glimpse.c
> +++ b/drivers/staging/lustre/lustre/llite/glimpse.c
> @@ -69,7 +69,7 @@ blkcnt_t dirty_cnt(struct inode *inode)
> void *results[1];
>
> if (inode->i_mapping)
> - cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->page_tree,
> + cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->pages,
> results, 0, 1,
> PAGECACHE_TAG_DIRTY);
> if (cnt == 0 && atomic_read(&vob->vob_mmap_cnt) > 0)
> diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
> index 03e55bca4ada..45dcf9f958d4 100644
> --- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
> +++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
> @@ -937,14 +937,14 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
> struct page *page;
> int found;
>
> - spin_lock_irq(&mapping->tree_lock);
> - found = radix_tree_gang_lookup(&mapping->page_tree,
> + xa_lock_irq(&mapping->pages);
> + found = radix_tree_gang_lookup(&mapping->pages,
> (void **)&page, offset, 1);
> if (found > 0 && !radix_tree_exceptional_entry(page)) {
> struct lu_dirpage *dp;
>
> get_page(page);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> /*
> * In contrast to find_lock_page() we are sure that directory
> * page cannot be truncated (while DLM lock is held) and,
> @@ -992,7 +992,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
> page = ERR_PTR(-EIO);
> }
> } else {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> page = NULL;
> }
> return page;
> diff --git a/fs/afs/write.c b/fs/afs/write.c
> index 9370e2feb999..603d2ce48dbb 100644
> --- a/fs/afs/write.c
> +++ b/fs/afs/write.c
> @@ -570,10 +570,11 @@ static int afs_writepages_region(struct address_space *mapping,
>
> _debug("wback %lx", page->index);
>
> - /* at this point we hold neither mapping->tree_lock nor lock on
> - * the page itself: the page may be truncated or invalidated
> - * (changing page->mapping to NULL), or even swizzled back from
> - * swapper_space to tmpfs file mapping
> + /*
> + * at this point we hold neither the xa_lock nor the
> + * page lock: the page may be truncated or invalidated
> + * (changing page->mapping to NULL), or even swizzled
> + * back from swapper_space to tmpfs file mapping
> */
> ret = lock_page_killable(page);
> if (ret < 0) {
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 07d049c0c20f..0e35aa6aa2f1 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -458,7 +458,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> break;
>
> rcu_read_lock();
> - page = radix_tree_lookup(&mapping->page_tree, pg_index);
> + page = radix_tree_lookup(&mapping->pages, pg_index);
> rcu_read_unlock();
> if (page && !radix_tree_exceptional_entry(page)) {
> misses++;
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index dfeb74a0be77..1f2739702518 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3958,11 +3958,11 @@ static int extent_write_cache_pages(struct address_space *mapping,
>
> done_index = page->index;
> /*
> - * At this point we hold neither mapping->tree_lock nor
> - * lock on the page itself: the page may be truncated or
> - * invalidated (changing page->mapping to NULL), or even
> - * swizzled back from swapper_space to tmpfs file
> - * mapping
> + * At this point we hold neither the xa_lock nor
> + * the page lock: the page may be truncated or
> + * invalidated (changing page->mapping to NULL),
> + * or even swizzled back from swapper_space to
> + * tmpfs file mapping
> */
> if (!trylock_page(page)) {
> flush_write_bio(epd);
> @@ -5169,13 +5169,13 @@ void clear_extent_buffer_dirty(struct extent_buffer *eb)
> WARN_ON(!PagePrivate(page));
>
> clear_page_dirty_for_io(page);
> - spin_lock_irq(&page->mapping->tree_lock);
> + xa_lock_irq(&page->mapping->pages);
> if (!PageDirty(page)) {
> - radix_tree_tag_clear(&page->mapping->page_tree,
> + radix_tree_tag_clear(&page->mapping->pages,
> page_index(page),
> PAGECACHE_TAG_DIRTY);
> }
> - spin_unlock_irq(&page->mapping->tree_lock);
> + xa_unlock_irq(&page->mapping->pages);
> ClearPageError(page);
> unlock_page(page);
> }
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 53ca025655fc..d0016c1c7b04 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -7427,7 +7427,7 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
>
> bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
> {
> - struct radix_tree_root *root = &inode->i_mapping->page_tree;
> + struct radix_tree_root *root = &inode->i_mapping->pages;
> bool found = false;
> void **pagep = NULL;
> struct page *page = NULL;
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 0b487cdb7124..692ee249fb6a 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -185,10 +185,9 @@ EXPORT_SYMBOL(end_buffer_write_sync);
> * we get exclusion from try_to_free_buffers with the blockdev mapping's
> * private_lock.
> *
> - * Hack idea: for the blockdev mapping, i_bufferlist_lock contention
> + * Hack idea: for the blockdev mapping, private_lock contention
> * may be quite high. This code could TryLock the page, and if that
> - * succeeds, there is no need to take private_lock. (But if
> - * private_lock is contended then so is mapping->tree_lock).
> + * succeeds, there is no need to take private_lock.
> */
> static struct buffer_head *
> __find_get_block_slow(struct block_device *bdev, sector_t block)
> @@ -599,14 +598,14 @@ void __set_page_dirty(struct page *page, struct address_space *mapping,
> {
> unsigned long flags;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> if (page->mapping) { /* Race with truncate? */
> WARN_ON_ONCE(warn && !PageUptodate(page));
> account_page_dirtied(page, mapping);
> - radix_tree_tag_set(&mapping->page_tree,
> + radix_tree_tag_set(&mapping->pages,
> page_index(page), PAGECACHE_TAG_DIRTY);
> }
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> }
> EXPORT_SYMBOL_GPL(__set_page_dirty);
>
> @@ -1096,7 +1095,7 @@ __getblk_slow(struct block_device *bdev, sector_t block,
> * inode list.
> *
> * mark_buffer_dirty() is atomic. It takes bh->b_page->mapping->private_lock,
> - * mapping->tree_lock and mapping->host->i_lock.
> + * mapping xa_lock and mapping->host->i_lock.
> */
> void mark_buffer_dirty(struct buffer_head *bh)
> {
> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> index 7cee97b93a61..a6ace9ac4d94 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -1987,11 +1987,10 @@ wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
> for (i = 0; i < found_pages; i++) {
> page = wdata->pages[i];
> /*
> - * At this point we hold neither mapping->tree_lock nor
> - * lock on the page itself: the page may be truncated or
> - * invalidated (changing page->mapping to NULL), or even
> - * swizzled back from swapper_space to tmpfs file
> - * mapping
> + * At this point we hold neither the xa_lock nor the
> + * page lock: the page may be truncated or invalidated
> + * (changing page->mapping to NULL), or even swizzled
> + * back from swapper_space to tmpfs file mapping
> */
>
> if (nr_pages == 0)
> diff --git a/fs/dax.c b/fs/dax.c
> index 0276df90e86c..cac580399ed4 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -159,11 +159,9 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
> }
>
> /*
> - * We do not necessarily hold the mapping->tree_lock when we call this
> - * function so it is possible that 'entry' is no longer a valid item in the
> - * radix tree. This is okay because all we really need to do is to find the
> - * correct waitqueue where tasks might be waiting for that old 'entry' and
> - * wake them.
> + * @entry may no longer be the entry at the index in the mapping.
> + * The important information it's conveying is whether the entry at
> + * this index used to be a PMD entry.
> */
> static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> pgoff_t index, void *entry, bool wake_all)
> @@ -175,7 +173,7 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
>
> /*
> * Checking for locked entry and prepare_to_wait_exclusive() happens
> - * under mapping->tree_lock, ditto for entry handling in our callers.
> + * under xa_lock, ditto for entry handling in our callers.
> * So at this point all tasks that could have seen our entry locked
> * must be in the waitqueue and the following check will see them.
> */
> @@ -184,41 +182,38 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> }
>
> /*
> - * Check whether the given slot is locked. The function must be called with
> - * mapping->tree_lock held
> + * Check whether the given slot is locked. Must be called with xa_lock held.
> */
> static inline int slot_locked(struct address_space *mapping, void **slot)
> {
> unsigned long entry = (unsigned long)
> - radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
> + radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
> return entry & RADIX_DAX_ENTRY_LOCK;
> }
>
> /*
> - * Mark the given slot is locked. The function must be called with
> - * mapping->tree_lock held
> + * Mark the given slot as locked. Must be called with xa_lock held.
> */
> static inline void *lock_slot(struct address_space *mapping, void **slot)
> {
> unsigned long entry = (unsigned long)
> - radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
> + radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
>
> entry |= RADIX_DAX_ENTRY_LOCK;
> - radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
> + radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
> return (void *)entry;
> }
>
> /*
> - * Mark the given slot is unlocked. The function must be called with
> - * mapping->tree_lock held
> + * Mark the given slot as unlocked. Must be called with xa_lock held.
> */
> static inline void *unlock_slot(struct address_space *mapping, void **slot)
> {
> unsigned long entry = (unsigned long)
> - radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
> + radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
>
> entry &= ~(unsigned long)RADIX_DAX_ENTRY_LOCK;
> - radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
> + radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
> return (void *)entry;
> }
>
> @@ -229,7 +224,7 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
> * put_locked_mapping_entry() when he locked the entry and now wants to
> * unlock it.
> *
> - * The function must be called with mapping->tree_lock held.
> + * Must be called with xa_lock held.
> */
> static void *get_unlocked_mapping_entry(struct address_space *mapping,
> pgoff_t index, void ***slotp)
> @@ -242,7 +237,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
> ewait.wait.func = wake_exceptional_entry_func;
>
> for (;;) {
> - entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
> + entry = __radix_tree_lookup(&mapping->pages, index, NULL,
> &slot);
> if (!entry ||
> WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
> @@ -255,10 +250,10 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
> wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
> prepare_to_wait_exclusive(wq, &ewait.wait,
> TASK_UNINTERRUPTIBLE);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> schedule();
> finish_wait(wq, &ewait.wait);
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> }
> }
>
> @@ -267,15 +262,15 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
> {
> void *entry, **slot;
>
> - spin_lock_irq(&mapping->tree_lock);
> - entry = __radix_tree_lookup(&mapping->page_tree, index, NULL, &slot);
> + xa_lock_irq(&mapping->pages);
> + entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
> if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
> !slot_locked(mapping, slot))) {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return;
> }
> unlock_slot(mapping, slot);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> dax_wake_mapping_entry_waiter(mapping, index, entry, false);
> }
>
> @@ -332,7 +327,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
> void *entry, **slot;
>
> restart:
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> entry = get_unlocked_mapping_entry(mapping, index, &slot);
>
> if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
> @@ -364,12 +359,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
> if (pmd_downgrade) {
> /*
> * Make sure 'entry' remains valid while we drop
> - * mapping->tree_lock.
> + * xa_lock.
> */
> entry = lock_slot(mapping, slot);
> }
>
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> /*
> * Besides huge zero pages the only other thing that gets
> * downgraded are empty entries which don't need to be
> @@ -386,26 +381,26 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
> put_locked_mapping_entry(mapping, index);
> return ERR_PTR(err);
> }
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> if (!entry) {
> /*
> - * We needed to drop the page_tree lock while calling
> + * We needed to drop the pages lock while calling
> * radix_tree_preload() and we didn't have an entry to
> * lock. See if another thread inserted an entry at
> * our index during this time.
> */
> - entry = __radix_tree_lookup(&mapping->page_tree, index,
> + entry = __radix_tree_lookup(&mapping->pages, index,
> NULL, &slot);
> if (entry) {
> radix_tree_preload_end();
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> goto restart;
> }
> }
>
> if (pmd_downgrade) {
> - radix_tree_delete(&mapping->page_tree, index);
> + radix_tree_delete(&mapping->pages, index);
> mapping->nrexceptional--;
> dax_wake_mapping_entry_waiter(mapping, index, entry,
> true);
> @@ -413,11 +408,11 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
>
> entry = dax_radix_locked_entry(0, size_flag | RADIX_DAX_EMPTY);
>
> - err = __radix_tree_insert(&mapping->page_tree, index,
> + err = __radix_tree_insert(&mapping->pages, index,
> dax_radix_order(entry), entry);
> radix_tree_preload_end();
> if (err) {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> /*
> * Our insertion of a DAX entry failed, most likely
> * because we were inserting a PMD entry and it
> @@ -430,12 +425,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
> }
> /* Good, we have inserted empty locked entry into the tree. */
> mapping->nrexceptional++;
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return entry;
> }
> entry = lock_slot(mapping, slot);
> out_unlock:
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return entry;
> }
>
> @@ -444,22 +439,22 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
> {
> int ret = 0;
> void *entry;
> - struct radix_tree_root *page_tree = &mapping->page_tree;
> + struct radix_tree_root *pages = &mapping->pages;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> entry = get_unlocked_mapping_entry(mapping, index, NULL);
> if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
> goto out;
> if (!trunc &&
> - (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
> - radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE)))
> + (radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
> + radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE)))
> goto out;
> - radix_tree_delete(page_tree, index);
> + radix_tree_delete(pages, index);
> mapping->nrexceptional--;
> ret = 1;
> out:
> put_unlocked_mapping_entry(mapping, index, entry);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return ret;
> }
> /*
> @@ -529,7 +524,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> void *entry, sector_t sector,
> unsigned long flags, bool dirty)
> {
> - struct radix_tree_root *page_tree = &mapping->page_tree;
> + struct radix_tree_root *pages = &mapping->pages;
> void *new_entry;
> pgoff_t index = vmf->pgoff;
>
> @@ -545,7 +540,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> unmap_mapping_pages(mapping, vmf->pgoff, 1, false);
> }
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> new_entry = dax_radix_locked_entry(sector, flags);
>
> if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
> @@ -561,17 +556,17 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
> void **slot;
> void *ret;
>
> - ret = __radix_tree_lookup(page_tree, index, &node, &slot);
> + ret = __radix_tree_lookup(pages, index, &node, &slot);
> WARN_ON_ONCE(ret != entry);
> - __radix_tree_replace(page_tree, node, slot,
> + __radix_tree_replace(pages, node, slot,
> new_entry, NULL);
> entry = new_entry;
> }
>
> if (dirty)
> - radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);
> + radix_tree_tag_set(pages, index, PAGECACHE_TAG_DIRTY);
>
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return entry;
> }
>
> @@ -661,7 +656,7 @@ static int dax_writeback_one(struct block_device *bdev,
> struct dax_device *dax_dev, struct address_space *mapping,
> pgoff_t index, void *entry)
> {
> - struct radix_tree_root *page_tree = &mapping->page_tree;
> + struct radix_tree_root *pages = &mapping->pages;
> void *entry2, **slot, *kaddr;
> long ret = 0, id;
> sector_t sector;
> @@ -676,7 +671,7 @@ static int dax_writeback_one(struct block_device *bdev,
> if (WARN_ON(!radix_tree_exceptional_entry(entry)))
> return -EIO;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
> /* Entry got punched out / reallocated? */
> if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
> @@ -695,7 +690,7 @@ static int dax_writeback_one(struct block_device *bdev,
> }
>
> /* Another fsync thread may have already written back this entry */
> - if (!radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE))
> + if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
> goto put_unlocked;
> /* Lock the entry to serialize with page faults */
> entry = lock_slot(mapping, slot);
> @@ -703,11 +698,11 @@ static int dax_writeback_one(struct block_device *bdev,
> * We can clear the tag now but we have to be careful so that concurrent
> * dax_writeback_one() calls for the same index cannot finish before we
> * actually flush the caches. This is achieved as the calls will look
> - * at the entry only under tree_lock and once they do that they will
> + * at the entry only under xa_lock and once they do that they will
> * see the entry locked and wait for it to unlock.
> */
> - radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_TOWRITE);
> - spin_unlock_irq(&mapping->tree_lock);
> + radix_tree_tag_clear(pages, index, PAGECACHE_TAG_TOWRITE);
> + xa_unlock_irq(&mapping->pages);
>
> /*
> * Even if dax_writeback_mapping_range() was given a wbc->range_start
> @@ -725,7 +720,7 @@ static int dax_writeback_one(struct block_device *bdev,
> goto dax_unlock;
>
> /*
> - * dax_direct_access() may sleep, so cannot hold tree_lock over
> + * dax_direct_access() may sleep, so cannot hold xa_lock over
> * its invocation.
> */
> ret = dax_direct_access(dax_dev, pgoff, size / PAGE_SIZE, &kaddr, &pfn);
> @@ -745,9 +740,9 @@ static int dax_writeback_one(struct block_device *bdev,
> * the pfn mappings are writeprotected and fault waits for mapping
> * entry lock.
> */
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_DIRTY);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> + radix_tree_tag_clear(pages, index, PAGECACHE_TAG_DIRTY);
> + xa_unlock_irq(&mapping->pages);
> trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
> dax_unlock:
> dax_read_unlock(id);
> @@ -756,7 +751,7 @@ static int dax_writeback_one(struct block_device *bdev,
>
> put_unlocked:
> put_unlocked_mapping_entry(mapping, index, entry2);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return ret;
> }
>
> @@ -1524,21 +1519,21 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
> pgoff_t index = vmf->pgoff;
> int vmf_ret, error;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> entry = get_unlocked_mapping_entry(mapping, index, &slot);
> /* Did we race with someone splitting entry or so? */
> if (!entry ||
> (pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
> (pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
> put_unlocked_mapping_entry(mapping, index, entry);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
> VM_FAULT_NOPAGE);
> return VM_FAULT_NOPAGE;
> }
> - radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
> + radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
> entry = lock_slot(mapping, slot);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> switch (pe_size) {
> case PE_SIZE_PTE:
> error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 7578ed1a85e0..4eee39befc67 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2381,12 +2381,12 @@ void f2fs_set_page_dirty_nobuffers(struct page *page)
> SetPageDirty(page);
> spin_unlock(&mapping->private_lock);
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> WARN_ON_ONCE(!PageUptodate(page));
> account_page_dirtied(page, mapping);
> - radix_tree_tag_set(&mapping->page_tree,
> + radix_tree_tag_set(&mapping->pages,
> page_index(page), PAGECACHE_TAG_DIRTY);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> unlock_page_memcg(page);
>
> __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
> diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
> index f00b5ed8c011..0fd9695eddf6 100644
> --- a/fs/f2fs/dir.c
> +++ b/fs/f2fs/dir.c
> @@ -741,10 +741,10 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
>
> if (bit_pos == NR_DENTRY_IN_BLOCK &&
> !truncate_hole(dir, page->index, page->index + 1)) {
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> - radix_tree_tag_clear(&mapping->page_tree, page_index(page),
> + xa_lock_irqsave(&mapping->pages, flags);
> + radix_tree_tag_clear(&mapping->pages, page_index(page),
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> clear_page_dirty_for_io(page);
> ClearPagePrivate(page);
> diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
> index 90e38d8ea688..7858b8e15f33 100644
> --- a/fs/f2fs/inline.c
> +++ b/fs/f2fs/inline.c
> @@ -226,10 +226,10 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
> kunmap_atomic(src_addr);
> set_page_dirty(dn.inode_page);
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> - radix_tree_tag_clear(&mapping->page_tree, page_index(page),
> + xa_lock_irqsave(&mapping->pages, flags);
> + radix_tree_tag_clear(&mapping->pages, page_index(page),
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> set_inode_flag(inode, FI_APPEND_WRITE);
> set_inode_flag(inode, FI_DATA_EXIST);
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 177c438e4a56..fba2644abdf0 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -91,11 +91,11 @@ static void clear_node_page_dirty(struct page *page)
> unsigned int long flags;
>
> if (PageDirty(page)) {
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> - radix_tree_tag_clear(&mapping->page_tree,
> + xa_lock_irqsave(&mapping->pages, flags);
> + radix_tree_tag_clear(&mapping->pages,
> page_index(page),
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> clear_page_dirty_for_io(page);
> dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
> @@ -1140,7 +1140,7 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
> f2fs_bug_on(sbi, check_nid_range(sbi, nid));
>
> rcu_read_lock();
> - apage = radix_tree_lookup(&NODE_MAPPING(sbi)->page_tree, nid);
> + apage = radix_tree_lookup(&NODE_MAPPING(sbi)->pages, nid);
> rcu_read_unlock();
> if (apage)
> return;
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index d4d04fee568a..d5c0e70dbfa8 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -347,9 +347,9 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> * By the time control reaches here, RCU grace period has passed
> * since I_WB_SWITCH assertion and all wb stat update transactions
> * between unlocked_inode_to_wb_begin/end() are guaranteed to be
> - * synchronizing against mapping->tree_lock.
> + * synchronizing against xa_lock.
> *
> - * Grabbing old_wb->list_lock, inode->i_lock and mapping->tree_lock
> + * Grabbing old_wb->list_lock, inode->i_lock and xa_lock
> * gives us exclusion against all wb related operations on @inode
> * including IO list manipulations and stat updates.
> */
> @@ -361,7 +361,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
> }
> spin_lock(&inode->i_lock);
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> /*
> * Once I_FREEING is visible under i_lock, the eviction path owns
> @@ -373,22 +373,22 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> /*
> * Count and transfer stats. Note that PAGECACHE_TAG_DIRTY points
> * to possibly dirty pages while PAGECACHE_TAG_WRITEBACK points to
> - * pages actually under underwriteback.
> + * pages actually under writeback.
> */
> - radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
> PAGECACHE_TAG_DIRTY) {
> struct page *page = radix_tree_deref_slot_protected(slot,
> - &mapping->tree_lock);
> + &mapping->pages.xa_lock);
> if (likely(page) && PageDirty(page)) {
> dec_wb_stat(old_wb, WB_RECLAIMABLE);
> inc_wb_stat(new_wb, WB_RECLAIMABLE);
> }
> }
>
> - radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
> PAGECACHE_TAG_WRITEBACK) {
> struct page *page = radix_tree_deref_slot_protected(slot,
> - &mapping->tree_lock);
> + &mapping->pages.xa_lock);
> if (likely(page)) {
> WARN_ON_ONCE(!PageWriteback(page));
> dec_wb_stat(old_wb, WB_WRITEBACK);
> @@ -430,7 +430,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> */
> smp_store_release(&inode->i_state, inode->i_state & ~I_WB_SWITCH);
>
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> spin_unlock(&inode->i_lock);
> spin_unlock(&new_wb->list_lock);
> spin_unlock(&old_wb->list_lock);
> @@ -507,7 +507,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> /*
> * In addition to synchronizing among switchers, I_WB_SWITCH tells
> * the RCU protected stat update paths to grab the mapping's
> - * tree_lock so that stat transfer can synchronize against them.
> + * xa_lock so that stat transfer can synchronize against them.
> * Let's continue after I_WB_SWITCH is guaranteed to be visible.
> */
> call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
> diff --git a/fs/inode.c b/fs/inode.c
> index ef362364d396..07e26909e24d 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -349,8 +349,7 @@ EXPORT_SYMBOL(inc_nlink);
> void address_space_init_once(struct address_space *mapping)
> {
> memset(mapping, 0, sizeof(*mapping));
> - INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC | __GFP_ACCOUNT);
> - spin_lock_init(&mapping->tree_lock);
> + INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);
> init_rwsem(&mapping->i_mmap_rwsem);
> INIT_LIST_HEAD(&mapping->private_list);
> spin_lock_init(&mapping->private_lock);
> @@ -499,14 +498,14 @@ EXPORT_SYMBOL(__remove_inode_hash);
> void clear_inode(struct inode *inode)
> {
> /*
> - * We have to cycle tree_lock here because reclaim can be still in the
> + * We have to cycle the xa_lock here because reclaim can be in the
> * process of removing the last page (in __delete_from_page_cache())
> - * and we must not free mapping under it.
> + * and we must not free the mapping under it.
> */
> - spin_lock_irq(&inode->i_data.tree_lock);
> + xa_lock_irq(&inode->i_data.pages);
> BUG_ON(inode->i_data.nrpages);
> BUG_ON(inode->i_data.nrexceptional);
> - spin_unlock_irq(&inode->i_data.tree_lock);
> + xa_unlock_irq(&inode->i_data.pages);
> BUG_ON(!list_empty(&inode->i_data.private_list));
> BUG_ON(!(inode->i_state & I_FREEING));
> BUG_ON(inode->i_state & I_CLEAR);
> diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
> index c21e0b4454a6..9e2a00207436 100644
> --- a/fs/nilfs2/btnode.c
> +++ b/fs/nilfs2/btnode.c
> @@ -193,9 +193,9 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
> (unsigned long long)oldkey,
> (unsigned long long)newkey);
>
> - spin_lock_irq(&btnc->tree_lock);
> - err = radix_tree_insert(&btnc->page_tree, newkey, obh->b_page);
> - spin_unlock_irq(&btnc->tree_lock);
> + xa_lock_irq(&btnc->pages);
> + err = radix_tree_insert(&btnc->pages, newkey, obh->b_page);
> + xa_unlock_irq(&btnc->pages);
> /*
> * Note: page->index will not change to newkey until
> * nilfs_btnode_commit_change_key() will be called.
> @@ -251,11 +251,11 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
> (unsigned long long)newkey);
> mark_buffer_dirty(obh);
>
> - spin_lock_irq(&btnc->tree_lock);
> - radix_tree_delete(&btnc->page_tree, oldkey);
> - radix_tree_tag_set(&btnc->page_tree, newkey,
> + xa_lock_irq(&btnc->pages);
> + radix_tree_delete(&btnc->pages, oldkey);
> + radix_tree_tag_set(&btnc->pages, newkey,
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irq(&btnc->tree_lock);
> + xa_unlock_irq(&btnc->pages);
>
> opage->index = obh->b_blocknr = newkey;
> unlock_page(opage);
> @@ -283,9 +283,9 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
> return;
>
> if (nbh == NULL) { /* blocksize == pagesize */
> - spin_lock_irq(&btnc->tree_lock);
> - radix_tree_delete(&btnc->page_tree, newkey);
> - spin_unlock_irq(&btnc->tree_lock);
> + xa_lock_irq(&btnc->pages);
> + radix_tree_delete(&btnc->pages, newkey);
> + xa_unlock_irq(&btnc->pages);
> unlock_page(ctxt->bh->b_page);
> } else
> brelse(nbh);
> diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
> index 68241512d7c1..1c6703efde9e 100644
> --- a/fs/nilfs2/page.c
> +++ b/fs/nilfs2/page.c
> @@ -331,15 +331,15 @@ void nilfs_copy_back_pages(struct address_space *dmap,
> struct page *page2;
>
> /* move the page to the destination cache */
> - spin_lock_irq(&smap->tree_lock);
> - page2 = radix_tree_delete(&smap->page_tree, offset);
> + xa_lock_irq(&smap->pages);
> + page2 = radix_tree_delete(&smap->pages, offset);
> WARN_ON(page2 != page);
>
> smap->nrpages--;
> - spin_unlock_irq(&smap->tree_lock);
> + xa_unlock_irq(&smap->pages);
>
> - spin_lock_irq(&dmap->tree_lock);
> - err = radix_tree_insert(&dmap->page_tree, offset, page);
> + xa_lock_irq(&dmap->pages);
> + err = radix_tree_insert(&dmap->pages, offset, page);
> if (unlikely(err < 0)) {
> WARN_ON(err == -EEXIST);
> page->mapping = NULL;
> @@ -348,11 +348,11 @@ void nilfs_copy_back_pages(struct address_space *dmap,
> page->mapping = dmap;
> dmap->nrpages++;
> if (PageDirty(page))
> - radix_tree_tag_set(&dmap->page_tree,
> + radix_tree_tag_set(&dmap->pages,
> offset,
> PAGECACHE_TAG_DIRTY);
> }
> - spin_unlock_irq(&dmap->tree_lock);
> + xa_unlock_irq(&dmap->pages);
> }
> unlock_page(page);
> }
> @@ -474,15 +474,15 @@ int __nilfs_clear_page_dirty(struct page *page)
> struct address_space *mapping = page->mapping;
>
> if (mapping) {
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> if (test_bit(PG_dirty, &page->flags)) {
> - radix_tree_tag_clear(&mapping->page_tree,
> + radix_tree_tag_clear(&mapping->pages,
> page_index(page),
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return clear_page_dirty_for_io(page);
> }
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return 0;
> }
> return TestClearPageDirty(page);
> diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
> index 3e4ce54d84ab..3df0d20e23f3 100644
> --- a/include/linux/backing-dev.h
> +++ b/include/linux/backing-dev.h
> @@ -329,7 +329,7 @@ static inline bool inode_to_wb_is_valid(struct inode *inode)
> * @inode: inode of interest
> *
> * Returns the wb @inode is currently associated with. The caller must be
> - * holding either @inode->i_lock, @inode->i_mapping->tree_lock, or the
> + * holding either @inode->i_lock, @inode->i_mapping->pages.xa_lock, or the
> * associated wb's list_lock.
> */
> static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
> @@ -337,7 +337,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
> #ifdef CONFIG_LOCKDEP
> WARN_ON_ONCE(debug_locks &&
> (!lockdep_is_held(&inode->i_lock) &&
> - !lockdep_is_held(&inode->i_mapping->tree_lock) &&
> + !lockdep_is_held(&inode->i_mapping->pages.xa_lock) &&
> !lockdep_is_held(&inode->i_wb->list_lock)));
> #endif
> return inode->i_wb;
> @@ -349,7 +349,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
> * @lockedp: temp bool output param, to be passed to the end function
> *
> * The caller wants to access the wb associated with @inode but isn't
> - * holding inode->i_lock, mapping->tree_lock or wb->list_lock. This
> + * holding inode->i_lock, mapping->pages.xa_lock or wb->list_lock. This
> * function determines the wb associated with @inode and ensures that the
> * association doesn't change until the transaction is finished with
> * unlocked_inode_to_wb_end().
> @@ -370,10 +370,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
> *lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
>
> if (unlikely(*lockedp))
> - spin_lock_irq(&inode->i_mapping->tree_lock);
> + xa_lock_irq(&inode->i_mapping->pages);
>
> /*
> - * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
> + * Protected by either !I_WB_SWITCH + rcu_read_lock() or xa_lock.
> * inode_to_wb() will bark. Deref directly.
> */
> return inode->i_wb;
> @@ -387,7 +387,7 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
> static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
> {
> if (unlikely(locked))
> - spin_unlock_irq(&inode->i_mapping->tree_lock);
> + xa_unlock_irq(&inode->i_mapping->pages);
>
> rcu_read_unlock();
> }
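
[Not part of the patch — a rough sketch of the caller side of the two helpers above, so the locking change is easier to see. The inc_wb_stat() call merely stands in for whichever per-wb counter a real stat-update path touches; the function name is illustrative only.]

static void example_update_wb_stat(struct inode *inode)
{
	struct bdi_writeback *wb;
	bool locked;

	wb = unlocked_inode_to_wb_begin(inode, &locked);
	/*
	 * If a cgroup writeback switch is in flight, the helper has
	 * taken xa_lock_irq(&inode->i_mapping->pages) for us, so the
	 * wb association cannot change under this update.
	 */
	inc_wb_stat(wb, WB_RECLAIMABLE);
	unlocked_inode_to_wb_end(inode, locked);
}
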
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 2a815560fda0..e227f68e0418 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -13,6 +13,7 @@
> #include <linux/list_lru.h>
> #include <linux/llist.h>
> #include <linux/radix-tree.h>
> +#include <linux/xarray.h>
> #include <linux/rbtree.h>
> #include <linux/init.h>
> #include <linux/pid.h>
> @@ -390,23 +391,21 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
>
> struct address_space {
> struct inode *host; /* owner: inode, block_device */
> - struct radix_tree_root page_tree; /* radix tree of all pages */
> - spinlock_t tree_lock; /* and lock protecting it */
> + struct radix_tree_root pages; /* cached pages */
> + gfp_t gfp_mask; /* for allocating pages */
> atomic_t i_mmap_writable;/* count VM_SHARED mappings */
> struct rb_root_cached i_mmap; /* tree of private and shared mappings */
> struct rw_semaphore i_mmap_rwsem; /* protect tree, count, list */
> - /* Protected by tree_lock together with the radix tree */
> + /* Protected by pages.xa_lock */
> unsigned long nrpages; /* number of total pages */
> - /* number of shadow or DAX exceptional entries */
> - unsigned long nrexceptional;
> + unsigned long nrexceptional; /* shadow or DAX entries */
> pgoff_t writeback_index;/* writeback starts here */
> const struct address_space_operations *a_ops; /* methods */
> unsigned long flags; /* error bits */
> + errseq_t wb_err;
> spinlock_t private_lock; /* for use by the address_space */
> - gfp_t gfp_mask; /* implicit gfp mask for allocations */
> - struct list_head private_list; /* for use by the address_space */
> + struct list_head private_list; /* ditto */
> void *private_data; /* ditto */
> - errseq_t wb_err;
> } __attribute__((aligned(sizeof(long)))) __randomize_layout;
> /*
> * On most architectures that alignment is already the case; but
> @@ -1986,7 +1985,7 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
> *
> * I_WB_SWITCH Cgroup bdi_writeback switching in progress. Used to
> * synchronize competing switching instances and to tell
> - * wb stat updates to grab mapping->tree_lock. See
> + * wb stat updates to grab mapping->pages.xa_lock. See
> * inode_switch_wb_work_fn() for details.
> *
> * I_OVL_INUSE Used by overlayfs to get exclusive ownership on upper
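
[Not part of the patch — with the struct change above, the old page_tree/tree_lock pair collapses into a single 'pages' root whose embedded lock is taken through the xa_lock_*() wrappers. A minimal sketch of the pattern that recurs throughout the converted call sites in this series; the function name is illustrative only.]

static void example_tag_dirty(struct address_space *mapping, struct page *page)
{
	unsigned long flags;

	/* was: spin_lock_irqsave(&mapping->tree_lock, flags); */
	xa_lock_irqsave(&mapping->pages, flags);
	account_page_dirtied(page, mapping);
	/* was: radix_tree_tag_set(&mapping->page_tree, ...); */
	radix_tree_tag_set(&mapping->pages, page_index(page),
			   PAGECACHE_TAG_DIRTY);
	xa_unlock_irqrestore(&mapping->pages, flags);
}
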
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 47b0fb0a6e41..aad22344d685 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -738,7 +738,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
> * refcount. The each user mapping also has a reference to the page.
> *
> * The pagecache pages are stored in a per-mapping radix tree, which is
> - * rooted at mapping->page_tree, and indexed by offset.
> + * rooted at mapping->pages, and indexed by offset.
> * Where 2.4 and early 2.6 kernels kept dirty/clean pages in per-address_space
> * lists, we instead now tag pages as dirty/writeback in the radix tree.
> *
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 34ce3ebf97d5..80a6149152d4 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -144,7 +144,7 @@ void release_pages(struct page **pages, int nr);
> * 3. check the page is still in pagecache (if no, goto 1)
> *
> * Remove-side that cares about stability of _refcount (eg. reclaim) has the
> - * following (with tree_lock held for write):
> + * following (with pages.xa_lock held):
> * A. atomically check refcount is correct and set it to 0 (atomic_cmpxchg)
> * B. remove page from pagecache
> * C. free the page
> @@ -157,7 +157,7 @@ void release_pages(struct page **pages, int nr);
> *
> * It is possible that between 1 and 2, the page is removed then the exact same
> * page is inserted into the same position in pagecache. That's OK: the
> - * old find_get_page using tree_lock could equally have run before or after
> + * old find_get_page using a lock could equally have run before or after
> * such a re-insertion, depending on order that locks are granted.
> *
> * Lookups racing against pagecache insertion isn't a big problem: either 1
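
[Not part of the patch — the pagemap.h comment above describes the lookup side of the lockless protocol only in prose, so here is a sketch of it in the style of find_get_entry() from mm/filemap.c below, assuming the existing page_cache_get_speculative() helper from this header; the function name is illustrative only.]

static struct page *example_lockless_lookup(struct address_space *mapping,
					    pgoff_t offset)
{
	struct page *page;
	void **slot;

	rcu_read_lock();
repeat:
	page = NULL;
	slot = radix_tree_lookup_slot(&mapping->pages, offset);
	if (slot) {
		page = radix_tree_deref_slot(slot);		/* step 1 */
		if (radix_tree_deref_retry(page))
			goto repeat;
		if (page && !radix_tree_exceptional_entry(page)) {
			if (!page_cache_get_speculative(page))	/* step 2 */
				goto repeat;
			/* step 3: recheck the page is still in the mapping */
			if (unlikely(page != *slot)) {
				put_page(page);
				goto repeat;
			}
		}
	}
	rcu_read_unlock();
	return page;
}
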
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 693f62212a59..7588b7f1f479 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -66,7 +66,7 @@
> * ->i_mmap_rwsem (truncate_pagecache)
> * ->private_lock (__free_pte->__set_page_dirty_buffers)
> * ->swap_lock (exclusive_swap_page, others)
> - * ->mapping->tree_lock
> + * ->mapping->pages.xa_lock
> *
> * ->i_mutex
> * ->i_mmap_rwsem (truncate->unmap_mapping_range)
> @@ -74,7 +74,7 @@
> * ->mmap_sem
> * ->i_mmap_rwsem
> * ->page_table_lock or pte_lock (various, mainly in memory.c)
> - * ->mapping->tree_lock (arch-dependent flush_dcache_mmap_lock)
> + * ->mapping->pages.xa_lock (arch-dependent flush_dcache_mmap_lock)
> *
> * ->mmap_sem
> * ->lock_page (access_process_vm)
> @@ -84,7 +84,7 @@
> *
> * bdi->wb.list_lock
> * sb_lock (fs/fs-writeback.c)
> - * ->mapping->tree_lock (__sync_single_inode)
> + * ->mapping->pages.xa_lock (__sync_single_inode)
> *
> * ->i_mmap_rwsem
> * ->anon_vma.lock (vma_adjust)
> @@ -95,11 +95,11 @@
> * ->page_table_lock or pte_lock
> * ->swap_lock (try_to_unmap_one)
> * ->private_lock (try_to_unmap_one)
> - * ->tree_lock (try_to_unmap_one)
> + * ->pages.xa_lock (try_to_unmap_one)
> * ->zone_lru_lock(zone) (follow_page->mark_page_accessed)
> * ->zone_lru_lock(zone) (check_pte_range->isolate_lru_page)
> * ->private_lock (page_remove_rmap->set_page_dirty)
> - * ->tree_lock (page_remove_rmap->set_page_dirty)
> + * ->pages.xa_lock (page_remove_rmap->set_page_dirty)
> * bdi.wb->list_lock (page_remove_rmap->set_page_dirty)
> * ->inode->i_lock (page_remove_rmap->set_page_dirty)
> * ->memcg->move_lock (page_remove_rmap->lock_page_memcg)
> @@ -118,14 +118,15 @@ static int page_cache_tree_insert(struct address_space *mapping,
> void **slot;
> int error;
>
> - error = __radix_tree_create(&mapping->page_tree, page->index, 0,
> + error = __radix_tree_create(&mapping->pages, page->index, 0,
> &node, &slot);
> if (error)
> return error;
> if (*slot) {
> void *p;
>
> - p = radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
> + p = radix_tree_deref_slot_protected(slot,
> + &mapping->pages.xa_lock);
> if (!radix_tree_exceptional_entry(p))
> return -EEXIST;
>
> @@ -133,7 +134,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
> if (shadowp)
> *shadowp = p;
> }
> - __radix_tree_replace(&mapping->page_tree, node, slot, page,
> + __radix_tree_replace(&mapping->pages, node, slot, page,
> workingset_lookup_update(mapping));
> mapping->nrpages++;
> return 0;
> @@ -155,13 +156,13 @@ static void page_cache_tree_delete(struct address_space *mapping,
> struct radix_tree_node *node;
> void **slot;
>
> - __radix_tree_lookup(&mapping->page_tree, page->index + i,
> + __radix_tree_lookup(&mapping->pages, page->index + i,
> &node, &slot);
>
> VM_BUG_ON_PAGE(!node && nr != 1, page);
>
> - radix_tree_clear_tags(&mapping->page_tree, node, slot);
> - __radix_tree_replace(&mapping->page_tree, node, slot, shadow,
> + radix_tree_clear_tags(&mapping->pages, node, slot);
> + __radix_tree_replace(&mapping->pages, node, slot, shadow,
> workingset_lookup_update(mapping));
> }
>
> @@ -253,7 +254,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
> /*
> * Delete a page from the page cache and free it. Caller has to make
> * sure the page is locked and that nobody else uses it - or that usage
> - * is safe. The caller must hold the mapping's tree_lock.
> + * is safe. The caller must hold the mapping's xa_lock.
> */
> void __delete_from_page_cache(struct page *page, void *shadow)
> {
> @@ -296,9 +297,9 @@ void delete_from_page_cache(struct page *page)
> unsigned long flags;
>
> BUG_ON(!PageLocked(page));
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> __delete_from_page_cache(page, NULL);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> page_cache_free_page(mapping, page);
> }
> @@ -309,14 +310,14 @@ EXPORT_SYMBOL(delete_from_page_cache);
> * @mapping: the mapping to which pages belong
> * @pvec: pagevec with pages to delete
> *
> - * The function walks over mapping->page_tree and removes pages passed in @pvec
> - * from the radix tree. The function expects @pvec to be sorted by page index.
> - * It tolerates holes in @pvec (radix tree entries at those indices are not
> + * The function walks over mapping->pages and removes pages passed in @pvec
> + * from the mapping. The function expects @pvec to be sorted by page index.
> + * It tolerates holes in @pvec (mapping entries at those indices are not
> * modified). The function expects only THP head pages to be present in the
> - * @pvec and takes care to delete all corresponding tail pages from the radix
> - * tree as well.
> + * @pvec and takes care to delete all corresponding tail pages from the
> + * mapping as well.
> *
> - * The function expects mapping->tree_lock to be held.
> + * The function expects xa_lock to be held.
> */
> static void
> page_cache_tree_delete_batch(struct address_space *mapping,
> @@ -330,11 +331,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
> pgoff_t start;
>
> start = pvec->pages[0]->index;
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> if (i >= pagevec_count(pvec) && !tail_pages)
> break;
> page = radix_tree_deref_slot_protected(slot,
> - &mapping->tree_lock);
> + &mapping->pages.xa_lock);
> if (radix_tree_exceptional_entry(page))
> continue;
> if (!tail_pages) {
> @@ -357,8 +358,8 @@ page_cache_tree_delete_batch(struct address_space *mapping,
> } else {
> tail_pages--;
> }
> - radix_tree_clear_tags(&mapping->page_tree, iter.node, slot);
> - __radix_tree_replace(&mapping->page_tree, iter.node, slot, NULL,
> + radix_tree_clear_tags(&mapping->pages, iter.node, slot);
> + __radix_tree_replace(&mapping->pages, iter.node, slot, NULL,
> workingset_lookup_update(mapping));
> total_pages++;
> }
> @@ -374,14 +375,14 @@ void delete_from_page_cache_batch(struct address_space *mapping,
> if (!pagevec_count(pvec))
> return;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> for (i = 0; i < pagevec_count(pvec); i++) {
> trace_mm_filemap_delete_from_page_cache(pvec->pages[i]);
>
> unaccount_page_cache_page(mapping, pvec->pages[i]);
> }
> page_cache_tree_delete_batch(mapping, pvec);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> for (i = 0; i < pagevec_count(pvec); i++)
> page_cache_free_page(mapping, pvec->pages[i]);
> @@ -798,7 +799,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
> new->mapping = mapping;
> new->index = offset;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> __delete_from_page_cache(old, NULL);
> error = page_cache_tree_insert(mapping, new, NULL);
> BUG_ON(error);
> @@ -810,7 +811,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
> __inc_node_page_state(new, NR_FILE_PAGES);
> if (PageSwapBacked(new))
> __inc_node_page_state(new, NR_SHMEM);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> mem_cgroup_migrate(old, new);
> radix_tree_preload_end();
> if (freepage)
> @@ -852,7 +853,7 @@ static int __add_to_page_cache_locked(struct page *page,
> page->mapping = mapping;
> page->index = offset;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> error = page_cache_tree_insert(mapping, page, shadowp);
> radix_tree_preload_end();
> if (unlikely(error))
> @@ -861,7 +862,7 @@ static int __add_to_page_cache_locked(struct page *page,
> /* hugetlb pages do not participate in page cache accounting. */
> if (!huge)
> __inc_node_page_state(page, NR_FILE_PAGES);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> if (!huge)
> mem_cgroup_commit_charge(page, memcg, false, false);
> trace_mm_filemap_add_to_page_cache(page);
> @@ -869,7 +870,7 @@ static int __add_to_page_cache_locked(struct page *page,
> err_insert:
> page->mapping = NULL;
> /* Leave page->index set: truncation relies upon it */
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> if (!huge)
> mem_cgroup_cancel_charge(page, memcg, false);
> put_page(page);
> @@ -1353,7 +1354,7 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
> for (i = 0; i < max_scan; i++) {
> struct page *page;
>
> - page = radix_tree_lookup(&mapping->page_tree, index);
> + page = radix_tree_lookup(&mapping->pages, index);
> if (!page || radix_tree_exceptional_entry(page))
> break;
> index++;
> @@ -1394,7 +1395,7 @@ pgoff_t page_cache_prev_hole(struct address_space *mapping,
> for (i = 0; i < max_scan; i++) {
> struct page *page;
>
> - page = radix_tree_lookup(&mapping->page_tree, index);
> + page = radix_tree_lookup(&mapping->pages, index);
> if (!page || radix_tree_exceptional_entry(page))
> break;
> index--;
> @@ -1427,7 +1428,7 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
> rcu_read_lock();
> repeat:
> page = NULL;
> - pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
> + pagep = radix_tree_lookup_slot(&mapping->pages, offset);
> if (pagep) {
> page = radix_tree_deref_slot(pagep);
> if (unlikely(!page))
> @@ -1633,7 +1634,7 @@ unsigned find_get_entries(struct address_space *mapping,
> return 0;
>
> rcu_read_lock();
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> struct page *head, *page;
> repeat:
> page = radix_tree_deref_slot(slot);
> @@ -1710,7 +1711,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
> return 0;
>
> rcu_read_lock();
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, *start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
> struct page *head, *page;
>
> if (iter.index > end)
> @@ -1795,7 +1796,7 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
> return 0;
>
> rcu_read_lock();
> - radix_tree_for_each_contig(slot, &mapping->page_tree, &iter, index) {
> + radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
> struct page *head, *page;
> repeat:
> page = radix_tree_deref_slot(slot);
> @@ -1875,8 +1876,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
> return 0;
>
> rcu_read_lock();
> - radix_tree_for_each_tagged(slot, &mapping->page_tree,
> - &iter, *index, tag) {
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
> struct page *head, *page;
>
> if (iter.index > end)
> @@ -1969,8 +1969,7 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
> return 0;
>
> rcu_read_lock();
> - radix_tree_for_each_tagged(slot, &mapping->page_tree,
> - &iter, start, tag) {
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
> struct page *head, *page;
> repeat:
> page = radix_tree_deref_slot(slot);
> @@ -2624,8 +2623,7 @@ void filemap_map_pages(struct vm_fault *vmf,
> struct page *head, *page;
>
> rcu_read_lock();
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
> - start_pgoff) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
> if (iter.index > end_pgoff)
> break;
> repeat:
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 87ab9b8f56b5..4b60f55f1f8b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2450,7 +2450,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> } else {
> /* Additional pin to radix tree */
> page_ref_add(head, 2);
> - spin_unlock(&head->mapping->tree_lock);
> + xa_unlock(&head->mapping->pages);
> }
>
> spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
> @@ -2658,15 +2658,15 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> if (mapping) {
> void **pslot;
>
> - spin_lock(&mapping->tree_lock);
> - pslot = radix_tree_lookup_slot(&mapping->page_tree,
> + xa_lock(&mapping->pages);
> + pslot = radix_tree_lookup_slot(&mapping->pages,
> page_index(head));
> /*
> * Check if the head page is present in radix tree.
> * We assume all tail are present too, if head is there.
> */
> if (radix_tree_deref_slot_protected(pslot,
> - &mapping->tree_lock) != head)
> + &mapping->pages.xa_lock) != head)
> goto fail;
> }
>
> @@ -2700,7 +2700,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> }
> spin_unlock(&pgdata->split_queue_lock);
> fail: if (mapping)
> - spin_unlock(&mapping->tree_lock);
> + xa_unlock(&mapping->pages);
> spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
> unfreeze_page(head);
> ret = -EBUSY;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b7e2268dfc9a..5800093fe94a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1339,8 +1339,8 @@ static void collapse_shmem(struct mm_struct *mm,
> */
>
> index = start;
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + xa_lock_irq(&mapping->pages);
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> int n = min(iter.index, end) - index;
>
> /*
> @@ -1353,7 +1353,7 @@ static void collapse_shmem(struct mm_struct *mm,
> }
> nr_none += n;
> for (; index < min(iter.index, end); index++) {
> - radix_tree_insert(&mapping->page_tree, index,
> + radix_tree_insert(&mapping->pages, index,
> new_page + (index % HPAGE_PMD_NR));
> }
>
> @@ -1362,16 +1362,16 @@ static void collapse_shmem(struct mm_struct *mm,
> break;
>
> page = radix_tree_deref_slot_protected(slot,
> - &mapping->tree_lock);
> + &mapping->pages.xa_lock);
> if (radix_tree_exceptional_entry(page) || !PageUptodate(page)) {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> /* swap in or instantiate fallocated page */
> if (shmem_getpage(mapping->host, index, &page,
> SGP_NOHUGE)) {
> result = SCAN_FAIL;
> goto tree_unlocked;
> }
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> } else if (trylock_page(page)) {
> get_page(page);
> } else {
> @@ -1380,7 +1380,7 @@ static void collapse_shmem(struct mm_struct *mm,
> }
>
> /*
> - * The page must be locked, so we can drop the tree_lock
> + * The page must be locked, so we can drop the xa_lock
> * without racing with truncate.
> */
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> @@ -1391,7 +1391,7 @@ static void collapse_shmem(struct mm_struct *mm,
> result = SCAN_TRUNCATED;
> goto out_unlock;
> }
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
>
> if (isolate_lru_page(page)) {
> result = SCAN_DEL_PAGE_LRU;
> @@ -1401,11 +1401,11 @@ static void collapse_shmem(struct mm_struct *mm,
> if (page_mapped(page))
> unmap_mapping_pages(mapping, index, 1, false);
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> - slot = radix_tree_lookup_slot(&mapping->page_tree, index);
> + slot = radix_tree_lookup_slot(&mapping->pages, index);
> VM_BUG_ON_PAGE(page != radix_tree_deref_slot_protected(slot,
> - &mapping->tree_lock), page);
> + &mapping->pages.xa_lock), page);
> VM_BUG_ON_PAGE(page_mapped(page), page);
>
> /*
> @@ -1426,14 +1426,14 @@ static void collapse_shmem(struct mm_struct *mm,
> list_add_tail(&page->lru, &pagelist);
>
> /* Finally, replace with the new page. */
> - radix_tree_replace_slot(&mapping->page_tree, slot,
> + radix_tree_replace_slot(&mapping->pages, slot,
> new_page + (index % HPAGE_PMD_NR));
>
> slot = radix_tree_iter_resume(slot, &iter);
> index++;
> continue;
> out_lru:
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> putback_lru_page(page);
> out_isolate_failed:
> unlock_page(page);
> @@ -1459,14 +1459,14 @@ static void collapse_shmem(struct mm_struct *mm,
> }
>
> for (; index < end; index++) {
> - radix_tree_insert(&mapping->page_tree, index,
> + radix_tree_insert(&mapping->pages, index,
> new_page + (index % HPAGE_PMD_NR));
> }
> nr_none += n;
> }
>
> tree_locked:
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> tree_unlocked:
>
> if (result == SCAN_SUCCEED) {
> @@ -1515,9 +1515,8 @@ static void collapse_shmem(struct mm_struct *mm,
> } else {
> /* Something went wrong: rollback changes to the radix-tree */
> shmem_uncharge(mapping->host, nr_none);
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
> - start) {
> + xa_lock_irq(&mapping->pages);
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> if (iter.index >= end)
> break;
> page = list_first_entry_or_null(&pagelist,
> @@ -1527,8 +1526,7 @@ static void collapse_shmem(struct mm_struct *mm,
> break;
> nr_none--;
> /* Put holes back where they were */
> - radix_tree_delete(&mapping->page_tree,
> - iter.index);
> + radix_tree_delete(&mapping->pages, iter.index);
> continue;
> }
>
> @@ -1537,16 +1535,15 @@ static void collapse_shmem(struct mm_struct *mm,
> /* Unfreeze the page. */
> list_del(&page->lru);
> page_ref_unfreeze(page, 2);
> - radix_tree_replace_slot(&mapping->page_tree,
> - slot, page);
> + radix_tree_replace_slot(&mapping->pages, slot, page);
> slot = radix_tree_iter_resume(slot, &iter);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> putback_lru_page(page);
> unlock_page(page);
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> }
> VM_BUG_ON(nr_none);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
>
> /* Unfreeze new_page, caller would take care about freeing it */
> page_ref_unfreeze(new_page, 1);
> @@ -1574,7 +1571,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
> swap = 0;
> memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
> rcu_read_lock();
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> if (iter.index >= start + HPAGE_PMD_NR)
> break;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 670e99b68aa6..d89cb08ac39b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5967,9 +5967,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>
> /*
> * Interrupts should be disabled here because the caller holds the
> - * mapping->tree_lock lock which is taken with interrupts-off. It is
> + * mapping->pages xa_lock which is taken with interrupts-off. It is
> * important here to have the interrupts disabled because it is the
> - * only synchronisation we have for udpating the per-CPU variables.
> + * only synchronisation we have for updating the per-CPU variables.
> */
> VM_BUG_ON(!irqs_disabled());
> mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 1e5525a25691..184bc1d0e187 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -466,20 +466,21 @@ int migrate_page_move_mapping(struct address_space *mapping,
> oldzone = page_zone(page);
> newzone = page_zone(newpage);
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> - pslot = radix_tree_lookup_slot(&mapping->page_tree,
> + pslot = radix_tree_lookup_slot(&mapping->pages,
> page_index(page));
>
> expected_count += 1 + page_has_private(page);
> if (page_count(page) != expected_count ||
> - radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
> - spin_unlock_irq(&mapping->tree_lock);
> + radix_tree_deref_slot_protected(pslot,
> + &mapping->pages.xa_lock) != page) {
> + xa_unlock_irq(&mapping->pages);
> return -EAGAIN;
> }
>
> if (!page_ref_freeze(page, expected_count)) {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return -EAGAIN;
> }
>
> @@ -493,7 +494,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> if (mode == MIGRATE_ASYNC && head &&
> !buffer_migrate_lock_buffers(head, mode)) {
> page_ref_unfreeze(page, expected_count);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return -EAGAIN;
> }
>
> @@ -521,7 +522,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> SetPageDirty(newpage);
> }
>
> - radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
> + radix_tree_replace_slot(&mapping->pages, pslot, newpage);
>
> /*
> * Drop cache reference from old page by unfreezing
> @@ -530,7 +531,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> */
> page_ref_unfreeze(page, expected_count - 1);
>
> - spin_unlock(&mapping->tree_lock);
> + xa_unlock(&mapping->pages);
> /* Leave irq disabled to prevent preemption while updating stats */
>
> /*
> @@ -573,20 +574,19 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
> int expected_count;
> void **pslot;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> - pslot = radix_tree_lookup_slot(&mapping->page_tree,
> - page_index(page));
> + pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
>
> expected_count = 2 + page_has_private(page);
> if (page_count(page) != expected_count ||
> - radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
> - spin_unlock_irq(&mapping->tree_lock);
> + radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
> + xa_unlock_irq(&mapping->pages);
> return -EAGAIN;
> }
>
> if (!page_ref_freeze(page, expected_count)) {
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> return -EAGAIN;
> }
>
> @@ -595,11 +595,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
>
> get_page(newpage);
>
> - radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
> + radix_tree_replace_slot(&mapping->pages, pslot, newpage);
>
> page_ref_unfreeze(page, expected_count - 1);
>
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
>
> return MIGRATEPAGE_SUCCESS;
> }
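
[Not part of the patch — a stripped-down restatement of the core shared by the two migration functions above, with new-page field setup and all accounting omitted, to highlight the xa_lock boundaries around the freeze/replace/unfreeze sequence; the function name is illustrative only.]

static int example_replace_page(struct address_space *mapping,
				struct page *page, struct page *newpage,
				int expected_count)
{
	void **pslot;

	xa_lock_irq(&mapping->pages);
	pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
	if (page_count(page) != expected_count ||
	    radix_tree_deref_slot_protected(pslot,
				&mapping->pages.xa_lock) != page) {
		xa_unlock_irq(&mapping->pages);
		return -EAGAIN;
	}
	if (!page_ref_freeze(page, expected_count)) {
		xa_unlock_irq(&mapping->pages);
		return -EAGAIN;
	}
	get_page(newpage);
	radix_tree_replace_slot(&mapping->pages, pslot, newpage);
	page_ref_unfreeze(page, expected_count - 1);
	xa_unlock_irq(&mapping->pages);
	return MIGRATEPAGE_SUCCESS;
}
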
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 586f31261c83..588ce729d199 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -2099,7 +2099,7 @@ void __init page_writeback_init(void)
> * so that it can tag pages faster than a dirtying process can create them).
> */
> /*
> - * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce tree_lock latency.
> + * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
> */
> void tag_pages_for_writeback(struct address_space *mapping,
> pgoff_t start, pgoff_t end)
> @@ -2109,22 +2109,22 @@ void tag_pages_for_writeback(struct address_space *mapping,
> struct radix_tree_iter iter;
> void **slot;
>
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, start,
> + xa_lock_irq(&mapping->pages);
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
> PAGECACHE_TAG_DIRTY) {
> if (iter.index > end)
> break;
> - radix_tree_iter_tag_set(&mapping->page_tree, &iter,
> + radix_tree_iter_tag_set(&mapping->pages, &iter,
> PAGECACHE_TAG_TOWRITE);
> tagged++;
> if ((tagged % WRITEBACK_TAG_BATCH) != 0)
> continue;
> slot = radix_tree_iter_resume(slot, &iter);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> cond_resched();
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> }
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> }
> EXPORT_SYMBOL(tag_pages_for_writeback);
>
> @@ -2467,13 +2467,13 @@ int __set_page_dirty_nobuffers(struct page *page)
> return 1;
> }
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> BUG_ON(page_mapping(page) != mapping);
> WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
> account_page_dirtied(page, mapping);
> - radix_tree_tag_set(&mapping->page_tree, page_index(page),
> + radix_tree_tag_set(&mapping->pages, page_index(page),
> PAGECACHE_TAG_DIRTY);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> unlock_page_memcg(page);
>
> if (mapping->host) {
> @@ -2718,11 +2718,10 @@ int test_clear_page_writeback(struct page *page)
> struct backing_dev_info *bdi = inode_to_bdi(inode);
> unsigned long flags;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> ret = TestClearPageWriteback(page);
> if (ret) {
> - radix_tree_tag_clear(&mapping->page_tree,
> - page_index(page),
> + radix_tree_tag_clear(&mapping->pages, page_index(page),
> PAGECACHE_TAG_WRITEBACK);
> if (bdi_cap_account_writeback(bdi)) {
> struct bdi_writeback *wb = inode_to_wb(inode);
> @@ -2736,7 +2735,7 @@ int test_clear_page_writeback(struct page *page)
> PAGECACHE_TAG_WRITEBACK))
> sb_clear_inode_writeback(mapping->host);
>
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> } else {
> ret = TestClearPageWriteback(page);
> }
> @@ -2766,7 +2765,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
> struct backing_dev_info *bdi = inode_to_bdi(inode);
> unsigned long flags;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> ret = TestSetPageWriteback(page);
> if (!ret) {
> bool on_wblist;
> @@ -2774,8 +2773,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
> on_wblist = mapping_tagged(mapping,
> PAGECACHE_TAG_WRITEBACK);
>
> - radix_tree_tag_set(&mapping->page_tree,
> - page_index(page),
> + radix_tree_tag_set(&mapping->pages, page_index(page),
> PAGECACHE_TAG_WRITEBACK);
> if (bdi_cap_account_writeback(bdi))
> inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
> @@ -2789,14 +2787,12 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
> sb_mark_inode_writeback(mapping->host);
> }
> if (!PageDirty(page))
> - radix_tree_tag_clear(&mapping->page_tree,
> - page_index(page),
> + radix_tree_tag_clear(&mapping->pages, page_index(page),
> PAGECACHE_TAG_DIRTY);
> if (!keep_write)
> - radix_tree_tag_clear(&mapping->page_tree,
> - page_index(page),
> + radix_tree_tag_clear(&mapping->pages, page_index(page),
> PAGECACHE_TAG_TOWRITE);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> } else {
> ret = TestSetPageWriteback(page);
> }
> @@ -2816,7 +2812,7 @@ EXPORT_SYMBOL(__test_set_page_writeback);
> */
> int mapping_tagged(struct address_space *mapping, int tag)
> {
> - return radix_tree_tagged(&mapping->page_tree, tag);
> + return radix_tree_tagged(&mapping->pages, tag);
> }
> EXPORT_SYMBOL(mapping_tagged);
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index c4ca70239233..514188fd2489 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -175,7 +175,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
> break;
>
> rcu_read_lock();
> - page = radix_tree_lookup(&mapping->page_tree, page_offset);
> + page = radix_tree_lookup(&mapping->pages, page_offset);
> rcu_read_unlock();
> if (page && !radix_tree_exceptional_entry(page))
> continue;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 47db27f8049e..87c1ca0cf1a3 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -32,11 +32,11 @@
> * mmlist_lock (in mmput, drain_mmlist and others)
> * mapping->private_lock (in __set_page_dirty_buffers)
> * mem_cgroup_{begin,end}_page_stat (memcg->move_lock)
> - * mapping->tree_lock (widely used)
> + * mapping->pages.xa_lock (widely used)
> * inode->i_lock (in set_page_dirty's __mark_inode_dirty)
> * bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
> * sb_lock (within inode_lock in fs/fs-writeback.c)
> - * mapping->tree_lock (widely used, in set_page_dirty,
> + * mapping->pages.xa_lock (widely used, in set_page_dirty,
> * in arch-dependent flush_dcache_mmap_lock,
> * within bdi.wb->list_lock in __sync_single_inode)
> *
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 1907688b75ee..b2fdc258853d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -332,12 +332,12 @@ static int shmem_radix_tree_replace(struct address_space *mapping,
>
> VM_BUG_ON(!expected);
> VM_BUG_ON(!replacement);
> - item = __radix_tree_lookup(&mapping->page_tree, index, &node, &pslot);
> + item = __radix_tree_lookup(&mapping->pages, index, &node, &pslot);
> if (!item)
> return -ENOENT;
> if (item != expected)
> return -ENOENT;
> - __radix_tree_replace(&mapping->page_tree, node, pslot,
> + __radix_tree_replace(&mapping->pages, node, pslot,
> replacement, NULL);
> return 0;
> }
> @@ -355,7 +355,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
> void *item;
>
> rcu_read_lock();
> - item = radix_tree_lookup(&mapping->page_tree, index);
> + item = radix_tree_lookup(&mapping->pages, index);
> rcu_read_unlock();
> return item == swp_to_radix_entry(swap);
> }
> @@ -581,14 +581,14 @@ static int shmem_add_to_page_cache(struct page *page,
> page->mapping = mapping;
> page->index = index;
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> if (PageTransHuge(page)) {
> void __rcu **results;
> pgoff_t idx;
> int i;
>
> error = 0;
> - if (radix_tree_gang_lookup_slot(&mapping->page_tree,
> + if (radix_tree_gang_lookup_slot(&mapping->pages,
> &results, &idx, index, 1) &&
> idx < index + HPAGE_PMD_NR) {
> error = -EEXIST;
> @@ -596,14 +596,14 @@ static int shmem_add_to_page_cache(struct page *page,
>
> if (!error) {
> for (i = 0; i < HPAGE_PMD_NR; i++) {
> - error = radix_tree_insert(&mapping->page_tree,
> + error = radix_tree_insert(&mapping->pages,
> index + i, page + i);
> VM_BUG_ON(error);
> }
> count_vm_event(THP_FILE_ALLOC);
> }
> } else if (!expected) {
> - error = radix_tree_insert(&mapping->page_tree, index, page);
> + error = radix_tree_insert(&mapping->pages, index, page);
> } else {
> error = shmem_radix_tree_replace(mapping, index, expected,
> page);
> @@ -615,10 +615,10 @@ static int shmem_add_to_page_cache(struct page *page,
> __inc_node_page_state(page, NR_SHMEM_THPS);
> __mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
> __mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> } else {
> page->mapping = NULL;
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> page_ref_sub(page, nr);
> }
> return error;
> @@ -634,13 +634,13 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
>
> VM_BUG_ON_PAGE(PageCompound(page), page);
>
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> error = shmem_radix_tree_replace(mapping, page->index, page, radswap);
> page->mapping = NULL;
> mapping->nrpages--;
> __dec_node_page_state(page, NR_FILE_PAGES);
> __dec_node_page_state(page, NR_SHMEM);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> put_page(page);
> BUG_ON(error);
> }
> @@ -653,9 +653,9 @@ static int shmem_free_swap(struct address_space *mapping,
> {
> void *old;
>
> - spin_lock_irq(&mapping->tree_lock);
> - old = radix_tree_delete_item(&mapping->page_tree, index, radswap);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> + old = radix_tree_delete_item(&mapping->pages, index, radswap);
> + xa_unlock_irq(&mapping->pages);
> if (old != radswap)
> return -ENOENT;
> free_swap_and_cache(radix_to_swp_entry(radswap));
> @@ -666,7 +666,7 @@ static int shmem_free_swap(struct address_space *mapping,
> * Determine (in bytes) how many of the shmem object's pages mapped by the
> * given offsets are swapped out.
> *
> - * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
> + * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
> * as long as the inode doesn't go away and racy results are not a problem.
> */
> unsigned long shmem_partial_swap_usage(struct address_space *mapping,
> @@ -679,7 +679,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>
> rcu_read_lock();
>
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> if (iter.index >= end)
> break;
>
> @@ -708,7 +708,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
> * Determine (in bytes) how many of the shmem object's pages mapped by the
> * given vma is swapped out.
> *
> - * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
> + * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
> * as long as the inode doesn't go away and racy results are not a problem.
> */
> unsigned long shmem_swap_usage(struct vm_area_struct *vma)
> @@ -1123,7 +1123,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
> int error = 0;
>
> radswap = swp_to_radix_entry(swap);
> - index = find_swap_entry(&mapping->page_tree, radswap);
> + index = find_swap_entry(&mapping->pages, radswap);
> if (index == -1)
> return -EAGAIN; /* tell shmem_unuse we found nothing */
>
> @@ -1436,7 +1436,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
>
> hindex = round_down(index, HPAGE_PMD_NR);
> rcu_read_lock();
> - if (radix_tree_gang_lookup_slot(&mapping->page_tree, &results, &idx,
> + if (radix_tree_gang_lookup_slot(&mapping->pages, &results, &idx,
> hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
> rcu_read_unlock();
> return NULL;
> @@ -1549,14 +1549,14 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
> * Our caller will very soon move newpage out of swapcache, but it's
> * a nice clean interface for us to replace oldpage by newpage there.
> */
> - spin_lock_irq(&swap_mapping->tree_lock);
> + xa_lock_irq(&swap_mapping->pages);
> error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
> newpage);
> if (!error) {
> __inc_node_page_state(newpage, NR_FILE_PAGES);
> __dec_node_page_state(oldpage, NR_FILE_PAGES);
> }
> - spin_unlock_irq(&swap_mapping->tree_lock);
> + xa_unlock_irq(&swap_mapping->pages);
>
> if (unlikely(error)) {
> /*
> @@ -2622,7 +2622,7 @@ static void shmem_tag_pins(struct address_space *mapping)
> start = 0;
> rcu_read_lock();
>
> - radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
> + radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
> page = radix_tree_deref_slot(slot);
> if (!page || radix_tree_exception(page)) {
> if (radix_tree_deref_retry(page)) {
> @@ -2630,10 +2630,10 @@ static void shmem_tag_pins(struct address_space *mapping)
> continue;
> }
> } else if (page_count(page) - page_mapcount(page) > 1) {
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_tag_set(&mapping->page_tree, iter.index,
> + xa_lock_irq(&mapping->pages);
> + radix_tree_tag_set(&mapping->pages, iter.index,
> SHMEM_TAG_PINNED);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> }
>
> if (need_resched()) {
> @@ -2665,7 +2665,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)
>
> error = 0;
> for (scan = 0; scan <= LAST_SCAN; scan++) {
> - if (!radix_tree_tagged(&mapping->page_tree, SHMEM_TAG_PINNED))
> + if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
> break;
>
> if (!scan)
> @@ -2675,7 +2675,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)
>
> start = 0;
> rcu_read_lock();
> - radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter,
> + radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
> start, SHMEM_TAG_PINNED) {
>
> page = radix_tree_deref_slot(slot);
> @@ -2701,10 +2701,10 @@ static int shmem_wait_for_pins(struct address_space *mapping)
> error = -EBUSY;
> }
>
> - spin_lock_irq(&mapping->tree_lock);
> - radix_tree_tag_clear(&mapping->page_tree,
> + xa_lock_irq(&mapping->pages);
> + radix_tree_tag_clear(&mapping->pages,
> iter.index, SHMEM_TAG_PINNED);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> continue_resched:
> if (need_resched()) {
> slot = radix_tree_iter_resume(slot, &iter);
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 39ae7cfad90f..3f95e8fc4cb2 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -124,10 +124,10 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
> SetPageSwapCache(page);
>
> address_space = swap_address_space(entry);
> - spin_lock_irq(&address_space->tree_lock);
> + xa_lock_irq(&address_space->pages);
> for (i = 0; i < nr; i++) {
> set_page_private(page + i, entry.val + i);
> - error = radix_tree_insert(&address_space->page_tree,
> + error = radix_tree_insert(&address_space->pages,
> idx + i, page + i);
> if (unlikely(error))
> break;
> @@ -145,13 +145,13 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
> VM_BUG_ON(error == -EEXIST);
> set_page_private(page + i, 0UL);
> while (i--) {
> - radix_tree_delete(&address_space->page_tree, idx + i);
> + radix_tree_delete(&address_space->pages, idx + i);
> set_page_private(page + i, 0UL);
> }
> ClearPageSwapCache(page);
> page_ref_sub(page, nr);
> }
> - spin_unlock_irq(&address_space->tree_lock);
> + xa_unlock_irq(&address_space->pages);
>
> return error;
> }
> @@ -188,7 +188,7 @@ void __delete_from_swap_cache(struct page *page)
> address_space = swap_address_space(entry);
> idx = swp_offset(entry);
> for (i = 0; i < nr; i++) {
> - radix_tree_delete(&address_space->page_tree, idx + i);
> + radix_tree_delete(&address_space->pages, idx + i);
> set_page_private(page + i, 0);
> }
> ClearPageSwapCache(page);
> @@ -272,9 +272,9 @@ void delete_from_swap_cache(struct page *page)
> entry.val = page_private(page);
>
> address_space = swap_address_space(entry);
> - spin_lock_irq(&address_space->tree_lock);
> + xa_lock_irq(&address_space->pages);
> __delete_from_swap_cache(page);
> - spin_unlock_irq(&address_space->tree_lock);
> + xa_unlock_irq(&address_space->pages);
>
> put_swap_page(page, entry);
> page_ref_sub(page, hpage_nr_pages(page));
> @@ -612,12 +612,11 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> return -ENOMEM;
> for (i = 0; i < nr; i++) {
> space = spaces + i;
> - INIT_RADIX_TREE(&space->page_tree, GFP_ATOMIC|__GFP_NOWARN);
> + INIT_RADIX_TREE(&space->pages, GFP_ATOMIC|__GFP_NOWARN);
> atomic_set(&space->i_mmap_writable, 0);
> space->a_ops = &swap_aops;
> /* swap cache doesn't use writeback related tags */
> mapping_set_no_writeback_tags(space);
> - spin_lock_init(&space->tree_lock);
> }
> nr_swapper_spaces[type] = nr;
> rcu_assign_pointer(swapper_spaces[type], spaces);
> diff --git a/mm/truncate.c b/mm/truncate.c
> index c34e2fd4f583..295a33a06fac 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -36,11 +36,11 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
> struct radix_tree_node *node;
> void **slot;
>
> - if (!__radix_tree_lookup(&mapping->page_tree, index, &node, &slot))
> + if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
> return;
> if (*slot != entry)
> return;
> - __radix_tree_replace(&mapping->page_tree, node, slot, NULL,
> + __radix_tree_replace(&mapping->pages, node, slot, NULL,
> workingset_update_node);
> mapping->nrexceptional--;
> }
> @@ -48,9 +48,9 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
> static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
> void *entry)
> {
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> __clear_shadow_entry(mapping, index, entry);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> }
>
> /*
> @@ -79,7 +79,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
> dax = dax_mapping(mapping);
> lock = !dax && indices[j] < end;
> if (lock)
> - spin_lock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
>
> for (i = j; i < pagevec_count(pvec); i++) {
> struct page *page = pvec->pages[i];
> @@ -102,7 +102,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
> }
>
> if (lock)
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_unlock_irq(&mapping->pages);
> pvec->nr = j;
> }
>
> @@ -518,8 +518,8 @@ void truncate_inode_pages_final(struct address_space *mapping)
> * modification that does not see AS_EXITING is
> * completed before starting the final truncate.
> */
> - spin_lock_irq(&mapping->tree_lock);
> - spin_unlock_irq(&mapping->tree_lock);
> + xa_lock_irq(&mapping->pages);
> + xa_unlock_irq(&mapping->pages);
>
> truncate_inode_pages(mapping, 0);
> }
> @@ -627,13 +627,13 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
> if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
> return 0;
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> if (PageDirty(page))
> goto failed;
>
> BUG_ON(page_has_private(page));
> __delete_from_page_cache(page, NULL);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> if (mapping->a_ops->freepage)
> mapping->a_ops->freepage(page);
> @@ -641,7 +641,7 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
> put_page(page); /* pagecache ref */
> return 1;
> failed:
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> return 0;
> }
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 444749669187..93f4b4634431 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -656,7 +656,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> BUG_ON(!PageLocked(page));
> BUG_ON(mapping != page_mapping(page));
>
> - spin_lock_irqsave(&mapping->tree_lock, flags);
> + xa_lock_irqsave(&mapping->pages, flags);
> /*
> * The non racy check for a busy page.
> *
> @@ -680,7 +680,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> * load is not satisfied before that of page->_refcount.
> *
> * Note that if SetPageDirty is always performed via set_page_dirty,
> - * and thus under tree_lock, then this ordering is not required.
> + * and thus under xa_lock, then this ordering is not required.
> */
> if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
> refcount = 1 + HPAGE_PMD_NR;
> @@ -698,7 +698,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> swp_entry_t swap = { .val = page_private(page) };
> mem_cgroup_swapout(page, swap);
> __delete_from_swap_cache(page);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> put_swap_page(page, swap);
> } else {
> void (*freepage)(struct page *);
> @@ -719,13 +719,13 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> * only page cache pages found in these are zero pages
> * covering holes, and because we don't want to mix DAX
> * exceptional entries and shadow exceptional entries in the
> - * same page_tree.
> + * same address_space.
> */
> if (reclaimed && page_is_file_cache(page) &&
> !mapping_exiting(mapping) && !dax_mapping(mapping))
> shadow = workingset_eviction(mapping, page);
> __delete_from_page_cache(page, shadow);
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
>
> if (freepage != NULL)
> freepage(page);
> @@ -734,7 +734,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
> return 1;
>
> cannot_free:
> - spin_unlock_irqrestore(&mapping->tree_lock, flags);
> + xa_unlock_irqrestore(&mapping->pages, flags);
> return 0;
> }
>
> diff --git a/mm/workingset.c b/mm/workingset.c
> index b7d616a3bbbe..3cb3586181e6 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -202,7 +202,7 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
> * @mapping: address space the page was backing
> * @page: the page being evicted
> *
> - * Returns a shadow entry to be stored in @mapping->page_tree in place
> + * Returns a shadow entry to be stored in @mapping->pages in place
> * of the evicted @page so that a later refault can be detected.
> */
> void *workingset_eviction(struct address_space *mapping, struct page *page)
> @@ -348,7 +348,7 @@ void workingset_update_node(struct radix_tree_node *node)
> *
> * Avoid acquiring the list_lru lock when the nodes are
> * already where they should be. The list_empty() test is safe
> - * as node->private_list is protected by &mapping->tree_lock.
> + * as node->private_list is protected by mapping->pages.xa_lock.
> */
> if (node->count && node->count == node->exceptional) {
> if (list_empty(&node->private_list))
> @@ -366,7 +366,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
> unsigned long nodes;
> unsigned long cache;
>
> - /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
> + /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
> local_irq_disable();
> nodes = list_lru_shrink_count(&shadow_nodes, sc);
> local_irq_enable();
> @@ -419,21 +419,21 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
>
> /*
> * Page cache insertions and deletions synchroneously maintain
> - * the shadow node LRU under the mapping->tree_lock and the
> + * the shadow node LRU under the mapping->pages.xa_lock and the
> * lru_lock. Because the page cache tree is emptied before
> * the inode can be destroyed, holding the lru_lock pins any
> * address_space that has radix tree nodes on the LRU.
> *
> - * We can then safely transition to the mapping->tree_lock to
> + * We can then safely transition to the mapping->pages.xa_lock to
> * pin only the address_space of the particular node we want
> * to reclaim, take the node off-LRU, and drop the lru_lock.
> */
>
> node = container_of(item, struct radix_tree_node, private_list);
> - mapping = container_of(node->root, struct address_space, page_tree);
> + mapping = container_of(node->root, struct address_space, pages);
>
> /* Coming from the list, invert the lock order */
> - if (!spin_trylock(&mapping->tree_lock)) {
> + if (!xa_trylock(&mapping->pages)) {
> spin_unlock(lru_lock);
> ret = LRU_RETRY;
> goto out;
> @@ -468,11 +468,11 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
> if (WARN_ON_ONCE(node->exceptional))
> goto out_invalid;
> inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
> - __radix_tree_delete_node(&mapping->page_tree, node,
> + __radix_tree_delete_node(&mapping->pages, node,
> workingset_lookup_update(mapping));
>
> out_invalid:
> - spin_unlock(&mapping->tree_lock);
> + xa_unlock(&mapping->pages);
> ret = LRU_REMOVED_RETRY;
> out:
> local_irq_enable();
> @@ -487,7 +487,7 @@ static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
> {
> unsigned long ret;
>
> - /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
> + /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
> local_irq_disable();
> ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
> local_irq_enable();
> @@ -503,7 +503,7 @@ static struct shrinker workingset_shadow_shrinker = {
>
> /*
> * Our list_lru->lock is IRQ-safe as it nests inside the IRQ-safe
> - * mapping->tree_lock.
> + * mapping->pages.xa_lock.
> */
> static struct lock_class_key shadow_nodes_key;
>
Straightforward change and the doc comments are a nice cleanup. Big
patch though and I didn't go over it in detail, so:
Acked-by: Jeff Layton <[email protected]>
On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> This results in no change in structure size on 64-bit x86 as it fits in
> the padding between the gfp_t and the void *.
>
While the patch itself looks fine, we should take note that this will
likely increase the size of radix_tree_root on 32-bit arches.
I don't think that's necessarily a deal breaker, but there are a lot of
users of radix_tree_root. Many of those users have their own spinlock
for radix tree accesses, and could be trivially changed to use the
xa_lock. That would need to be done piecemeal though.
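To give a rough idea of what that piecemeal conversion would look like,
a user with its own lock might change along these lines (foo_cache and
its members are invented for the example, not taken from any real
user):

	#include <linux/radix-tree.h>
	#include <linux/xarray.h>

	/*
	 * Hypothetical user that used to carry its own "spinlock_t lock"
	 * next to the tree; with the lock embedded in the root, the
	 * private lock field simply goes away.
	 */
	struct foo_cache {
		struct radix_tree_root entries;
	};

	static void *foo_cache_lookup(struct foo_cache *cache, unsigned long index)
	{
		void *entry;

		xa_lock(&cache->entries);	/* was: spin_lock(&cache->lock) */
		entry = radix_tree_lookup(&cache->entries, index);
		xa_unlock(&cache->entries);	/* was: spin_unlock(&cache->lock) */
		return entry;
	}
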
A less disruptive idea might be to just create some new struct that's a
spinlock + radix_tree_root, and then use that going forward in the
xarray conversion. That might be better anyway if you're considering a
more phased approach for getting this merged.
> Initialising the spinlock requires a name for the benefit of lockdep,
> so RADIX_TREE_INIT() now needs to know the name of the radix tree it's
> initialising, and so do IDR_INIT() and IDA_INIT().
>
> Also add the xa_lock() and xa_unlock() family of wrappers to make it
> easier to use the lock. If we could rely on -fplan9-extensions in
> the compiler, we could avoid all of this syntactic sugar, but that
> wasn't added until gcc 4.6.
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> ---
> fs/f2fs/gc.c | 2 +-
> include/linux/idr.h | 19 ++++++++++---------
> include/linux/radix-tree.h | 7 +++++--
> include/linux/xarray.h | 24 ++++++++++++++++++++++++
> kernel/pid.c | 2 +-
> tools/include/linux/spinlock.h | 1 +
> 6 files changed, 42 insertions(+), 13 deletions(-)
> create mode 100644 include/linux/xarray.h
>
> diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
> index aa720cc44509..7aa15134180e 100644
> --- a/fs/f2fs/gc.c
> +++ b/fs/f2fs/gc.c
> @@ -1006,7 +1006,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
> unsigned int init_segno = segno;
> struct gc_inode_list gc_list = {
> .ilist = LIST_HEAD_INIT(gc_list.ilist),
> - .iroot = RADIX_TREE_INIT(GFP_NOFS),
> + .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
> };
>
> trace_f2fs_gc_begin(sbi->sb, sync, background,
> diff --git a/include/linux/idr.h b/include/linux/idr.h
> index 913c335054f0..e856f4e0ab35 100644
> --- a/include/linux/idr.h
> +++ b/include/linux/idr.h
> @@ -32,27 +32,28 @@ struct idr {
> #define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
> (1 << (ROOT_TAG_SHIFT + IDR_FREE)))
>
> -#define IDR_INIT_BASE(base) { \
> - .idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER), \
> +#define IDR_INIT_BASE(name, base) { \
> + .idr_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER), \
> .idr_base = (base), \
> .idr_next = 0, \
> }
>
> /**
> * IDR_INIT() - Initialise an IDR.
> + * @name: Name of IDR.
> *
> * A freshly-initialised IDR contains no IDs.
> */
> -#define IDR_INIT IDR_INIT_BASE(0)
> +#define IDR_INIT(name) IDR_INIT_BASE(name, 0)
>
> /**
> - * DEFINE_IDR() - Define a statically-allocated IDR
> - * @name: Name of IDR
> + * DEFINE_IDR() - Define a statically-allocated IDR.
> + * @name: Name of IDR.
> *
> * An IDR defined using this macro is ready for use with no additional
> * initialisation required. It contains no IDs.
> */
> -#define DEFINE_IDR(name) struct idr name = IDR_INIT
> +#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)
>
> /**
> * idr_get_cursor - Return the current position of the cyclic allocator
> @@ -219,10 +220,10 @@ struct ida {
> struct radix_tree_root ida_rt;
> };
>
> -#define IDA_INIT { \
> - .ida_rt = RADIX_TREE_INIT(IDR_RT_MARKER | GFP_NOWAIT), \
> +#define IDA_INIT(name) { \
> + .ida_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER | GFP_NOWAIT), \
> }
> -#define DEFINE_IDA(name) struct ida name = IDA_INIT
> +#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)
>
> int ida_pre_get(struct ida *ida, gfp_t gfp_mask);
> int ida_get_new_above(struct ida *ida, int starting_id, int *p_id);
> diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
> index 6c4e2e716dac..34149e8b5f73 100644
> --- a/include/linux/radix-tree.h
> +++ b/include/linux/radix-tree.h
> @@ -110,20 +110,23 @@ struct radix_tree_node {
> #define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)
>
> struct radix_tree_root {
> + spinlock_t xa_lock;
> gfp_t gfp_mask;
> struct radix_tree_node __rcu *rnode;
> };
>
> -#define RADIX_TREE_INIT(mask) { \
> +#define RADIX_TREE_INIT(name, mask) { \
> + .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
> .gfp_mask = (mask), \
> .rnode = NULL, \
> }
>
> #define RADIX_TREE(name, mask) \
> - struct radix_tree_root name = RADIX_TREE_INIT(mask)
> + struct radix_tree_root name = RADIX_TREE_INIT(name, mask)
>
> #define INIT_RADIX_TREE(root, mask) \
> do { \
> + spin_lock_init(&(root)->xa_lock); \
> (root)->gfp_mask = (mask); \
> (root)->rnode = NULL; \
> } while (0)
> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
> new file mode 100644
> index 000000000000..2dfc8006fe64
> --- /dev/null
> +++ b/include/linux/xarray.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#ifndef _LINUX_XARRAY_H
> +#define _LINUX_XARRAY_H
> +/*
> + * eXtensible Arrays
> + * Copyright (c) 2017 Microsoft Corporation
> + * Author: Matthew Wilcox <[email protected]>
> + */
> +
> +#include <linux/spinlock.h>
> +
> +#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
> +#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
> +#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
> +#define xa_lock_bh(xa) spin_lock_bh(&(xa)->xa_lock)
> +#define xa_unlock_bh(xa) spin_unlock_bh(&(xa)->xa_lock)
> +#define xa_lock_irq(xa) spin_lock_irq(&(xa)->xa_lock)
> +#define xa_unlock_irq(xa) spin_unlock_irq(&(xa)->xa_lock)
> +#define xa_lock_irqsave(xa, flags) \
> + spin_lock_irqsave(&(xa)->xa_lock, flags)
> +#define xa_unlock_irqrestore(xa, flags) \
> + spin_unlock_irqrestore(&(xa)->xa_lock, flags)
> +
> +#endif /* _LINUX_XARRAY_H */
> diff --git a/kernel/pid.c b/kernel/pid.c
> index ed6c343fe50d..157fe4b19971 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -70,7 +70,7 @@ int pid_max_max = PID_MAX_LIMIT;
> */
> struct pid_namespace init_pid_ns = {
> .kref = KREF_INIT(2),
> - .idr = IDR_INIT,
> + .idr = IDR_INIT(init_pid_ns.idr),
> .pid_allocated = PIDNS_ADDING,
> .level = 0,
> .child_reaper = &init_task,
> diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
> index 4ed569fcb139..b21b586b9854 100644
> --- a/tools/include/linux/spinlock.h
> +++ b/tools/include/linux/spinlock.h
> @@ -7,6 +7,7 @@
>
> #define spinlock_t pthread_mutex_t
> #define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
> +#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
>
> #define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
> #define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
--
Jeff Layton <[email protected]>
On Sat, Mar 03, 2018 at 09:55:22AM -0500, Jeff Layton wrote:
> On Mon, 2018-02-19 at 11:45 -0800, Matthew Wilcox wrote:
> > From: Matthew Wilcox <[email protected]>
> >
> > This results in no change in structure size on 64-bit x86 as it fits in
> > the padding between the gfp_t and the void *.
> >
>
> While the patch itself looks fine, we should take note that this will
> likely increase the size of radix_tree_root on 32-bit arches.
>
> I don't think that's necessarily a deal breaker, but there are a lot of
> users of radix_tree_root. Many of those users have their own spinlock
> for radix tree accesses, and could be trivially changed to use the
> xa_lock. That would need to be done piecemeal though.
>
> A less disruptive idea might be to just create some new struct that's a
> spinlock + radix_tree_root, and then use that going forward in the
> xarray conversion. That might be better anyway if you're considering a
> more phased approach for getting this merged.
Well, it's a choice. If we do:
struct xarray {
	spinlock_t xa_lock;
	struct radix_tree_root root;
};
then the padding on 64-bit turns that into a 24-byte struct. So the
choice is: (a) spend the extra 4 bytes on 32-bit and have the struct
look the way we want it to from the beginning, or (b) spend the extra
8 bytes on 64-bit and have to redo the struct accessors after the
conversions are complete. I chose option (a), but reasonable people
can disagree on that choice.
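For anyone who wants to check the arithmetic, here's a throwaway
userspace sketch of the two layouts (plain fixed-width types stand in
for spinlock_t and gfp_t, assuming a non-debug 4-byte spinlock; the
asserts only hold on a 64-bit build):

	#include <assert.h>
	#include <stdint.h>

	struct root_a {			/* option (a): lock embedded in the root */
		uint32_t xa_lock;	/* spinlock_t */
		uint32_t gfp_mask;	/* gfp_t */
		void *rnode;
	};

	struct root_plain {		/* today's radix_tree_root */
		uint32_t gfp_mask;
		void *rnode;
	};

	struct xarray_b {		/* option (b): wrapper around the root */
		uint32_t xa_lock;
		struct root_plain root;
	};

	int main(void)
	{
		/* (a) reuses the existing padding: no growth on 64-bit. */
		assert(sizeof(struct root_a) == sizeof(struct root_plain));
		/* (b) costs 8 bytes on 64-bit: 4 for the lock + 4 of padding. */
		assert(sizeof(struct xarray_b) == sizeof(struct root_plain) + 8);
		return 0;
	}
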
On Sat, Mar 03, 2018 at 07:44:36AM -0500, Jeff Layton wrote:
> > - return root->gfp_mask & __GFP_BITS_MASK;
> > + return root->gfp_mask & ((__GFP_BITS_MASK >> 4) << 4);
>
> Maybe phrase this in terms of a constant here? Would something like
> GFP_ZONEMASK be more appropriate?
Yeah, that's a better idea. This is only interim; once all radix tree users
are converted to the xarray, we stop storing GFP flags here. So I hadn't
put much thought into it, but I'll change it.
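For concreteness, the change might end up looking roughly like this
(ROOT_GFP_MASK is a name invented for this sketch; GFP_ZONEMASK covers
exactly the low four bits that the root now reserves for its flags):

	#define ROOT_GFP_MASK	(__GFP_BITS_MASK & ~GFP_ZONEMASK)

	static inline gfp_t root_gfp_mask(struct radix_tree_root *root)
	{
		return root->gfp_mask & ROOT_GFP_MASK;
	}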