2018-01-17 21:13:19

by Matthew Wilcox

Subject: [PATCH v6 00/99] XArray version 6

From: Matthew Wilcox <[email protected]>

This version of the XArray has no known bugs. I have converted the
radix tree test suite entirely over to the XArray and fixed all the bugs
it uncovered. There are additional tests in the test suite for
the XArray, so I now claim the XArray has better test coverage than the
Radix Tree did. Of course, that is not the same thing as fewer bugs,
but it now stands up to the tender embraces of Trinity without crashing.

You can get this version from my git tree here:
http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/xarray-2018-01-09
which includes a number of other patches that are at least tangentially
related to this patch set.

Most of the work I've done recently has been converting additional users
from the radix tree to the XArray. That's going pretty well; still 24
radix tree users left to convert. It's been worth doing because I've
spotted several common patterns that have led to changes (the lock_type,
reserve/release) and some common patterns that I'll add support for
later (chaining multiple entries from a single index, wanting to use
64-bit indices on 32-bit machines, having an array of XArrays, various
workarounds for not having range entries yet).
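
For the curious, the reserve/release pattern mentioned above looks
roughly like this (a sketch only; 'map', 'entry' and 'still_wanted' are
stand-ins, and real users appear in the gmap and null_blk patches):

        /* Preallocate the slot while we can still sleep ... */
        err = xa_reserve(&map->array, index, GFP_KERNEL);
        if (err)
                return err;

        spin_lock(&map->lock);
        if (still_wanted)
                /* ... so this store cannot fail for lack of memory */
                xa_store(&map->array, index, entry,
                                GFP_NOWAIT | __GFP_NOFAIL);
        else
                /* Give the reserved slot back */
                xa_release(&map->array, index);
        spin_unlock(&map->lock);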

As far as line count goes, for the whole git tree, we're at:
212 files changed, 7764 insertions(+), 7002 deletions(-)
with another 1376 lines to delete from radix-tree.[ch]. That doesn't take
into account the 371 lines of xarray.rst, the 587 lines of xarray-test.c,
and the fact that almost half of lib/xarray.c and include/linux/xarray.h
is documentation.

Changes since version 5:

- Rebased to 4.15-rc8

API changes:
- Renamed __xa_init() to xa_init_flags().
- Added DEFINE_XARRAY_FLAGS().
- Renamed xa_ctx to xa_lock_type; it is now stored in the XA_FLAGS, with
a separate locking class for each type so that lockdep doesn't emit
spurious warnings. This also reduces the amount of boilerplate.
- Combined __xa_store_bh, __xa_store_irq and __xa_store into __xa_store().
- Ditto for __xa_cmpxchg().
- Renamed xa_store_empty() to xa_insert().
- Added __xa_insert().
- Added xa_reserve() and xa_release().
- Renamed XA_NO_TAG to XA_PRESENT.
- Combined xa_get_entries(), xa_get_tagged() and xa_get_maybe_tag()
into xa_extract().
- Added 'filter' argument to xa_find(), xa_find_after() and xa_for_each()
to match xa_extract() and provide the functionality that would
have otherwise had to be added in the form of xa_find_tag(),
xa_find_tag_after() and xa_for_each_tag() (see the sketch after this list).
- Replaced workingset_lookup_update() with mapping_set_update().
- Renamed page_cache_tree_delete() to page_cache_delete().
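
To sketch the new 'filter' argument (do_something() is a stand-in here;
pass a tag instead of XA_PRESENT to restrict the walk to tagged entries):

        unsigned long index = 0;
        void *entry;

        /* Find the first present entry at or after index */
        entry = xa_find(&xa, &index, ULONG_MAX, XA_PRESENT);

        /* Walk every present entry */
        xa_for_each(&xa, entry, index, ULONG_MAX, XA_PRESENT) {
                do_something(entry);
        }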

New xarray users:
- Converted SuperH interrupt controller radix tree to XArray.
- Converted blk-cgroup radix tree to XArray.
- Converted blk-ioc radix tree to XArray.
- Converted i915 handles_vma radix tree to XArray.
- Converted s390 gmap radix trees to XArray.
- Converted hwspinlock to XArray.
- Converted btrfs fs_roots to XArray.
- Converted btrfs reada_zones to XArray.
- Converted btrfs reada_extents to XArray.
- Converted btrfs reada_tree to XArray.
- Converted btrfs buffer_radix to XArray.
- Converted btrfs delayed_nodes to XArray.
- Converted btrfs name_cache to XArray.
- Converted f2fs pids radix tree to XArray.
- Converted f2fs ino_root radix tree to XArray.
- Converted f2fs extent_tree to XArray.
- Converted f2fs gclist radix tree to XArray.
- Converted dma-debug active cacheline radix tree to XArray.
- Converted Xen pvcalls-back socketpass_mappings to XArray.
- Converted net/qrtr radix tree to XArray.
- Converted null_blk radix trees to XArray.

Documentation:
- Added a bit more internals documentation.
- Rewrote xa_init_flags documentation.
- Added the __xa_ functions to the locking table.
- Rewrote the section on using the __xa_ functions.

Internal changes:
- Freed up the bottom four bits of the xa_flags, since these are not
valid GFP flags to pass to kmem_cache_alloc().
- Moved the XA_FLAGS_TRACK_FREE bit to the bottom bits of the flags to leave
space for more tags (later).
- Fixed multiple bugs in xas_find() and xas_find_tag().
- Fixed a bug in shrinking the XArray (and added a test case that
exercises it).
- Fixed a bug in erasing multi-index entries.
- Fixed a compile warning with CONFIG_RADIX_TREE_MULTIORDER=n.
- Added an xas_update() helper.
- Used ->array to track an xa_node's state through its lifecycle
(allocated -> rcu_free -> actually free).
- Made XA_BUG_ON dump the entire tree while XA_NODE_BUG_ON dumps only the
node that appears suspect.
- Fixed debugging printks to use %px and pr_cont/pr_info etc.
- Renamed some internal tag functions.
- Moved xa_track_free() from xarray.h to xarray.c.

Test suite:
- Added new tests for xas_find() and xas_find_tag().
- Added new tests for the update_node functionality.
- Converted the radix tree test suite to the xarray API.

Matthew Wilcox (99):
xarray: Add the xa_lock to the radix_tree_root
page cache: Use xa_lock
xarray: Replace exceptional entries
xarray: Change definition of sibling entries
xarray: Add definition of struct xarray
xarray: Define struct xa_node
xarray: Add documentation
xarray: Add xa_load
xarray: Add xa_get_tag, xa_set_tag and xa_clear_tag
xarray: Add xa_store
xarray: Add xa_cmpxchg and xa_insert
xarray: Add xa_for_each
xarray: Add xa_extract
xarray: Add xa_destroy
xarray: Add xas_next and xas_prev
xarray: Add xas_create_range
xarray: Add MAINTAINERS entry
xarray: Add ability to store errno values
idr: Convert to XArray
ida: Convert to XArray
xarray: Add xa_reserve and xa_release
page cache: Convert hole search to XArray
page cache: Add page_cache_range_empty function
page cache: Add and replace pages using the XArray
page cache: Convert page deletion to XArray
page cache: Convert page cache lookups to XArray
page cache: Convert delete_batch to XArray
page cache: Remove stray radix comment
page cache: Convert filemap_range_has_page to XArray
mm: Convert page-writeback to XArray
mm: Convert workingset to XArray
mm: Convert truncate to XArray
mm: Convert add_to_swap_cache to XArray
mm: Convert delete_from_swap_cache to XArray
mm: Convert __do_page_cache_readahead to XArray
mm: Convert page migration to XArray
mm: Convert huge_memory to XArray
mm: Convert collapse_shmem to XArray
mm: Convert khugepaged_scan_shmem to XArray
pagevec: Use xa_tag_t
shmem: Convert replace to XArray
shmem: Convert shmem_confirm_swap to XArray
shmem: Convert find_swap_entry to XArray
shmem: Convert shmem_tag_pins to XArray
shmem: Convert shmem_wait_for_pins to XArray
shmem: Convert shmem_add_to_page_cache to XArray
shmem: Convert shmem_alloc_hugepage to XArray
shmem: Convert shmem_free_swap to XArray
shmem: Convert shmem_partial_swap_usage to XArray
shmem: Comment fixups
btrfs: Convert page cache to XArray
fs: Convert buffer to XArray
fs: Convert writeback to XArray
nilfs2: Convert to XArray
f2fs: Convert to XArray
lustre: Convert to XArray
dax: Convert dax_unlock_mapping_entry to XArray
dax: Convert lock_slot to XArray
dax: More XArray conversion
dax: Convert __dax_invalidate_mapping_entry to XArray
dax: Convert dax_writeback_one to XArray
dax: Convert dax_insert_pfn_mkwrite to XArray
dax: Convert dax_insert_mapping_entry to XArray
dax: Convert grab_mapping_entry to XArray
dax: Fix sparse warning
page cache: Finish XArray conversion
mm: Convert cgroup writeback to XArray
vmalloc: Convert to XArray
brd: Convert to XArray
xfs: Convert m_perag_tree to XArray
xfs: Convert pag_ici_root to XArray
xfs: Convert xfs dquot to XArray
xfs: Convert mru cache to XArray
usb: Convert xhci-mem to XArray
md: Convert raid5-cache to XArray
irqdomain: Convert to XArray
fscache: Convert to XArray
sh: intc: Convert to XArray
blk-cgroup: Convert to XArray
blk-ioc: Convert to XArray
i915: Convert handles_vma to XArray
s390: Convert gmap to XArray
hwspinlock: Convert to XArray
btrfs: Convert fs_roots_radix to XArray
btrfs: Remove unused spinlock
btrfs: Convert reada_zones to XArray
btrfs: Convert reada_extents to XArray
btrfs: Convert reada_tree to XArray
btrfs: Convert buffer_radix to XArray
btrfs: Convert delayed_nodes_tree to XArray
btrfs: Convert name_cache to XArray
f2fs: Convert pids radix tree to XArray
f2fs: Convert ino_root to XArray
f2fs: Convert extent_tree_root to XArray
f2fs: Convert gclist.iroot to XArray
dma-debug: Convert to XArray
xen: Convert pvcalls-back to XArray
qrtr: Convert to XArray
null_blk: Convert to XArray

Documentation/cgroup-v1/memory.txt | 2 +-
Documentation/core-api/index.rst | 1 +
Documentation/core-api/xarray.rst | 371 +++++
Documentation/vm/page_migration | 14 +-
MAINTAINERS | 12 +
arch/arm/include/asm/cacheflush.h | 6 +-
arch/nios2/include/asm/cacheflush.h | 6 +-
arch/parisc/include/asm/cacheflush.h | 6 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +-
arch/powerpc/include/asm/nohash/64/pgtable.h | 4 +-
arch/s390/include/asm/gmap.h | 12 +-
arch/s390/mm/gmap.c | 133 +-
block/bfq-cgroup.c | 4 +-
block/blk-cgroup.c | 52 +-
block/blk-ioc.c | 13 +-
block/cfq-iosched.c | 4 +-
drivers/block/brd.c | 93 +-
drivers/block/null_blk.c | 87 +-
drivers/gpu/drm/i915/i915_gem.c | 19 +-
drivers/gpu/drm/i915/i915_gem_context.c | 12 +-
drivers/gpu/drm/i915/i915_gem_context.h | 4 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 +-
drivers/gpu/drm/i915/selftests/mock_context.c | 2 +-
drivers/hwspinlock/hwspinlock_core.c | 151 +-
drivers/md/raid5-cache.c | 119 +-
drivers/sh/intc/core.c | 9 +-
drivers/sh/intc/internals.h | 5 +-
drivers/sh/intc/virq.c | 72 +-
drivers/staging/lustre/lustre/llite/glimpse.c | 12 +-
drivers/staging/lustre/lustre/mdc/mdc_request.c | 16 +-
drivers/usb/host/xhci-mem.c | 68 +-
drivers/usb/host/xhci.h | 6 +-
drivers/xen/pvcalls-back.c | 51 +-
fs/afs/write.c | 9 +-
fs/btrfs/btrfs_inode.h | 7 +-
fs/btrfs/compression.c | 6 +-
fs/btrfs/ctree.h | 31 +-
fs/btrfs/delayed-inode.c | 65 +-
fs/btrfs/disk-io.c | 73 +-
fs/btrfs/extent_io.c | 106 +-
fs/btrfs/inode.c | 72 +-
fs/btrfs/reada.c | 205 ++-
fs/btrfs/send.c | 19 +-
fs/btrfs/tests/btrfs-tests.c | 29 +-
fs/btrfs/transaction.c | 87 +-
fs/btrfs/volumes.c | 5 +-
fs/btrfs/volumes.h | 5 +-
fs/buffer.c | 25 +-
fs/cifs/file.c | 9 +-
fs/dax.c | 383 ++---
fs/ext4/inode.c | 2 +-
fs/f2fs/checkpoint.c | 85 +-
fs/f2fs/data.c | 9 +-
fs/f2fs/dir.c | 5 +-
fs/f2fs/extent_cache.c | 59 +-
fs/f2fs/f2fs.h | 6 +-
fs/f2fs/gc.c | 14 +-
fs/f2fs/gc.h | 2 +-
fs/f2fs/inline.c | 6 +-
fs/f2fs/node.c | 10 +-
fs/f2fs/super.c | 2 -
fs/f2fs/trace.c | 60 +-
fs/f2fs/trace.h | 2 -
fs/fs-writeback.c | 37 +-
fs/fscache/cookie.c | 6 +-
fs/fscache/internal.h | 2 +-
fs/fscache/object.c | 2 +-
fs/fscache/page.c | 152 +-
fs/fscache/stats.c | 6 +-
fs/gfs2/aops.c | 2 +-
fs/inode.c | 11 +-
fs/nfs/blocklayout/blocklayout.c | 2 +-
fs/nilfs2/btnode.c | 41 +-
fs/nilfs2/page.c | 78 +-
fs/proc/task_mmu.c | 2 +-
fs/xfs/libxfs/xfs_sb.c | 11 +-
fs/xfs/libxfs/xfs_sb.h | 2 +-
fs/xfs/xfs_dquot.c | 38 +-
fs/xfs/xfs_icache.c | 146 +-
fs/xfs/xfs_icache.h | 11 +-
fs/xfs/xfs_inode.c | 24 +-
fs/xfs/xfs_mount.c | 22 +-
fs/xfs/xfs_mount.h | 6 +-
fs/xfs/xfs_mru_cache.c | 72 +-
fs/xfs/xfs_qm.c | 36 +-
fs/xfs/xfs_qm.h | 18 +-
include/linux/backing-dev-defs.h | 2 +-
include/linux/backing-dev.h | 14 +-
include/linux/blk-cgroup.h | 5 +-
include/linux/fs.h | 68 +-
include/linux/fscache.h | 8 +-
include/linux/idr.h | 168 ++-
include/linux/iocontext.h | 6 +-
include/linux/irqdomain.h | 10 +-
include/linux/mm.h | 17 +-
include/linux/pagemap.h | 16 +-
include/linux/pagevec.h | 8 +-
include/linux/radix-tree.h | 97 +-
include/linux/swap.h | 22 +-
include/linux/swapops.h | 19 +-
include/linux/xarray.h | 1052 ++++++++++++++
kernel/irq/irqdomain.c | 39 +-
kernel/pid.c | 2 +-
lib/Makefile | 2 +-
lib/dma-debug.c | 105 +-
lib/idr.c | 633 ++++----
lib/radix-tree.c | 399 ++----
lib/xarray.c | 1753 +++++++++++++++++++++++
mm/backing-dev.c | 22 +-
mm/filemap.c | 766 ++++------
mm/huge_memory.c | 23 +-
mm/khugepaged.c | 182 +--
mm/madvise.c | 2 +-
mm/memcontrol.c | 6 +-
mm/memory.c | 16 +-
mm/migrate.c | 41 +-
mm/mincore.c | 2 +-
mm/page-writeback.c | 78 +-
mm/readahead.c | 10 +-
mm/rmap.c | 4 +-
mm/shmem.c | 312 ++--
mm/swap.c | 6 +-
mm/swap_state.c | 124 +-
mm/truncate.c | 45 +-
mm/vmalloc.c | 39 +-
mm/vmscan.c | 14 +-
mm/workingset.c | 89 +-
net/qrtr/qrtr.c | 21 +-
tools/include/linux/spinlock.h | 12 +-
tools/testing/radix-tree/.gitignore | 2 +
tools/testing/radix-tree/Makefile | 15 +-
tools/testing/radix-tree/idr-test.c | 40 +-
tools/testing/radix-tree/linux/bug.h | 1 +
tools/testing/radix-tree/linux/kconfig.h | 1 +
tools/testing/radix-tree/linux/kernel.h | 5 +
tools/testing/radix-tree/linux/lockdep.h | 11 +
tools/testing/radix-tree/linux/rcupdate.h | 2 +
tools/testing/radix-tree/linux/xarray.h | 3 +
tools/testing/radix-tree/multiorder.c | 83 +-
tools/testing/radix-tree/regression1.c | 68 +-
tools/testing/radix-tree/test.c | 53 +-
tools/testing/radix-tree/test.h | 6 +
tools/testing/radix-tree/xarray-test.c | 587 ++++++++
143 files changed, 6647 insertions(+), 3990 deletions(-)
create mode 100644 Documentation/core-api/xarray.rst
create mode 100644 include/linux/xarray.h
create mode 100644 lib/xarray.c
create mode 100644 tools/testing/radix-tree/linux/kconfig.h
create mode 100644 tools/testing/radix-tree/linux/lockdep.h
create mode 100644 tools/testing/radix-tree/linux/xarray.h
create mode 100644 tools/testing/radix-tree/xarray-test.c

--
2.15.1



2018-01-17 20:24:03

by Matthew Wilcox

Subject: [PATCH v6 01/99] xarray: Add the xa_lock to the radix_tree_root

From: Matthew Wilcox <[email protected]>

Adding the spinlock to the radix_tree_root results in no change in
structure size on 64-bit x86, as it fits in the padding between the
gfp_t and the void *.

Initialising the spinlock requires a name for the benefit of lockdep,
so RADIX_TREE_INIT() now needs to know the name of the radix tree it's
initialising, and so do IDR_INIT() and IDA_INIT().

Also add the xa_lock() and xa_unlock() family of wrappers to make it
easier to use the lock. If we could rely on -fplan9-extensions in
the compiler, we could avoid all of this syntactic sugar, but that
extension wasn't added until gcc 4.6.
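
To illustrate (a sketch, not part of this patch; 'mapping', 'index' and
'page' stand in for real page-cache variables), the radix tree embedded
in struct address_space can now be locked through these wrappers:

        /* mapping->page_tree is a struct radix_tree_root */
        xa_lock_irq(&mapping->page_tree);
        error = radix_tree_insert(&mapping->page_tree, index, page);
        xa_unlock_irq(&mapping->page_tree);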

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/gc.c | 2 +-
include/linux/idr.h | 12 ++++++------
include/linux/radix-tree.h | 7 +++++--
include/linux/xarray.h | 24 ++++++++++++++++++++++++
kernel/pid.c | 2 +-
tools/include/linux/spinlock.h | 1 +
6 files changed, 38 insertions(+), 10 deletions(-)
create mode 100644 include/linux/xarray.h

diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index d844dcb80570..aac1e02f75df 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -991,7 +991,7 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
unsigned int init_segno = segno;
struct gc_inode_list gc_list = {
.ilist = LIST_HEAD_INIT(gc_list.ilist),
- .iroot = RADIX_TREE_INIT(GFP_NOFS),
+ .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
};

trace_f2fs_gc_begin(sbi->sb, sync, background,
diff --git a/include/linux/idr.h b/include/linux/idr.h
index ed1459d36b9d..11eea38b9629 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -32,11 +32,11 @@ struct idr {
#define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
(1 << (ROOT_TAG_SHIFT + IDR_FREE)))

-#define IDR_INIT \
+#define IDR_INIT(name) \
{ \
- .idr_rt = RADIX_TREE_INIT(IDR_RT_MARKER) \
+ .idr_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER) \
}
-#define DEFINE_IDR(name) struct idr name = IDR_INIT
+#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)

/**
* idr_get_cursor - Return the current position of the cyclic allocator
@@ -195,10 +195,10 @@ struct ida {
struct radix_tree_root ida_rt;
};

-#define IDA_INIT { \
- .ida_rt = RADIX_TREE_INIT(IDR_RT_MARKER | GFP_NOWAIT), \
+#define IDA_INIT(name) { \
+ .ida_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER | GFP_NOWAIT), \
}
-#define DEFINE_IDA(name) struct ida name = IDA_INIT
+#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)

int ida_pre_get(struct ida *ida, gfp_t gfp_mask);
int ida_get_new_above(struct ida *ida, int starting_id, int *p_id);
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 6c4e2e716dac..34149e8b5f73 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -110,20 +110,23 @@ struct radix_tree_node {
#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)

struct radix_tree_root {
+ spinlock_t xa_lock;
gfp_t gfp_mask;
struct radix_tree_node __rcu *rnode;
};

-#define RADIX_TREE_INIT(mask) { \
+#define RADIX_TREE_INIT(name, mask) { \
+ .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
.gfp_mask = (mask), \
.rnode = NULL, \
}

#define RADIX_TREE(name, mask) \
- struct radix_tree_root name = RADIX_TREE_INIT(mask)
+ struct radix_tree_root name = RADIX_TREE_INIT(name, mask)

#define INIT_RADIX_TREE(root, mask) \
do { \
+ spin_lock_init(&(root)->xa_lock); \
(root)->gfp_mask = (mask); \
(root)->rnode = NULL; \
} while (0)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
new file mode 100644
index 000000000000..2dfc8006fe64
--- /dev/null
+++ b/include/linux/xarray.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#ifndef _LINUX_XARRAY_H
+#define _LINUX_XARRAY_H
+/*
+ * eXtensible Arrays
+ * Copyright (c) 2017 Microsoft Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ */
+
+#include <linux/spinlock.h>
+
+#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
+#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
+#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
+#define xa_lock_bh(xa) spin_lock_bh(&(xa)->xa_lock)
+#define xa_unlock_bh(xa) spin_unlock_bh(&(xa)->xa_lock)
+#define xa_lock_irq(xa) spin_lock_irq(&(xa)->xa_lock)
+#define xa_unlock_irq(xa) spin_unlock_irq(&(xa)->xa_lock)
+#define xa_lock_irqsave(xa, flags) \
+ spin_lock_irqsave(&(xa)->xa_lock, flags)
+#define xa_unlock_irqrestore(xa, flags) \
+ spin_unlock_irqrestore(&(xa)->xa_lock, flags)
+
+#endif /* _LINUX_XARRAY_H */
diff --git a/kernel/pid.c b/kernel/pid.c
index 1e8bb6550ec4..fa8bbd16c0d6 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -58,7 +58,7 @@ int pid_max_max = PID_MAX_LIMIT;
*/
struct pid_namespace init_pid_ns = {
.kref = KREF_INIT(2),
- .idr = IDR_INIT,
+ .idr = IDR_INIT(init_pid_ns.idr),
.pid_allocated = PIDNS_ADDING,
.level = 0,
.child_reaper = &init_task,
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 4ed569fcb139..b21b586b9854 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -7,6 +7,7 @@

#define spinlock_t pthread_mutex_t
#define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
+#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER

#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
--
2.15.1


2018-01-17 20:24:46

by Matthew Wilcox

Subject: [PATCH v6 82/99] s390: Convert gmap to XArray

From: Matthew Wilcox <[email protected]>

The three radix trees in gmap are all converted to the XArray.
This is another case where holding multiple locks mandates the use
of the xa_reserve() API. The gmap_insert_rmap() function is
considerably simplified by using the advanced API;
gmap_radix_tree_free() turns out to just be xa_destroy(), and
gmap_rmap_radix_tree_free() is a nice little iteration followed
by xa_destroy().
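
The reserve dance, in outline (condensed from __gmap_link() in the diff
below): xa_reserve() preallocates the slot while a GFP_KERNEL allocation
is still permitted, so the later store under the spinlock cannot fail:

        rc = xa_reserve(&gmap->host_to_guest, vmaddr >> PMD_SHIFT,
                        GFP_KERNEL);
        if (rc)
                return rc;

        spin_lock(&gmap->guest_table_lock);
        rc = xa_err(xa_store(&gmap->host_to_guest, vmaddr >> PMD_SHIFT,
                        table, GFP_NOWAIT | __GFP_NOFAIL));
        spin_unlock(&gmap->guest_table_lock);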

Signed-off-by: Matthew Wilcox <[email protected]>
---
arch/s390/include/asm/gmap.h | 12 ++--
arch/s390/mm/gmap.c | 133 +++++++++++++++----------------------------
2 files changed, 51 insertions(+), 94 deletions(-)

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index e07cce88dfb0..7695a01d19d7 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -14,14 +14,14 @@
* @list: list head for the mm->context gmap list
* @crst_list: list of all crst tables used in the guest address space
* @mm: pointer to the parent mm_struct
- * @guest_to_host: radix tree with guest to host address translation
- * @host_to_guest: radix tree with pointer to segment table entries
+ * @guest_to_host: guest to host address translation
+ * @host_to_guest: pointers to segment table entries
* @guest_table_lock: spinlock to protect all entries in the guest page table
* @ref_count: reference counter for the gmap structure
* @table: pointer to the page directory
* @asce: address space control element for gmap page table
* @pfault_enabled: defines if pfaults are applicable for the guest
- * @host_to_rmap: radix tree with gmap_rmap lists
+ * @host_to_rmap: gmap_rmap lists
* @children: list of shadow gmap structures
* @pt_list: list of all page tables used in the shadow guest address space
* @shadow_lock: spinlock to protect the shadow gmap list
@@ -35,8 +35,8 @@ struct gmap {
struct list_head list;
struct list_head crst_list;
struct mm_struct *mm;
- struct radix_tree_root guest_to_host;
- struct radix_tree_root host_to_guest;
+ struct xarray guest_to_host;
+ struct xarray host_to_guest;
spinlock_t guest_table_lock;
atomic_t ref_count;
unsigned long *table;
@@ -45,7 +45,7 @@ struct gmap {
void *private;
bool pfault_enabled;
/* Additional data for shadow guest address spaces */
- struct radix_tree_root host_to_rmap;
+ struct xarray host_to_rmap;
struct list_head children;
struct list_head pt_list;
spinlock_t shadow_lock;
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 05d459b638f5..818a5e80914d 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -60,9 +60,9 @@ static struct gmap *gmap_alloc(unsigned long limit)
INIT_LIST_HEAD(&gmap->crst_list);
INIT_LIST_HEAD(&gmap->children);
INIT_LIST_HEAD(&gmap->pt_list);
- INIT_RADIX_TREE(&gmap->guest_to_host, GFP_KERNEL);
- INIT_RADIX_TREE(&gmap->host_to_guest, GFP_ATOMIC);
- INIT_RADIX_TREE(&gmap->host_to_rmap, GFP_ATOMIC);
+ xa_init(&gmap->guest_to_host);
+ xa_init(&gmap->host_to_guest);
+ xa_init(&gmap->host_to_rmap);
spin_lock_init(&gmap->guest_table_lock);
spin_lock_init(&gmap->shadow_lock);
atomic_set(&gmap->ref_count, 1);
@@ -121,55 +121,16 @@ static void gmap_flush_tlb(struct gmap *gmap)
__tlb_flush_global();
}

-static void gmap_radix_tree_free(struct radix_tree_root *root)
-{
- struct radix_tree_iter iter;
- unsigned long indices[16];
- unsigned long index;
- void __rcu **slot;
- int i, nr;
-
- /* A radix tree is freed by deleting all of its entries */
- index = 0;
- do {
- nr = 0;
- radix_tree_for_each_slot(slot, root, &iter, index) {
- indices[nr] = iter.index;
- if (++nr == 16)
- break;
- }
- for (i = 0; i < nr; i++) {
- index = indices[i];
- radix_tree_delete(root, index);
- }
- } while (nr > 0);
-}
-
-static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
+static void gmap_rmap_free(struct xarray *xa)
{
struct gmap_rmap *rmap, *rnext, *head;
- struct radix_tree_iter iter;
- unsigned long indices[16];
- unsigned long index;
- void __rcu **slot;
- int i, nr;
-
- /* A radix tree is freed by deleting all of its entries */
- index = 0;
- do {
- nr = 0;
- radix_tree_for_each_slot(slot, root, &iter, index) {
- indices[nr] = iter.index;
- if (++nr == 16)
- break;
- }
- for (i = 0; i < nr; i++) {
- index = indices[i];
- head = radix_tree_delete(root, index);
- gmap_for_each_rmap_safe(rmap, rnext, head)
- kfree(rmap);
- }
- } while (nr > 0);
+ unsigned long index = 0;
+
+ xa_for_each(xa, head, index, ULONG_MAX, XA_PRESENT) {
+ gmap_for_each_rmap_safe(rmap, rnext, head)
+ kfree(rmap);
+ }
+ xa_destroy(xa);
}

/**
@@ -188,15 +149,15 @@ static void gmap_free(struct gmap *gmap)
/* Free all segment & region tables. */
list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
__free_pages(page, CRST_ALLOC_ORDER);
- gmap_radix_tree_free(&gmap->guest_to_host);
- gmap_radix_tree_free(&gmap->host_to_guest);
+ xa_destroy(&gmap->guest_to_host);
+ xa_destroy(&gmap->host_to_guest);

/* Free additional data for a shadow gmap */
if (gmap_is_shadow(gmap)) {
/* Free all page tables. */
list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
page_table_free_pgste(page);
- gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
+ gmap_rmap_free(&gmap->host_to_rmap);
/* Release reference to the parent */
gmap_put(gmap->parent);
}
@@ -358,7 +319,7 @@ static int __gmap_unlink_by_vmaddr(struct gmap *gmap, unsigned long vmaddr)

BUG_ON(gmap_is_shadow(gmap));
spin_lock(&gmap->guest_table_lock);
- entry = radix_tree_delete(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
+ entry = xa_erase(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
if (entry) {
flush = (*entry != _SEGMENT_ENTRY_EMPTY);
*entry = _SEGMENT_ENTRY_EMPTY;
@@ -378,7 +339,7 @@ static int __gmap_unmap_by_gaddr(struct gmap *gmap, unsigned long gaddr)
{
unsigned long vmaddr;

- vmaddr = (unsigned long) radix_tree_delete(&gmap->guest_to_host,
+ vmaddr = (unsigned long) xa_erase(&gmap->guest_to_host,
gaddr >> PMD_SHIFT);
return vmaddr ? __gmap_unlink_by_vmaddr(gmap, vmaddr) : 0;
}
@@ -441,9 +402,9 @@ int gmap_map_segment(struct gmap *gmap, unsigned long from,
/* Remove old translation */
flush |= __gmap_unmap_by_gaddr(gmap, to + off);
/* Store new translation */
- if (radix_tree_insert(&gmap->guest_to_host,
+ if (xa_is_err(xa_store(&gmap->guest_to_host,
(to + off) >> PMD_SHIFT,
- (void *) from + off))
+ (void *) from + off, GFP_KERNEL)))
break;
}
up_write(&gmap->mm->mmap_sem);
@@ -474,7 +435,7 @@ unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr)
unsigned long vmaddr;

vmaddr = (unsigned long)
- radix_tree_lookup(&gmap->guest_to_host, gaddr >> PMD_SHIFT);
+ xa_load(&gmap->guest_to_host, gaddr >> PMD_SHIFT);
/* Note: guest_to_host is empty for a shadow gmap */
return vmaddr ? (vmaddr | (gaddr & ~PMD_MASK)) : -EFAULT;
}
@@ -588,21 +549,19 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
if (pmd_large(*pmd))
return -EFAULT;
/* Link gmap segment table entry location to page table. */
- rc = radix_tree_preload(GFP_KERNEL);
+ rc = xa_reserve(&gmap->host_to_guest, vmaddr >> PMD_SHIFT, GFP_KERNEL);
if (rc)
return rc;
ptl = pmd_lock(mm, pmd);
spin_lock(&gmap->guest_table_lock);
if (*table == _SEGMENT_ENTRY_EMPTY) {
- rc = radix_tree_insert(&gmap->host_to_guest,
- vmaddr >> PMD_SHIFT, table);
+ rc = xa_err(xa_store(&gmap->host_to_guest, vmaddr >> PMD_SHIFT,
+ table, GFP_NOWAIT | __GFP_NOFAIL));
if (!rc)
*table = pmd_val(*pmd);
- } else
- rc = 0;
+ }
spin_unlock(&gmap->guest_table_lock);
spin_unlock(ptl);
- radix_tree_preload_end();
return rc;
}

@@ -660,7 +619,7 @@ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
pte_t *ptep;

/* Find the vm address for the guest address */
- vmaddr = (unsigned long) radix_tree_lookup(&gmap->guest_to_host,
+ vmaddr = (unsigned long) xa_load(&gmap->guest_to_host,
gaddr >> PMD_SHIFT);
if (vmaddr) {
vmaddr |= gaddr & ~PMD_MASK;
@@ -682,8 +641,7 @@ void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to)
for (gaddr = from; gaddr < to;
gaddr = (gaddr + PMD_SIZE) & PMD_MASK) {
/* Find the vm address for the guest address */
- vmaddr = (unsigned long)
- radix_tree_lookup(&gmap->guest_to_host,
+ vmaddr = (unsigned long) xa_load(&gmap->guest_to_host,
gaddr >> PMD_SHIFT);
if (!vmaddr)
continue;
@@ -1002,29 +960,24 @@ int gmap_read_table(struct gmap *gmap, unsigned long gaddr, unsigned long *val)
EXPORT_SYMBOL_GPL(gmap_read_table);

/**
- * gmap_insert_rmap - add a rmap to the host_to_rmap radix tree
+ * gmap_insert_rmap - add a rmap to the host_to_rmap
* @sg: pointer to the shadow guest address space structure
* @vmaddr: vm address associated with the rmap
* @rmap: pointer to the rmap structure
*
- * Called with the sg->guest_table_lock
+ * Called with the sg->guest_table_lock and page table lock held
*/
static inline void gmap_insert_rmap(struct gmap *sg, unsigned long vmaddr,
struct gmap_rmap *rmap)
{
- void __rcu **slot;
+ XA_STATE(xas, &sg->host_to_rmap, vmaddr >> PAGE_SHIFT);

BUG_ON(!gmap_is_shadow(sg));
- slot = radix_tree_lookup_slot(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT);
- if (slot) {
- rmap->next = radix_tree_deref_slot_protected(slot,
- &sg->guest_table_lock);
- radix_tree_replace_slot(&sg->host_to_rmap, slot, rmap);
- } else {
- rmap->next = NULL;
- radix_tree_insert(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT,
- rmap);
- }
+
+ xas_lock(&xas);
+ rmap->next = xas_load(&xas);
+ xas_store(&xas, rmap);
+ xas_unlock(&xas);
}

/**
@@ -1058,7 +1011,8 @@ static int gmap_protect_rmap(struct gmap *sg, unsigned long raddr,
if (!rmap)
return -ENOMEM;
rmap->raddr = raddr;
- rc = radix_tree_preload(GFP_KERNEL);
+ rc = xa_reserve(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT,
+ GFP_KERNEL);
if (rc) {
kfree(rmap);
return rc;
@@ -1074,7 +1028,7 @@ static int gmap_protect_rmap(struct gmap *sg, unsigned long raddr,
spin_unlock(&sg->guest_table_lock);
gmap_pte_op_end(ptl);
}
- radix_tree_preload_end();
+ xa_release(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT);
if (rc) {
kfree(rmap);
rc = gmap_pte_op_fixup(parent, paddr, vmaddr, prot);
@@ -1962,7 +1916,8 @@ int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte)
rc = vmaddr;
break;
}
- rc = radix_tree_preload(GFP_KERNEL);
+ rc = xa_reserve(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT,
+ GFP_KERNEL);
if (rc)
break;
rc = -EAGAIN;
@@ -1974,7 +1929,8 @@ int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte)
if (!tptep) {
spin_unlock(&sg->guest_table_lock);
gmap_pte_op_end(ptl);
- radix_tree_preload_end();
+ xa_release(&sg->host_to_rmap,
+ vmaddr >> PAGE_SHIFT);
break;
}
rc = ptep_shadow_pte(sg->mm, saddr, sptep, tptep, pte);
@@ -1983,11 +1939,13 @@ int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte)
gmap_insert_rmap(sg, vmaddr, rmap);
rmap = NULL;
rc = 0;
+ } else {
+ xa_release(&sg->host_to_rmap,
+ vmaddr >> PAGE_SHIFT);
}
gmap_pte_op_end(ptl);
spin_unlock(&sg->guest_table_lock);
}
- radix_tree_preload_end();
if (!rc)
break;
rc = gmap_pte_op_fixup(parent, paddr, vmaddr, prot);
@@ -2030,7 +1988,7 @@ static void gmap_shadow_notify(struct gmap *sg, unsigned long vmaddr,
return;
}
/* Remove the page table tree from on specific entry */
- head = radix_tree_delete(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT);
+ head = xa_erase(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT);
gmap_for_each_rmap_safe(rmap, rnext, head) {
bits = rmap->raddr & _SHADOW_RMAP_MASK;
raddr = rmap->raddr ^ bits;
@@ -2078,8 +2036,7 @@ void ptep_notify(struct mm_struct *mm, unsigned long vmaddr,
rcu_read_lock();
list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
spin_lock(&gmap->guest_table_lock);
- table = radix_tree_lookup(&gmap->host_to_guest,
- vmaddr >> PMD_SHIFT);
+ table = xa_load(&gmap->host_to_guest, vmaddr >> PMD_SHIFT);
if (table)
gaddr = __gmap_segment_gaddr(table) + offset;
spin_unlock(&gmap->guest_table_lock);
--
2.15.1


2018-01-17 20:25:15

by Matthew Wilcox

Subject: [PATCH v6 80/99] blk-ioc: Convert to XArray

From: Matthew Wilcox <[email protected]>

Skip converting the lock to use xa_lock; I think this code can live with
the double-locking.
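
What "double-locking" means here, in outline ('old' is a stand-in;
condensed from ioc_create_icq() in the diff below): the caller keeps
taking ioc->lock, and xa_store() additionally takes the XArray's own
xa_lock for the brief duration of the store:

        spin_lock(&ioc->lock);
        /* xa_store() takes and drops icq_array's xa_lock internally */
        old = xa_store(&ioc->icq_array, q->id, icq,
                        GFP_ATOMIC | __GFP_HIGH);
        spin_unlock(&ioc->lock);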

Signed-off-by: Matthew Wilcox <[email protected]>
---
block/blk-ioc.c | 13 +++++++------
include/linux/iocontext.h | 6 +++---
2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index f23311e4b201..baf83c8ac503 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -68,7 +68,7 @@ static void ioc_destroy_icq(struct io_cq *icq)

lockdep_assert_held(&ioc->lock);

- radix_tree_delete(&ioc->icq_tree, icq->q->id);
+ xa_erase(&ioc->icq_array, icq->q->id);
hlist_del_init(&icq->ioc_node);
list_del_init(&icq->q_node);

@@ -278,7 +278,7 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node)
atomic_set(&ioc->nr_tasks, 1);
atomic_set(&ioc->active_ref, 1);
spin_lock_init(&ioc->lock);
- INIT_RADIX_TREE(&ioc->icq_tree, GFP_ATOMIC | __GFP_HIGH);
+ xa_init_flags(&ioc->icq_array, XA_FLAGS_LOCK_IRQ);
INIT_HLIST_HEAD(&ioc->icq_list);
INIT_WORK(&ioc->release_work, ioc_release_fn);

@@ -363,7 +363,7 @@ struct io_cq *ioc_lookup_icq(struct io_context *ioc, struct request_queue *q)
if (icq && icq->q == q)
goto out;

- icq = radix_tree_lookup(&ioc->icq_tree, q->id);
+ icq = xa_load(&ioc->icq_array, q->id);
if (icq && icq->q == q)
rcu_assign_pointer(ioc->icq_hint, icq); /* allowed to race */
else
@@ -398,7 +398,7 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q,
if (!icq)
return NULL;

- if (radix_tree_maybe_preload(gfp_mask) < 0) {
+ if (xa_reserve(&ioc->icq_array, q->id, gfp_mask)) {
kmem_cache_free(et->icq_cache, icq);
return NULL;
}
@@ -412,7 +412,8 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q,
spin_lock_irq(q->queue_lock);
spin_lock(&ioc->lock);

- if (likely(!radix_tree_insert(&ioc->icq_tree, q->id, icq))) {
+ if (likely(!xa_store(&ioc->icq_array, q->id, icq,
+ GFP_ATOMIC | __GFP_HIGH))) {
hlist_add_head(&icq->ioc_node, &ioc->icq_list);
list_add(&icq->q_node, &q->icq_list);
if (et->uses_mq && et->ops.mq.init_icq)
@@ -421,6 +422,7 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q,
et->ops.sq.elevator_init_icq_fn(icq);
} else {
kmem_cache_free(et->icq_cache, icq);
+ xa_erase(&ioc->icq_array, q->id);
icq = ioc_lookup_icq(ioc, q);
if (!icq)
printk(KERN_ERR "cfq: icq link failed!\n");
@@ -428,7 +430,6 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q,

spin_unlock(&ioc->lock);
spin_unlock_irq(q->queue_lock);
- radix_tree_preload_end();
return icq;
}

diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index dba15ca8e60b..e16224f70084 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -2,9 +2,9 @@
#ifndef IOCONTEXT_H
#define IOCONTEXT_H

-#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
#include <linux/workqueue.h>
+#include <linux/xarray.h>

enum {
ICQ_EXITED = 1 << 2,
@@ -56,7 +56,7 @@ enum {
* - ioc->icq_list and icq->ioc_node are protected by ioc lock.
* q->icq_list and icq->q_node by q lock.
*
- * - ioc->icq_tree and ioc->icq_hint are protected by ioc lock, while icq
+ * - ioc->icq_array and ioc->icq_hint are protected by ioc lock, while icq
* itself is protected by q lock. However, both the indexes and icq
* itself are also RCU managed and lookup can be performed holding only
* the q lock.
@@ -111,7 +111,7 @@ struct io_context {
int nr_batch_requests; /* Number of requests left in the batch */
unsigned long last_waited; /* Time last woken after wait for request */

- struct radix_tree_root icq_tree;
+ struct xarray icq_array;
struct io_cq __rcu *icq_hint;
struct hlist_head icq_list;

--
2.15.1


2018-01-17 20:25:30

by Matthew Wilcox

Subject: [PATCH v6 81/99] i915: Convert handles_vma to XArray

From: Matthew Wilcox <[email protected]>

Straightforward conversion.

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 2 +-
drivers/gpu/drm/i915/i915_gem_context.c | 12 +++++-------
drivers/gpu/drm/i915/i915_gem_context.h | 4 ++--
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 +++---
drivers/gpu/drm/i915/selftests/mock_context.c | 2 +-
5 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 25ce7bcf9988..69e944f4dfce 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3351,7 +3351,7 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
if (ctx->file_priv != fpriv)
continue;

- vma = radix_tree_delete(&ctx->handles_vma, lut->handle);
+ vma = xa_erase(&ctx->handles_vma, lut->handle);
GEM_BUG_ON(vma->obj != obj);

/* We allow the process to have multiple handles to the same
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index f782cf2069c1..1aff35ba6e18 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -95,9 +95,9 @@

static void lut_close(struct i915_gem_context *ctx)
{
+ XA_STATE(xas, &ctx->handles_vma, 0);
struct i915_lut_handle *lut, *ln;
- struct radix_tree_iter iter;
- void __rcu **slot;
+ struct i915_vma *vma;

list_for_each_entry_safe(lut, ln, &ctx->handles_list, ctx_link) {
list_del(&lut->obj_link);
@@ -105,10 +105,8 @@ static void lut_close(struct i915_gem_context *ctx)
}

rcu_read_lock();
- radix_tree_for_each_slot(slot, &ctx->handles_vma, &iter, 0) {
- struct i915_vma *vma = rcu_dereference_raw(*slot);
-
- radix_tree_iter_delete(&ctx->handles_vma, &iter, slot);
+ xas_for_each(&xas, vma, ULONG_MAX) {
+ xas_store(&xas, NULL);
__i915_gem_object_release_unless_active(vma->obj);
}
rcu_read_unlock();
@@ -276,7 +274,7 @@ __create_hw_context(struct drm_i915_private *dev_priv,
ctx->i915 = dev_priv;
ctx->priority = I915_PRIORITY_NORMAL;

- INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
+ xa_init(&ctx->handles_vma);
INIT_LIST_HEAD(&ctx->handles_list);

/* Default context will never have a file_priv */
diff --git a/drivers/gpu/drm/i915/i915_gem_context.h b/drivers/gpu/drm/i915/i915_gem_context.h
index 44688e22a5c2..8e3e0d002f77 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.h
+++ b/drivers/gpu/drm/i915/i915_gem_context.h
@@ -181,11 +181,11 @@ struct i915_gem_context {
/** remap_slice: Bitmask of cache lines that need remapping */
u8 remap_slice;

- /** handles_vma: rbtree to look up our context specific obj/vma for
+ /** handles_vma: lookup our context specific obj/vma for
* the user handle. (user handles are per fd, but the binding is
* per vm, which may be one per context or shared with the global GTT)
*/
- struct radix_tree_root handles_vma;
+ struct xarray handles_vma;

/** handles_list: reverse list of all the rbtree entries in use for
* this context, which allows us to free all the allocations on
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 435ed95df144..828f4b5473ea 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -683,7 +683,7 @@ static int eb_select_context(struct i915_execbuffer *eb)

static int eb_lookup_vmas(struct i915_execbuffer *eb)
{
- struct radix_tree_root *handles_vma = &eb->ctx->handles_vma;
+ struct xarray *handles_vma = &eb->ctx->handles_vma;
struct drm_i915_gem_object *obj;
unsigned int i;
int err;
@@ -702,7 +702,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
struct i915_lut_handle *lut;
struct i915_vma *vma;

- vma = radix_tree_lookup(handles_vma, handle);
+ vma = xa_load(handles_vma, handle);
if (likely(vma))
goto add_vma;

@@ -724,7 +724,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
goto err_obj;
}

- err = radix_tree_insert(handles_vma, handle, vma);
+ err = xa_err(xa_store(handles_vma, handle, vma, GFP_KERNEL));
if (unlikely(err)) {
kfree(lut);
goto err_obj;
diff --git a/drivers/gpu/drm/i915/selftests/mock_context.c b/drivers/gpu/drm/i915/selftests/mock_context.c
index bbf80d42e793..b664a7159242 100644
--- a/drivers/gpu/drm/i915/selftests/mock_context.c
+++ b/drivers/gpu/drm/i915/selftests/mock_context.c
@@ -40,7 +40,7 @@ mock_context(struct drm_i915_private *i915,
INIT_LIST_HEAD(&ctx->link);
ctx->i915 = i915;

- INIT_RADIX_TREE(&ctx->handles_vma, GFP_KERNEL);
+ xa_init(&ctx->handles_vma);
INIT_LIST_HEAD(&ctx->handles_list);

ret = ida_simple_get(&i915->contexts.hw_ida,
--
2.15.1


2018-01-17 20:25:50

by Matthew Wilcox

Subject: [PATCH v6 97/99] xen: Convert pvcalls-back to XArray

From: Matthew Wilcox <[email protected]>

This is a straightforward conversion.

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/xen/pvcalls-back.c | 51 ++++++++++++++--------------------------------
1 file changed, 15 insertions(+), 36 deletions(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index c7822d8078b9..e059d2e777e1 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -15,10 +15,10 @@
#include <linux/inet.h>
#include <linux/kthread.h>
#include <linux/list.h>
-#include <linux/radix-tree.h>
#include <linux/module.h>
#include <linux/semaphore.h>
#include <linux/wait.h>
+#include <linux/xarray.h>
#include <net/sock.h>
#include <net/inet_common.h>
#include <net/inet_connection_sock.h>
@@ -50,7 +50,7 @@ struct pvcalls_fedata {
struct xen_pvcalls_back_ring ring;
int irq;
struct list_head socket_mappings;
- struct radix_tree_root socketpass_mappings;
+ struct xarray socketpass_mappings;
struct semaphore socket_lock;
};

@@ -492,10 +492,9 @@ static int pvcalls_back_release(struct xenbus_device *dev,
goto out;
}
}
- mappass = radix_tree_lookup(&fedata->socketpass_mappings,
- req->u.release.id);
+ mappass = xa_load(&fedata->socketpass_mappings, req->u.release.id);
if (mappass != NULL) {
- radix_tree_delete(&fedata->socketpass_mappings, mappass->id);
+ xa_erase(&fedata->socketpass_mappings, mappass->id);
up(&fedata->socket_lock);
ret = pvcalls_back_release_passive(dev, fedata, mappass);
} else
@@ -650,10 +649,8 @@ static int pvcalls_back_bind(struct xenbus_device *dev,
map->fedata = fedata;
map->id = req->u.bind.id;

- down(&fedata->socket_lock);
- ret = radix_tree_insert(&fedata->socketpass_mappings, map->id,
- map);
- up(&fedata->socket_lock);
+ ret = xa_err(xa_store(&fedata->socketpass_mappings, map->id, map,
+ GFP_KERNEL));
if (ret)
goto out;

@@ -689,9 +686,7 @@ static int pvcalls_back_listen(struct xenbus_device *dev,

fedata = dev_get_drvdata(&dev->dev);

- down(&fedata->socket_lock);
- map = radix_tree_lookup(&fedata->socketpass_mappings, req->u.listen.id);
- up(&fedata->socket_lock);
+ map = xa_load(&fedata->socketpass_mappings, req->u.listen.id);
if (map == NULL)
goto out;

@@ -717,10 +712,7 @@ static int pvcalls_back_accept(struct xenbus_device *dev,

fedata = dev_get_drvdata(&dev->dev);

- down(&fedata->socket_lock);
- mappass = radix_tree_lookup(&fedata->socketpass_mappings,
- req->u.accept.id);
- up(&fedata->socket_lock);
+ mappass = xa_load(&fedata->socketpass_mappings, req->u.accept.id);
if (mappass == NULL)
goto out_error;

@@ -765,10 +757,7 @@ static int pvcalls_back_poll(struct xenbus_device *dev,

fedata = dev_get_drvdata(&dev->dev);

- down(&fedata->socket_lock);
- mappass = radix_tree_lookup(&fedata->socketpass_mappings,
- req->u.poll.id);
- up(&fedata->socket_lock);
+ mappass = xa_load(&fedata->socketpass_mappings, req->u.poll.id);
if (mappass == NULL)
return -EINVAL;

@@ -960,7 +949,7 @@ static int backend_connect(struct xenbus_device *dev)
fedata->dev = dev;

INIT_LIST_HEAD(&fedata->socket_mappings);
- INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
+ xa_init(&fedata->socketpass_mappings);
sema_init(&fedata->socket_lock, 1);
dev_set_drvdata(&dev->dev, fedata);

@@ -984,9 +973,7 @@ static int backend_disconnect(struct xenbus_device *dev)
struct pvcalls_fedata *fedata;
struct sock_mapping *map, *n;
struct sockpass_mapping *mappass;
- struct radix_tree_iter iter;
- void **slot;
-
+ unsigned long index = 0;

fedata = dev_get_drvdata(&dev->dev);

@@ -996,18 +983,10 @@ static int backend_disconnect(struct xenbus_device *dev)
pvcalls_back_release_active(dev, fedata, map);
}

- radix_tree_for_each_slot(slot, &fedata->socketpass_mappings, &iter, 0) {
- mappass = radix_tree_deref_slot(slot);
- if (!mappass)
- continue;
- if (radix_tree_exception(mappass)) {
- if (radix_tree_deref_retry(mappass))
- slot = radix_tree_iter_retry(&iter);
- } else {
- radix_tree_delete(&fedata->socketpass_mappings,
- mappass->id);
- pvcalls_back_release_passive(dev, fedata, mappass);
- }
+ xa_for_each(&fedata->socketpass_mappings, mappass, index, ULONG_MAX,
+ XA_PRESENT) {
+ xa_erase(&fedata->socketpass_mappings, index);
+ pvcalls_back_release_passive(dev, fedata, mappass);
}
up(&fedata->socket_lock);

--
2.15.1


2018-01-17 20:25:56

by Matthew Wilcox

Subject: [PATCH v6 95/99] f2fs: Convert gclist.iroot to XArray

From: Matthew Wilcox <[email protected]>

Straightforward conversion.

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/gc.c | 14 +++++++-------
fs/f2fs/gc.h | 2 +-
2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index aac1e02f75df..2b33068dc36b 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -417,7 +417,7 @@ static struct inode *find_gc_inode(struct gc_inode_list *gc_list, nid_t ino)
{
struct inode_entry *ie;

- ie = radix_tree_lookup(&gc_list->iroot, ino);
+ ie = xa_load(&gc_list->iroot, ino);
if (ie)
return ie->inode;
return NULL;
@@ -434,7 +434,7 @@ static void add_gc_inode(struct gc_inode_list *gc_list, struct inode *inode)
new_ie = f2fs_kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
new_ie->inode = inode;

- f2fs_radix_tree_insert(&gc_list->iroot, inode->i_ino, new_ie);
+ xa_store(&gc_list->iroot, inode->i_ino, new_ie, GFP_NOFS);
list_add_tail(&new_ie->list, &gc_list->ilist);
}

@@ -442,7 +442,7 @@ static void put_gc_inode(struct gc_inode_list *gc_list)
{
struct inode_entry *ie, *next_ie;
list_for_each_entry_safe(ie, next_ie, &gc_list->ilist, list) {
- radix_tree_delete(&gc_list->iroot, ie->inode->i_ino);
+ xa_erase(&gc_list->iroot, ie->inode->i_ino);
iput(ie->inode);
list_del(&ie->list);
kmem_cache_free(inode_entry_slab, ie);
@@ -989,10 +989,10 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
int ret = 0;
struct cp_control cpc;
unsigned int init_segno = segno;
- struct gc_inode_list gc_list = {
- .ilist = LIST_HEAD_INIT(gc_list.ilist),
- .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
- };
+ struct gc_inode_list gc_list;
+
+ xa_init(&gc_list.iroot);
+ INIT_LIST_HEAD(&gc_list.ilist);

trace_f2fs_gc_begin(sbi->sb, sync, background,
get_pages(sbi, F2FS_DIRTY_NODES),
diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
index 9325191fab2d..769259b0a4f6 100644
--- a/fs/f2fs/gc.h
+++ b/fs/f2fs/gc.h
@@ -41,7 +41,7 @@ struct f2fs_gc_kthread {

struct gc_inode_list {
struct list_head ilist;
- struct radix_tree_root iroot;
+ struct xarray iroot;
};

/*
--
2.15.1


2018-01-17 20:27:14

by Matthew Wilcox

Subject: [PATCH v6 98/99] qrtr: Convert to XArray

From: Matthew Wilcox <[email protected]>

Moved the kref protection under the xa_lock too.
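
Concretely (taken from the diff below), the final put now takes the
XArray's spinlock rather than the node mutex, so the erase happens
atomically with the refcount reaching zero:

        /* Takes qrtr_nodes.xa_lock only if the count drops to zero;
         * __qrtr_node_release() then calls __xa_erase() and unlocks. */
        kref_put_lock(&node->ref, __qrtr_node_release, &qrtr_nodes.xa_lock);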

Signed-off-by: Matthew Wilcox <[email protected]>
---
net/qrtr/qrtr.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
index 77ab05e23001..7de9a06d2aa2 100644
--- a/net/qrtr/qrtr.c
+++ b/net/qrtr/qrtr.c
@@ -104,10 +104,10 @@ static inline struct qrtr_sock *qrtr_sk(struct sock *sk)
static unsigned int qrtr_local_nid = -1;

/* for node ids */
-static RADIX_TREE(qrtr_nodes, GFP_KERNEL);
+static DEFINE_XARRAY(qrtr_nodes);
/* broadcast list */
static LIST_HEAD(qrtr_all_nodes);
-/* lock for qrtr_nodes, qrtr_all_nodes and node reference */
+/* lock for qrtr_all_nodes */
static DEFINE_MUTEX(qrtr_node_lock);

/* local port allocation management */
@@ -148,12 +148,15 @@ static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
* kref_put_mutex. As such, the node mutex is expected to be locked on call.
*/
static void __qrtr_node_release(struct kref *kref)
+ __releases(qrtr_nodes.xa_lock)
{
struct qrtr_node *node = container_of(kref, struct qrtr_node, ref);

if (node->nid != QRTR_EP_NID_AUTO)
- radix_tree_delete(&qrtr_nodes, node->nid);
+ __xa_erase(&qrtr_nodes, node->nid);
+ xa_unlock(&qrtr_nodes);

+ mutex_lock(&qrtr_node_lock);
list_del(&node->item);
mutex_unlock(&qrtr_node_lock);

@@ -174,7 +177,7 @@ static void qrtr_node_release(struct qrtr_node *node)
{
if (!node)
return;
- kref_put_mutex(&node->ref, __qrtr_node_release, &qrtr_node_lock);
+ kref_put_lock(&node->ref, __qrtr_node_release, &qrtr_nodes.xa_lock);
}

/* Pass an outgoing packet socket buffer to the endpoint driver. */
@@ -217,10 +220,10 @@ static struct qrtr_node *qrtr_node_lookup(unsigned int nid)
{
struct qrtr_node *node;

- mutex_lock(&qrtr_node_lock);
- node = radix_tree_lookup(&qrtr_nodes, nid);
+ xa_lock(&qrtr_nodes);
+ node = xa_load(&qrtr_nodes, nid);
node = qrtr_node_acquire(node);
- mutex_unlock(&qrtr_node_lock);
+ xa_unlock(&qrtr_nodes);

return node;
}
@@ -235,10 +238,8 @@ static void qrtr_node_assign(struct qrtr_node *node, unsigned int nid)
if (node->nid != QRTR_EP_NID_AUTO || nid == QRTR_EP_NID_AUTO)
return;

- mutex_lock(&qrtr_node_lock);
- radix_tree_insert(&qrtr_nodes, nid, node);
node->nid = nid;
- mutex_unlock(&qrtr_node_lock);
+ xa_store(&qrtr_nodes, nid, node, GFP_KERNEL);
}

/**
--
2.15.1


2018-01-17 20:27:25

by Matthew Wilcox

Subject: [PATCH v6 96/99] dma-debug: Convert to XArray

From: Matthew Wilcox <[email protected]>

This is an unusual way to use the xarray tags. If any other users
come up, we can add an xas_get_tags() / xas_set_tags() API, but until
then I don't want to encourage this kind of abuse.
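
For the record, such an API might look something like this (purely
hypothetical; neither function exists in this series):

        /* Hypothetical: read or write all of an entry's tags at once */
        unsigned int xas_get_tags(struct xa_state *xas);
        void xas_set_tags(struct xa_state *xas, unsigned int tags);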

Signed-off-by: Matthew Wilcox <[email protected]>
---
lib/dma-debug.c | 105 +++++++++++++++++++++++++-------------------------------
1 file changed, 46 insertions(+), 59 deletions(-)

diff --git a/lib/dma-debug.c b/lib/dma-debug.c
index fb4af570ce04..965b3837d060 100644
--- a/lib/dma-debug.c
+++ b/lib/dma-debug.c
@@ -22,7 +22,6 @@
#include <linux/dma-mapping.h>
#include <linux/sched/task.h>
#include <linux/stacktrace.h>
-#include <linux/radix-tree.h>
#include <linux/dma-debug.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
@@ -30,6 +29,7 @@
#include <linux/uaccess.h>
#include <linux/export.h>
#include <linux/device.h>
+#include <linux/xarray.h>
#include <linux/types.h>
#include <linux/sched.h>
#include <linux/ctype.h>
@@ -465,9 +465,8 @@ EXPORT_SYMBOL(debug_dma_dump_mappings);
* At any time debug_dma_assert_idle() can be called to trigger a
* warning if any cachelines in the given page are in the active set.
*/
-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
-static DEFINE_SPINLOCK(radix_lock);
-#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+static DEFINE_XARRAY_FLAGS(dma_active_cacheline, XA_FLAGS_LOCK_IRQ);
+#define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << XA_MAX_TAGS) - 1)
#define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
#define CACHELINES_PER_PAGE (1 << CACHELINE_PER_PAGE_SHIFT)

@@ -477,37 +476,40 @@ static phys_addr_t to_cacheline_number(struct dma_debug_entry *entry)
(entry->offset >> L1_CACHE_SHIFT);
}

-static int active_cacheline_read_overlap(phys_addr_t cln)
+static unsigned int active_cacheline_read_overlap(struct xa_state *xas)
{
- int overlap = 0, i;
+ unsigned int tags = 0;
+ xa_tag_t tag;

- for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
- if (radix_tree_tag_get(&dma_active_cacheline, cln, i))
- overlap |= 1 << i;
- return overlap;
+ for (tag = 0; tag < XA_MAX_TAGS; tag++)
+ if (xas_get_tag(xas, tag))
+ tags |= 1U << tag;
+
+ return tags;
}

-static int active_cacheline_set_overlap(phys_addr_t cln, int overlap)
+static int active_cacheline_set_overlap(struct xa_state *xas, int overlap)
{
- int i;
+ xa_tag_t tag;

if (overlap > ACTIVE_CACHELINE_MAX_OVERLAP || overlap < 0)
return overlap;

- for (i = RADIX_TREE_MAX_TAGS - 1; i >= 0; i--)
- if (overlap & 1 << i)
- radix_tree_tag_set(&dma_active_cacheline, cln, i);
+ for (tag = 0; tag < XA_MAX_TAGS; tag++) {
+ if (overlap & (1U << tag))
+ xas_set_tag(xas, tag);
else
- radix_tree_tag_clear(&dma_active_cacheline, cln, i);
+ xas_clear_tag(xas, tag);
+ }

return overlap;
}

-static void active_cacheline_inc_overlap(phys_addr_t cln)
+static void active_cacheline_inc_overlap(struct xa_state *xas)
{
- int overlap = active_cacheline_read_overlap(cln);
+ int overlap = active_cacheline_read_overlap(xas);

- overlap = active_cacheline_set_overlap(cln, ++overlap);
+ overlap = active_cacheline_set_overlap(xas, ++overlap);

/* If we overflowed the overlap counter then we're potentially
* leaking dma-mappings. Otherwise, if maps and unmaps are
@@ -517,21 +519,22 @@ static void active_cacheline_inc_overlap(phys_addr_t cln)
*/
WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
"DMA-API: exceeded %d overlapping mappings of cacheline %pa\n",
- ACTIVE_CACHELINE_MAX_OVERLAP, &cln);
+ ACTIVE_CACHELINE_MAX_OVERLAP, &xas->xa_index);
}

-static int active_cacheline_dec_overlap(phys_addr_t cln)
+static int active_cacheline_dec_overlap(struct xa_state *xas)
{
- int overlap = active_cacheline_read_overlap(cln);
+ int overlap = active_cacheline_read_overlap(xas);

- return active_cacheline_set_overlap(cln, --overlap);
+ return active_cacheline_set_overlap(xas, --overlap);
}

static int active_cacheline_insert(struct dma_debug_entry *entry)
{
phys_addr_t cln = to_cacheline_number(entry);
+ XA_STATE(xas, &dma_active_cacheline, cln);
unsigned long flags;
- int rc;
+ struct dma_debug_entry *exists;

/* If the device is not writing memory then we don't have any
* concerns about the cpu consuming stale data. This mitigates
@@ -540,32 +543,32 @@ static int active_cacheline_insert(struct dma_debug_entry *entry)
if (entry->direction == DMA_TO_DEVICE)
return 0;

- spin_lock_irqsave(&radix_lock, flags);
- rc = radix_tree_insert(&dma_active_cacheline, cln, entry);
- if (rc == -EEXIST)
- active_cacheline_inc_overlap(cln);
- spin_unlock_irqrestore(&radix_lock, flags);
+ xas_lock_irqsave(&xas, flags);
+ exists = xas_create(&xas);
+ if (exists)
+ active_cacheline_inc_overlap(&xas);
+ else
+ xas_store(&xas, entry);
+ xas_unlock_irqrestore(&xas, flags);

- return rc;
+ return xas_error(&xas);
}

static void active_cacheline_remove(struct dma_debug_entry *entry)
{
phys_addr_t cln = to_cacheline_number(entry);
+ XA_STATE(xas, &dma_active_cacheline, cln);
unsigned long flags;

/* ...mirror the insert case */
if (entry->direction == DMA_TO_DEVICE)
return;

- spin_lock_irqsave(&radix_lock, flags);
- /* since we are counting overlaps the final put of the
- * cacheline will occur when the overlap count is 0.
- * active_cacheline_dec_overlap() returns -1 in that case
- */
- if (active_cacheline_dec_overlap(cln) < 0)
- radix_tree_delete(&dma_active_cacheline, cln);
- spin_unlock_irqrestore(&radix_lock, flags);
+ xas_lock_irqsave(&xas, flags);
+ xas_load(&xas);
+ if (active_cacheline_dec_overlap(&xas) < 0)
+ xas_store(&xas, NULL);
+ xas_unlock_irqrestore(&xas, flags);
}

/**
@@ -578,12 +581,8 @@ static void active_cacheline_remove(struct dma_debug_entry *entry)
*/
void debug_dma_assert_idle(struct page *page)
{
- static struct dma_debug_entry *ents[CACHELINES_PER_PAGE];
- struct dma_debug_entry *entry = NULL;
- void **results = (void **) &ents;
- unsigned int nents, i;
- unsigned long flags;
- phys_addr_t cln;
+ struct dma_debug_entry *entry;
+ unsigned long cln;

if (dma_debug_disabled())
return;
@@ -591,21 +590,9 @@ void debug_dma_assert_idle(struct page *page)
if (!page)
return;

- cln = (phys_addr_t) page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
- spin_lock_irqsave(&radix_lock, flags);
- nents = radix_tree_gang_lookup(&dma_active_cacheline, results, cln,
- CACHELINES_PER_PAGE);
- for (i = 0; i < nents; i++) {
- phys_addr_t ent_cln = to_cacheline_number(ents[i]);
-
- if (ent_cln == cln) {
- entry = ents[i];
- break;
- } else if (ent_cln >= cln + CACHELINES_PER_PAGE)
- break;
- }
- spin_unlock_irqrestore(&radix_lock, flags);
-
+ cln = page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
+ entry = xa_find(&dma_active_cacheline, &cln,
+ cln + CACHELINES_PER_PAGE - 1, XA_PRESENT);
if (!entry)
return;

--
2.15.1


2018-01-17 20:28:00

by Matthew Wilcox

Subject: [PATCH v6 99/99] null_blk: Convert to XArray

From: Matthew Wilcox <[email protected]>

We can probably avoid the call to xa_reserve() by changing the locking,
but I didn't feel confident enough to do that.
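
For context, the conversion below leans on xa_cmpxchg(), which only
stores the new entry if the slot currently holds the expected old
entry, and returns whatever was there ('exist' and 'ret' match the
names used in the diff):

        /* Insert t_page only if the slot was empty; a non-NULL return
         * means somebody else's page was already there. */
        exist = xa_cmpxchg(xa, idx, NULL, t_page, GFP_ATOMIC);

        /* Remove t_page only if it is still what the slot holds */
        ret = xa_cmpxchg(xa, idx, t_page, NULL, 0);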

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/block/null_blk.c | 87 +++++++++++++++++++++---------------------------
1 file changed, 38 insertions(+), 49 deletions(-)

diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index ad0477ae820f..d90d173b8885 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -15,6 +15,7 @@
#include <linux/lightnvm.h>
#include <linux/configfs.h>
#include <linux/badblocks.h>
+#include <linux/xarray.h>

#define SECTOR_SHIFT 9
#define PAGE_SECTORS_SHIFT (PAGE_SHIFT - SECTOR_SHIFT)
@@ -90,8 +91,8 @@ struct nullb_page {
struct nullb_device {
struct nullb *nullb;
struct config_item item;
- struct radix_tree_root data; /* data stored in the disk */
- struct radix_tree_root cache; /* disk cache data */
+ struct xarray data; /* data stored in the disk */
+ struct xarray cache; /* disk cache data */
unsigned long flags; /* device flags */
unsigned int curr_cache;
struct badblocks badblocks;
@@ -558,8 +559,8 @@ static struct nullb_device *null_alloc_dev(void)
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
return NULL;
- INIT_RADIX_TREE(&dev->data, GFP_ATOMIC);
- INIT_RADIX_TREE(&dev->cache, GFP_ATOMIC);
+ xa_init_flags(&dev->data, XA_FLAGS_LOCK_IRQ);
+ xa_init_flags(&dev->cache, XA_FLAGS_LOCK_IRQ);
if (badblocks_init(&dev->badblocks, 0)) {
kfree(dev);
return NULL;
@@ -752,18 +753,18 @@ static void null_free_sector(struct nullb *nullb, sector_t sector,
unsigned int sector_bit;
u64 idx;
struct nullb_page *t_page, *ret;
- struct radix_tree_root *root;
+ struct xarray *xa;

- root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+ xa = is_cache ? &nullb->dev->cache : &nullb->dev->data;
idx = sector >> PAGE_SECTORS_SHIFT;
sector_bit = (sector & SECTOR_MASK);

- t_page = radix_tree_lookup(root, idx);
+ t_page = xa_load(xa, idx);
if (t_page) {
__clear_bit(sector_bit, &t_page->bitmap);

if (!t_page->bitmap) {
- ret = radix_tree_delete_item(root, idx, t_page);
+ ret = xa_cmpxchg(xa, idx, t_page, NULL, 0);
WARN_ON(ret != t_page);
null_free_page(ret);
if (is_cache)
@@ -772,47 +773,34 @@ static void null_free_sector(struct nullb *nullb, sector_t sector,
}
}

-static struct nullb_page *null_radix_tree_insert(struct nullb *nullb, u64 idx,
+static struct nullb_page *null_xa_insert(struct nullb *nullb, u64 idx,
struct nullb_page *t_page, bool is_cache)
{
- struct radix_tree_root *root;
+ struct xarray *xa = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+ struct nullb_page *exist;

- root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
-
- if (radix_tree_insert(root, idx, t_page)) {
+ exist = xa_cmpxchg(xa, idx, NULL, t_page, GFP_ATOMIC);
+ if (exist) {
null_free_page(t_page);
- t_page = radix_tree_lookup(root, idx);
- WARN_ON(!t_page || t_page->page->index != idx);
+ t_page = exist;
} else if (is_cache)
nullb->dev->curr_cache += PAGE_SIZE;

+ WARN_ON(t_page->page->index != idx);
return t_page;
}

static void null_free_device_storage(struct nullb_device *dev, bool is_cache)
{
- unsigned long pos = 0;
- int nr_pages;
- struct nullb_page *ret, *t_pages[FREE_BATCH];
- struct radix_tree_root *root;
-
- root = is_cache ? &dev->cache : &dev->data;
-
- do {
- int i;
-
- nr_pages = radix_tree_gang_lookup(root,
- (void **)t_pages, pos, FREE_BATCH);
-
- for (i = 0; i < nr_pages; i++) {
- pos = t_pages[i]->page->index;
- ret = radix_tree_delete_item(root, pos, t_pages[i]);
- WARN_ON(ret != t_pages[i]);
- null_free_page(ret);
- }
+ struct nullb_page *t_page;
+ XA_STATE(xas, is_cache ? &dev->cache : &dev->data, 0);

- pos++;
- } while (nr_pages == FREE_BATCH);
+ xas_lock(&xas);
+ xas_for_each(&xas, t_page, ULONG_MAX) {
+ xas_store(&xas, NULL);
+ null_free_page(t_page);
+ }
+ xas_unlock(&xas);

if (is_cache)
dev->curr_cache = 0;
@@ -824,13 +812,13 @@ static struct nullb_page *__null_lookup_page(struct nullb *nullb,
unsigned int sector_bit;
u64 idx;
struct nullb_page *t_page;
- struct radix_tree_root *root;
+ struct xarray *xa;

idx = sector >> PAGE_SECTORS_SHIFT;
sector_bit = (sector & SECTOR_MASK);

- root = is_cache ? &nullb->dev->cache : &nullb->dev->data;
- t_page = radix_tree_lookup(root, idx);
+ xa = is_cache ? &nullb->dev->cache : &nullb->dev->data;
+ t_page = xa_load(xa, idx);
WARN_ON(t_page && t_page->page->index != idx);

if (t_page && (for_write || test_bit(sector_bit, &t_page->bitmap)))
@@ -854,6 +842,7 @@ static struct nullb_page *null_lookup_page(struct nullb *nullb,
static struct nullb_page *null_insert_page(struct nullb *nullb,
sector_t sector, bool ignore_cache)
{
+ struct xarray *xa;
u64 idx;
struct nullb_page *t_page;

@@ -867,14 +856,14 @@ static struct nullb_page *null_insert_page(struct nullb *nullb,
if (!t_page)
goto out_lock;

- if (radix_tree_preload(GFP_NOIO))
+ idx = sector >> PAGE_SECTORS_SHIFT;
+ xa = ignore_cache ? &nullb->dev->data : &nullb->dev->cache;
+ if (xa_reserve(xa, idx, GFP_NOIO))
goto out_freepage;

spin_lock_irq(&nullb->lock);
- idx = sector >> PAGE_SECTORS_SHIFT;
t_page->page->index = idx;
- t_page = null_radix_tree_insert(nullb, idx, t_page, !ignore_cache);
- radix_tree_preload_end();
+ t_page = null_xa_insert(nullb, idx, t_page, !ignore_cache);

return t_page;
out_freepage:
@@ -900,8 +889,7 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
if (test_bit(NULLB_PAGE_FREE, &c_page->bitmap)) {
null_free_page(c_page);
if (t_page && t_page->bitmap == 0) {
- ret = radix_tree_delete_item(&nullb->dev->data,
- idx, t_page);
+ xa_cmpxchg(&nullb->dev->data, idx, t_page, NULL, 0);
null_free_page(t_page);
}
return 0;
@@ -926,7 +914,7 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
kunmap_atomic(dst);
kunmap_atomic(src);

- ret = radix_tree_delete_item(&nullb->dev->cache, idx, c_page);
+ ret = xa_cmpxchg(&nullb->dev->cache, idx, c_page, NULL, 0);
null_free_page(ret);
nullb->dev->curr_cache -= PAGE_SIZE;

@@ -944,8 +932,9 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
nullb->dev->curr_cache + n || nullb->dev->curr_cache == 0)
return 0;

- nr_pages = radix_tree_gang_lookup(&nullb->dev->cache,
- (void **)c_pages, nullb->cache_flush_pos, FREE_BATCH);
+ nr_pages = xa_extract(&nullb->dev->cache, (void **)c_pages,
+ nullb->cache_flush_pos, ULONG_MAX,
+ FREE_BATCH, XA_PRESENT);
/*
* nullb_flush_cache_page could unlock before using the c_pages. To
* avoid race, we don't allow page free
@@ -1086,7 +1075,7 @@ static int null_handle_flush(struct nullb *nullb)
break;
}

- WARN_ON(!radix_tree_empty(&nullb->dev->cache));
+ WARN_ON(!xa_empty(&nullb->dev->cache));
spin_unlock_irq(&nullb->lock);
return err;
}
--
2.15.1


2018-01-17 20:28:26

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 94/99] f2fs: Convert extent_tree_root to XArray

From: Matthew Wilcox <[email protected]>

Rename it to extent_array and use the xa_lock in place of the
extent_tree_lock mutex.
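
The only non-mechanical part is the shrinker: mutex_trylock() and
cond_resched() have spinlock equivalents, which the conversion uses
below. In sketch form:

	if (!xa_trylock(&sbi->extent_array))
		goto out;
	/* ... free a batch of entries under the lock ... */
	/* we can't cond_resched() under a spinlock; this drops and
	 * retakes it if a reschedule is due */
	cond_resched_lock(&sbi->extent_array.xa_lock);
	xa_unlock(&sbi->extent_array);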

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/extent_cache.c | 59 +++++++++++++++++++++++++-------------------------
fs/f2fs/f2fs.h | 3 +--
2 files changed, 30 insertions(+), 32 deletions(-)

diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
index ff2352a0ed15..da5f3bd1808d 100644
--- a/fs/f2fs/extent_cache.c
+++ b/fs/f2fs/extent_cache.c
@@ -250,25 +250,25 @@ static struct extent_tree *__grab_extent_tree(struct inode *inode)
struct extent_tree *et;
nid_t ino = inode->i_ino;

- mutex_lock(&sbi->extent_tree_lock);
- et = radix_tree_lookup(&sbi->extent_tree_root, ino);
- if (!et) {
- et = f2fs_kmem_cache_alloc(extent_tree_slab, GFP_NOFS);
- f2fs_radix_tree_insert(&sbi->extent_tree_root, ino, et);
- memset(et, 0, sizeof(struct extent_tree));
- et->ino = ino;
- et->root = RB_ROOT;
- et->cached_en = NULL;
- rwlock_init(&et->lock);
- INIT_LIST_HEAD(&et->list);
- atomic_set(&et->node_cnt, 0);
- atomic_inc(&sbi->total_ext_tree);
- } else {
+ et = xa_load(&sbi->extent_array, ino);
+ if (et) {
atomic_dec(&sbi->total_zombie_tree);
list_del_init(&et->list);
+ goto out;
}
- mutex_unlock(&sbi->extent_tree_lock);

+ et = f2fs_kmem_cache_alloc(extent_tree_slab, GFP_NOFS | __GFP_ZERO);
+ et->ino = ino;
+ et->root = RB_ROOT;
+ et->cached_en = NULL;
+ rwlock_init(&et->lock);
+ INIT_LIST_HEAD(&et->list);
+ atomic_set(&et->node_cnt, 0);
+
+ xa_store(&sbi->extent_array, ino, et, GFP_NOFS);
+ atomic_inc(&sbi->total_ext_tree);
+
+out:
/* never died until evict_inode */
F2FS_I(inode)->extent_tree = et;

@@ -622,7 +622,7 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
if (!atomic_read(&sbi->total_zombie_tree))
goto free_node;

- if (!mutex_trylock(&sbi->extent_tree_lock))
+ if (!xa_trylock(&sbi->extent_array))
goto out;

/* 1. remove unreferenced extent tree */
@@ -634,7 +634,7 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
}
f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
list_del_init(&et->list);
- radix_tree_delete(&sbi->extent_tree_root, et->ino);
+ xa_erase(&sbi->extent_array, et->ino);
kmem_cache_free(extent_tree_slab, et);
atomic_dec(&sbi->total_ext_tree);
atomic_dec(&sbi->total_zombie_tree);
@@ -642,13 +642,13 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)

if (node_cnt + tree_cnt >= nr_shrink)
goto unlock_out;
- cond_resched();
+ cond_resched_lock(&sbi->extent_array.xa_lock);
}
- mutex_unlock(&sbi->extent_tree_lock);
+ xa_unlock(&sbi->extent_array);

free_node:
/* 2. remove LRU extent entries */
- if (!mutex_trylock(&sbi->extent_tree_lock))
+ if (!xa_trylock(&sbi->extent_array))
goto out;

remained = nr_shrink - (node_cnt + tree_cnt);
@@ -678,7 +678,7 @@ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
spin_unlock(&sbi->extent_lock);

unlock_out:
- mutex_unlock(&sbi->extent_tree_lock);
+ xa_unlock(&sbi->extent_array);
out:
trace_f2fs_shrink_extent_tree(sbi, node_cnt, tree_cnt);

@@ -725,23 +725,23 @@ void f2fs_destroy_extent_tree(struct inode *inode)

if (inode->i_nlink && !is_bad_inode(inode) &&
atomic_read(&et->node_cnt)) {
- mutex_lock(&sbi->extent_tree_lock);
+ xa_lock(&sbi->extent_array);
list_add_tail(&et->list, &sbi->zombie_list);
atomic_inc(&sbi->total_zombie_tree);
- mutex_unlock(&sbi->extent_tree_lock);
+ xa_unlock(&sbi->extent_array);
return;
}

/* free all extent info belong to this extent tree */
node_cnt = f2fs_destroy_extent_node(inode);

- /* delete extent tree entry in radix tree */
- mutex_lock(&sbi->extent_tree_lock);
+ /* delete extent from array */
+ xa_lock(&sbi->extent_array);
f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
- radix_tree_delete(&sbi->extent_tree_root, inode->i_ino);
- kmem_cache_free(extent_tree_slab, et);
+ __xa_erase(&sbi->extent_array, inode->i_ino);
atomic_dec(&sbi->total_ext_tree);
- mutex_unlock(&sbi->extent_tree_lock);
+ xa_unlock(&sbi->extent_array);
+ kmem_cache_free(extent_tree_slab, et);

F2FS_I(inode)->extent_tree = NULL;

@@ -787,8 +787,7 @@ void f2fs_update_extent_cache_range(struct dnode_of_data *dn,

void init_extent_cache_info(struct f2fs_sb_info *sbi)
{
- INIT_RADIX_TREE(&sbi->extent_tree_root, GFP_NOIO);
- mutex_init(&sbi->extent_tree_lock);
+ xa_init(&sbi->extent_array);
INIT_LIST_HEAD(&sbi->extent_list);
spin_lock_init(&sbi->extent_lock);
atomic_set(&sbi->total_ext_tree, 0);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index b3ee784b49bc..4eacef9c7274 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1064,8 +1064,7 @@ struct f2fs_sb_info {
spinlock_t inode_lock[NR_INODE_TYPE]; /* for dirty inode list lock */

/* for extent tree cache */
- struct radix_tree_root extent_tree_root;/* cache extent cache entries */
- struct mutex extent_tree_lock; /* locking extent radix tree */
+ struct xarray extent_array; /* cache extent cache entries */
struct list_head extent_list; /* lru list for shrinker */
spinlock_t extent_lock; /* locking extent lru list */
atomic_t total_ext_tree; /* extent tree count */
--
2.15.1


2018-01-17 20:29:03

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 93/99] f2fs: Convert ino_root to XArray

From: Matthew Wilcox <[email protected]>

I did a fairly major rewrite of __add_ino_entry(); please check carefully.
Also, we can remove ino_list unless it's important to write out orphan
inodes in the order they were orphaned. It may also make more sense to
combine the array of inode_management structures into a single XArray
with tags, but that would be a job for someone who understands this
filesystem better than I do.
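
If someone does take up the single-XArray-with-tags suggestion, the
filter argument to xa_for_each() should make the per-type walks cheap.
Something like this (purely hypothetical and untested):

	/* one entry per inode, tagged with its type */
	xa_store(&sbi->ino_array, ino, entry, GFP_NOFS);
	xa_set_tag(&sbi->ino_array, ino, XA_TAG_0);	/* eg APPEND_INO */

	/* walk only the entries carrying that tag */
	xa_for_each(&sbi->ino_array, entry, index, ULONG_MAX, XA_TAG_0) {
		/* only XA_TAG_0 entries are visited */
	}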

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/checkpoint.c | 85 +++++++++++++++++++++++-----------------------------
fs/f2fs/f2fs.h | 3 +-
2 files changed, 38 insertions(+), 50 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 4aa69bc1c70a..04d69679da13 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -403,33 +403,30 @@ static void __add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino,
struct inode_management *im = &sbi->im[type];
struct ino_entry *e, *tmp;

- tmp = f2fs_kmem_cache_alloc(ino_entry_slab, GFP_NOFS);
-
- radix_tree_preload(GFP_NOFS | __GFP_NOFAIL);
-
- spin_lock(&im->ino_lock);
- e = radix_tree_lookup(&im->ino_root, ino);
- if (!e) {
- e = tmp;
- if (unlikely(radix_tree_insert(&im->ino_root, ino, e)))
- f2fs_bug_on(sbi, 1);
-
- memset(e, 0, sizeof(struct ino_entry));
- e->ino = ino;
-
- list_add_tail(&e->list, &im->ino_list);
- if (type != ORPHAN_INO)
- im->ino_num++;
+ xa_lock(&im->ino_root);
+ e = xa_load(&im->ino_root, ino);
+ if (e)
+ goto found;
+ xa_unlock(&im->ino_root);
+
+ tmp = f2fs_kmem_cache_alloc(ino_entry_slab, GFP_NOFS | __GFP_ZERO);
+ xa_lock(&im->ino_root);
+ e = __xa_cmpxchg(&im->ino_root, ino, NULL, tmp,
+ GFP_NOFS | __GFP_NOFAIL);
+ if (e) {
+ kmem_cache_free(ino_entry_slab, tmp);
+ goto found;
}
+ e = tmp;

+ e->ino = ino;
+ list_add_tail(&e->list, &im->ino_list);
+ if (type != ORPHAN_INO)
+ im->ino_num++;
+found:
if (type == FLUSH_INO)
f2fs_set_bit(devidx, (char *)&e->dirty_device);
-
- spin_unlock(&im->ino_lock);
- radix_tree_preload_end();
-
- if (e != tmp)
- kmem_cache_free(ino_entry_slab, tmp);
+ xa_unlock(&im->ino_root);
}

static void __remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
@@ -437,17 +434,14 @@ static void __remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
struct inode_management *im = &sbi->im[type];
struct ino_entry *e;

- spin_lock(&im->ino_lock);
- e = radix_tree_lookup(&im->ino_root, ino);
+ xa_lock(&im->ino_root);
+ e = __xa_erase(&im->ino_root, ino);
if (e) {
list_del(&e->list);
- radix_tree_delete(&im->ino_root, ino);
im->ino_num--;
- spin_unlock(&im->ino_lock);
kmem_cache_free(ino_entry_slab, e);
- return;
}
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);
}

void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
@@ -466,12 +460,8 @@ void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
{
struct inode_management *im = &sbi->im[mode];
- struct ino_entry *e;

- spin_lock(&im->ino_lock);
- e = radix_tree_lookup(&im->ino_root, ino);
- spin_unlock(&im->ino_lock);
- return e ? true : false;
+ return xa_load(&im->ino_root, ino) ? true : false;
}

void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
@@ -482,14 +472,14 @@ void release_ino_entry(struct f2fs_sb_info *sbi, bool all)
for (i = all ? ORPHAN_INO : APPEND_INO; i < MAX_INO_ENTRY; i++) {
struct inode_management *im = &sbi->im[i];

- spin_lock(&im->ino_lock);
+ xa_lock(&im->ino_root);
list_for_each_entry_safe(e, tmp, &im->ino_list, list) {
list_del(&e->list);
- radix_tree_delete(&im->ino_root, e->ino);
+ __xa_erase(&im->ino_root, e->ino);
kmem_cache_free(ino_entry_slab, e);
im->ino_num--;
}
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);
}
}

@@ -506,11 +496,11 @@ bool is_dirty_device(struct f2fs_sb_info *sbi, nid_t ino,
struct ino_entry *e;
bool is_dirty = false;

- spin_lock(&im->ino_lock);
- e = radix_tree_lookup(&im->ino_root, ino);
+ xa_lock(&im->ino_root);
+ e = xa_load(&im->ino_root, ino);
if (e && f2fs_test_bit(devidx, (char *)&e->dirty_device))
is_dirty = true;
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);
return is_dirty;
}

@@ -519,11 +509,11 @@ int acquire_orphan_inode(struct f2fs_sb_info *sbi)
struct inode_management *im = &sbi->im[ORPHAN_INO];
int err = 0;

- spin_lock(&im->ino_lock);
+ xa_lock(&im->ino_root);

#ifdef CONFIG_F2FS_FAULT_INJECTION
if (time_to_inject(sbi, FAULT_ORPHAN)) {
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);
f2fs_show_injection_info(FAULT_ORPHAN);
return -ENOSPC;
}
@@ -532,7 +522,7 @@ int acquire_orphan_inode(struct f2fs_sb_info *sbi)
err = -ENOSPC;
else
im->ino_num++;
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);

return err;
}
@@ -541,10 +531,10 @@ void release_orphan_inode(struct f2fs_sb_info *sbi)
{
struct inode_management *im = &sbi->im[ORPHAN_INO];

- spin_lock(&im->ino_lock);
+ xa_lock(&im->ino_root);
f2fs_bug_on(sbi, im->ino_num == 0);
im->ino_num--;
- spin_unlock(&im->ino_lock);
+ xa_unlock(&im->ino_root);
}

void add_orphan_inode(struct inode *inode)
@@ -677,7 +667,7 @@ static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
orphan_blocks = GET_ORPHAN_BLOCKS(im->ino_num);

/*
- * we don't need to do spin_lock(&im->ino_lock) here, since all the
+ * we don't need to lock the ino_root here, since all the
* orphan inode operations are covered under f2fs_lock_op().
* And, spin_lock should be avoided due to page operations below.
*/
@@ -1433,8 +1423,7 @@ void init_ino_entry_info(struct f2fs_sb_info *sbi)
for (i = 0; i < MAX_INO_ENTRY; i++) {
struct inode_management *im = &sbi->im[i];

- INIT_RADIX_TREE(&im->ino_root, GFP_ATOMIC);
- spin_lock_init(&im->ino_lock);
+ xa_init(&im->ino_root);
INIT_LIST_HEAD(&im->ino_list);
im->ino_num = 0;
}
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 6abf26c31d01..b3ee784b49bc 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -994,8 +994,7 @@ enum inode_type {

/* for inner inode cache management */
struct inode_management {
- struct radix_tree_root ino_root; /* ino entry array */
- spinlock_t ino_lock; /* for ino entry lock */
+ struct xarray ino_root; /* ino entry array */
struct list_head ino_list; /* inode list head */
unsigned long ino_num; /* number of entries */
};
--
2.15.1


2018-01-17 20:29:22

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 91/99] btrfs: Convert name_cache to XArray

From: Matthew Wilcox <[email protected]>

This is a very straightforward conversion. Collisions in the name
cache would be better handled with an hlist, but that's a patch for
another day.
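
A quick reminder of the xa_insert() convention, since it takes over
from radix_tree_insert() here: it only stores into an empty slot,
returning -EEXIST (or -ENOMEM) otherwise, so the failure path frees
exactly what it allocated:

	ret = xa_insert(&sctx->name_cache, ino, nce_head, GFP_KERNEL);
	if (ret < 0) {
		kfree(nce_head);	/* nothing was stored */
		return ret;
	}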

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/send.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 20d3300bd268..3891a8e958fa 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -23,7 +23,7 @@
#include <linux/mount.h>
#include <linux/xattr.h>
#include <linux/posix_acl_xattr.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/vmalloc.h>
#include <linux/string.h>
#include <linux/compat.h>
@@ -118,7 +118,7 @@ struct send_ctx {
struct list_head new_refs;
struct list_head deleted_refs;

- struct radix_tree_root name_cache;
+ struct xarray name_cache;
struct list_head name_cache_list;
int name_cache_size;

@@ -2021,8 +2021,7 @@ static int name_cache_insert(struct send_ctx *sctx,
int ret = 0;
struct list_head *nce_head;

- nce_head = radix_tree_lookup(&sctx->name_cache,
- (unsigned long)nce->ino);
+ nce_head = xa_load(&sctx->name_cache, (unsigned long)nce->ino);
if (!nce_head) {
nce_head = kmalloc(sizeof(*nce_head), GFP_KERNEL);
if (!nce_head) {
@@ -2031,7 +2030,8 @@ static int name_cache_insert(struct send_ctx *sctx,
}
INIT_LIST_HEAD(nce_head);

- ret = radix_tree_insert(&sctx->name_cache, nce->ino, nce_head);
+ ret = xa_insert(&sctx->name_cache, nce->ino, nce_head,
+ GFP_KERNEL);
if (ret < 0) {
kfree(nce_head);
kfree(nce);
@@ -2050,8 +2050,7 @@ static void name_cache_delete(struct send_ctx *sctx,
{
struct list_head *nce_head;

- nce_head = radix_tree_lookup(&sctx->name_cache,
- (unsigned long)nce->ino);
+ nce_head = xa_load(&sctx->name_cache, (unsigned long)nce->ino);
if (!nce_head) {
btrfs_err(sctx->send_root->fs_info,
"name_cache_delete lookup failed ino %llu cache size %d, leaking memory",
@@ -2066,7 +2065,7 @@ static void name_cache_delete(struct send_ctx *sctx,
* We may not get to the final release of nce_head if the lookup fails
*/
if (nce_head && list_empty(nce_head)) {
- radix_tree_delete(&sctx->name_cache, (unsigned long)nce->ino);
+ xa_erase(&sctx->name_cache, (unsigned long)nce->ino);
kfree(nce_head);
}
}
@@ -2077,7 +2076,7 @@ static struct name_cache_entry *name_cache_search(struct send_ctx *sctx,
struct list_head *nce_head;
struct name_cache_entry *cur;

- nce_head = radix_tree_lookup(&sctx->name_cache, (unsigned long)ino);
+ nce_head = xa_load(&sctx->name_cache, (unsigned long)ino);
if (!nce_head)
return NULL;

@@ -6526,7 +6525,7 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)

INIT_LIST_HEAD(&sctx->new_refs);
INIT_LIST_HEAD(&sctx->deleted_refs);
- INIT_RADIX_TREE(&sctx->name_cache, GFP_KERNEL);
+ xa_init(&sctx->name_cache);
INIT_LIST_HEAD(&sctx->name_cache_list);

sctx->flags = arg->flags;
--
2.15.1


2018-01-17 20:29:36

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 92/99] f2fs: Convert pids radix tree to XArray

From: Matthew Wilcox <[email protected]>

The XArray API works out rather well for this user.
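
The win comes from xa_store() returning the previous entry at that
index, which collapses the old preload/lock/lookup/delete/insert dance
into one call:

	struct task_struct *old = xa_store(&pids, pid, current, GFP_NOFS);

	if (old != current)
		pr_debug("first trace from pid %d\n", pid);

(An allocation failure comes back as an error entry, which also
compares unequal to current, so this tracing-only caller can ignore
it.)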

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/super.c | 2 --
fs/f2fs/trace.c | 60 ++++-----------------------------------------------------
fs/f2fs/trace.h | 2 --
3 files changed, 4 insertions(+), 60 deletions(-)

diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 708155d9c2e4..d608edffe69e 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -2831,8 +2831,6 @@ static int __init init_f2fs_fs(void)
{
int err;

- f2fs_build_trace_ios();
-
err = init_inodecache();
if (err)
goto fail;
diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
index bccbbf2616d2..f316a42c547f 100644
--- a/fs/f2fs/trace.c
+++ b/fs/f2fs/trace.c
@@ -16,8 +16,7 @@
#include "f2fs.h"
#include "trace.h"

-static RADIX_TREE(pids, GFP_ATOMIC);
-static spinlock_t pids_lock;
+static DEFINE_XARRAY(pids);
static struct last_io_info last_io;

static inline void __print_last_io(void)
@@ -57,28 +56,13 @@ void f2fs_trace_pid(struct page *page)
{
struct inode *inode = page->mapping->host;
pid_t pid = task_pid_nr(current);
- void *p;

set_page_private(page, (unsigned long)pid);

- if (radix_tree_preload(GFP_NOFS))
- return;
-
- spin_lock(&pids_lock);
- p = radix_tree_lookup(&pids, pid);
- if (p == current)
- goto out;
- if (p)
- radix_tree_delete(&pids, pid);
-
- f2fs_radix_tree_insert(&pids, pid, current);
-
- trace_printk("%3x:%3x %4x %-16s\n",
+ if (xa_store(&pids, pid, current, GFP_NOFS) != current)
+ trace_printk("%3x:%3x %4x %-16s\n",
MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
pid, current->comm);
-out:
- spin_unlock(&pids_lock);
- radix_tree_preload_end();
}

void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
@@ -120,43 +104,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
return;
}

-void f2fs_build_trace_ios(void)
-{
- spin_lock_init(&pids_lock);
-}
-
-#define PIDVEC_SIZE 128
-static unsigned int gang_lookup_pids(pid_t *results, unsigned long first_index,
- unsigned int max_items)
-{
- struct radix_tree_iter iter;
- void **slot;
- unsigned int ret = 0;
-
- if (unlikely(!max_items))
- return 0;
-
- radix_tree_for_each_slot(slot, &pids, &iter, first_index) {
- results[ret] = iter.index;
- if (++ret == max_items)
- break;
- }
- return ret;
-}
-
void f2fs_destroy_trace_ios(void)
{
- pid_t pid[PIDVEC_SIZE];
- pid_t next_pid = 0;
- unsigned int found;
-
- spin_lock(&pids_lock);
- while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
- unsigned idx;
-
- next_pid = pid[found - 1] + 1;
- for (idx = 0; idx < found; idx++)
- radix_tree_delete(&pids, pid[idx]);
- }
- spin_unlock(&pids_lock);
+ xa_destroy(&pids);
}
diff --git a/fs/f2fs/trace.h b/fs/f2fs/trace.h
index 67db24ac1e85..157e4564e48b 100644
--- a/fs/f2fs/trace.h
+++ b/fs/f2fs/trace.h
@@ -34,12 +34,10 @@ struct last_io_info {

extern void f2fs_trace_pid(struct page *);
extern void f2fs_trace_ios(struct f2fs_io_info *, int);
-extern void f2fs_build_trace_ios(void);
extern void f2fs_destroy_trace_ios(void);
#else
#define f2fs_trace_pid(p)
#define f2fs_trace_ios(i, n)
-#define f2fs_build_trace_ios()
#define f2fs_destroy_trace_ios()

#endif
--
2.15.1


2018-01-17 20:30:07

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 89/99] btrfs: Convert buffer_radix to XArray

From: Matthew Wilcox <[email protected]>

Eliminate the buffer_lock as the internal xa_lock provides all the
necessary protection. We can remove the radix_tree_preload calls, but
I can't find a good way to use the 'exists' result from xa_cmpxchg().
We could resort to the advanced API to improve this, but it's a really
unlikely case (nothing in the xarray when we first look; something there
when we try to add the newly-allocated extent buffer), so I think it's
not worth optimising for.
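
The reason the 'exists' result is awkward to use directly: xa_cmpxchg()
hands back a bare pointer with no reference held, while
find_extent_buffer() validates the buffer and takes a reference under
RCU. So the loop has to go back through the lookup; a condensed sketch:

again:
	exists = find_extent_buffer(fs_info, start);	/* takes a ref */
	if (exists)
		goto free_eb;
	/* ... allocate eb if we don't already have one ... */
	exists = xa_cmpxchg(&fs_info->buffer_array, start >> PAGE_SHIFT,
			NULL, eb, GFP_NOFS);
	if (xa_is_err(exists)) {
		exists = NULL;		/* allocation failure; give up */
		goto free_eb;
	}
	if (exists)
		goto again;	/* raced: redo the ref-taking lookup */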

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/ctree.h | 5 ++-
fs/btrfs/disk-io.c | 3 +-
fs/btrfs/extent_io.c | 82 ++++++++++++++++++--------------------------
fs/btrfs/tests/btrfs-tests.c | 26 +++-----------
4 files changed, 40 insertions(+), 76 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 272d099bed7e..87984ce3a4c2 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1058,9 +1058,8 @@ struct btrfs_fs_info {
/* readahead works cnt */
atomic_t reada_works_cnt;

- /* Extent buffer radix tree */
- spinlock_t buffer_lock;
- struct radix_tree_root buffer_radix;
+ /* Extent buffer array */
+ struct xarray buffer_array;

/* next backup root to be overwritten */
int backup_root_index;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 1eae29045d43..650d1350b64d 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2429,7 +2429,7 @@ int open_ctree(struct super_block *sb,
}

xa_init(&fs_info->fs_roots);
- INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
+ xa_init(&fs_info->buffer_array);
INIT_LIST_HEAD(&fs_info->trans_list);
INIT_LIST_HEAD(&fs_info->dead_roots);
INIT_LIST_HEAD(&fs_info->delayed_iputs);
@@ -2442,7 +2442,6 @@ int open_ctree(struct super_block *sb,
spin_lock_init(&fs_info->tree_mod_seq_lock);
spin_lock_init(&fs_info->super_lock);
spin_lock_init(&fs_info->qgroup_op_lock);
- spin_lock_init(&fs_info->buffer_lock);
spin_lock_init(&fs_info->unused_bgs_lock);
rwlock_init(&fs_info->tree_mod_log_lock);
mutex_init(&fs_info->unused_bg_unpin_mutex);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index fd5e9d887328..2b43fa11c9e2 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4884,8 +4884,7 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
struct extent_buffer *eb;

rcu_read_lock();
- eb = radix_tree_lookup(&fs_info->buffer_radix,
- start >> PAGE_SHIFT);
+ eb = xa_load(&fs_info->buffer_array, start >> PAGE_SHIFT);
if (eb && atomic_inc_not_zero(&eb->refs)) {
rcu_read_unlock();
/*
@@ -4919,31 +4918,24 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
u64 start)
{
- struct extent_buffer *eb, *exists = NULL;
- int ret;
+ struct extent_buffer *exists, *eb = NULL;

- eb = find_extent_buffer(fs_info, start);
- if (eb)
- return eb;
- eb = alloc_dummy_extent_buffer(fs_info, start);
- if (!eb)
- return NULL;
- eb->fs_info = fs_info;
again:
- ret = radix_tree_preload(GFP_NOFS);
- if (ret)
+ exists = find_extent_buffer(fs_info, start);
+ if (exists)
goto free_eb;
- spin_lock(&fs_info->buffer_lock);
- ret = radix_tree_insert(&fs_info->buffer_radix,
- start >> PAGE_SHIFT, eb);
- spin_unlock(&fs_info->buffer_lock);
- radix_tree_preload_end();
- if (ret == -EEXIST) {
- exists = find_extent_buffer(fs_info, start);
- if (exists)
+ if (!eb)
+ eb = alloc_dummy_extent_buffer(fs_info, start);
+ if (!eb)
+ return NULL;
+ exists = xa_cmpxchg(&fs_info->buffer_array, start >> PAGE_SHIFT,
+ NULL, eb, GFP_NOFS);
+ if (unlikely(exists)) {
+ if (xa_is_err(exists)) {
+ exists = NULL;
goto free_eb;
- else
- goto again;
+ }
+ goto again;
}
check_buffer_tree_ref(eb);
set_bit(EXTENT_BUFFER_IN_TREE, &eb->bflags);
@@ -4957,7 +4949,8 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
atomic_inc(&eb->refs);
return eb;
free_eb:
- btrfs_release_extent_buffer(eb);
+ if (eb)
+ btrfs_release_extent_buffer(eb);
return exists;
}
#endif
@@ -4969,22 +4962,24 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
unsigned long num_pages = num_extent_pages(start, len);
unsigned long i;
unsigned long index = start >> PAGE_SHIFT;
- struct extent_buffer *eb;
+ struct extent_buffer *eb = NULL;
struct extent_buffer *exists = NULL;
struct page *p;
struct address_space *mapping = fs_info->btree_inode->i_mapping;
int uptodate = 1;
- int ret;

if (!IS_ALIGNED(start, fs_info->sectorsize)) {
btrfs_err(fs_info, "bad tree block start %llu", start);
return ERR_PTR(-EINVAL);
}

- eb = find_extent_buffer(fs_info, start);
- if (eb)
- return eb;
+again:
+ exists = find_extent_buffer(fs_info, start);
+ if (exists)
+ goto free_eb;

+ if (eb)
+ goto add;
eb = __alloc_extent_buffer(fs_info, start, len);
if (!eb)
return ERR_PTR(-ENOMEM);
@@ -5037,24 +5032,15 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
}
if (uptodate)
set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags);
-again:
- ret = radix_tree_preload(GFP_NOFS);
- if (ret) {
- exists = ERR_PTR(ret);
- goto free_eb;
- }
-
- spin_lock(&fs_info->buffer_lock);
- ret = radix_tree_insert(&fs_info->buffer_radix,
- start >> PAGE_SHIFT, eb);
- spin_unlock(&fs_info->buffer_lock);
- radix_tree_preload_end();
- if (ret == -EEXIST) {
- exists = find_extent_buffer(fs_info, start);
- if (exists)
+add:
+ exists = xa_cmpxchg(&fs_info->buffer_array, start >> PAGE_SHIFT,
+ NULL, eb, GFP_NOFS);
+ if (unlikely(exists)) {
+ if (xa_is_err(exists)) {
+ exists = NULL;
goto free_eb;
- else
- goto again;
+ }
+ goto again;
}
/* add one reference for the tree */
check_buffer_tree_ref(eb);
@@ -5107,10 +5093,8 @@ static int release_extent_buffer(struct extent_buffer *eb)

spin_unlock(&eb->refs_lock);

- spin_lock(&fs_info->buffer_lock);
- radix_tree_delete(&fs_info->buffer_radix,
+ xa_erase(&fs_info->buffer_array,
eb->start >> PAGE_SHIFT);
- spin_unlock(&fs_info->buffer_lock);
} else {
spin_unlock(&eb->refs_lock);
}
diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
index 570bce31a301..f80fd54903e9 100644
--- a/fs/btrfs/tests/btrfs-tests.c
+++ b/fs/btrfs/tests/btrfs-tests.c
@@ -110,7 +110,6 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
return NULL;
}

- spin_lock_init(&fs_info->buffer_lock);
spin_lock_init(&fs_info->qgroup_lock);
spin_lock_init(&fs_info->qgroup_op_lock);
spin_lock_init(&fs_info->super_lock);
@@ -125,7 +124,7 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
INIT_LIST_HEAD(&fs_info->dirty_qgroups);
INIT_LIST_HEAD(&fs_info->dead_roots);
INIT_LIST_HEAD(&fs_info->tree_mod_seq_list);
- INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
+ xa_init(&fs_info->buffer_array);
xa_init(&fs_info->fs_roots);
extent_io_tree_init(&fs_info->freed_extents[0], NULL);
extent_io_tree_init(&fs_info->freed_extents[1], NULL);
@@ -139,8 +138,8 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)

void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
{
- struct radix_tree_iter iter;
- void **slot;
+ struct extent_buffer *eb;
+ unsigned long index = 0;

if (!fs_info)
return;
@@ -151,25 +150,8 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)

test_mnt->mnt_sb->s_fs_info = NULL;

- spin_lock(&fs_info->buffer_lock);
- radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) {
- struct extent_buffer *eb;
-
- eb = radix_tree_deref_slot_protected(slot, &fs_info->buffer_lock);
- if (!eb)
- continue;
- /* Shouldn't happen but that kind of thinking creates CVE's */
- if (radix_tree_exception(eb)) {
- if (radix_tree_deref_retry(eb))
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- slot = radix_tree_iter_resume(slot, &iter);
- spin_unlock(&fs_info->buffer_lock);
+ xa_for_each(&fs_info->buffer_array, eb, index, ULONG_MAX, XA_PRESENT)
free_extent_buffer_stale(eb);
- spin_lock(&fs_info->buffer_lock);
- }
- spin_unlock(&fs_info->buffer_lock);

btrfs_free_qgroup_config(fs_info);
btrfs_free_fs_roots(fs_info);
--
2.15.1


2018-01-17 20:30:58

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 90/99] btrfs: Convert delayed_nodes_tree to XArray

From: Matthew Wilcox <[email protected]>

Rename it to just 'delayed_nodes' and remove it from the protection of
btrfs_root->inode_lock.
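
The only subtlety is decoding the __xa_cmpxchg() result (the caller
holds the xa_lock); roughly:

	void *old = __xa_cmpxchg(xa, index, NULL, node, GFP_NOFS);

	if (!old) {
		/* success: @node is now in the array */
	} else if (xa_is_err(old)) {
		err = xa_err(old);	/* allocation failed */
	} else {
		/* @old is a real entry which won the race; retry */
	}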

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/ctree.h | 8 +++---
fs/btrfs/delayed-inode.c | 65 ++++++++++++++++--------------------------------
fs/btrfs/disk-io.c | 2 +-
fs/btrfs/inode.c | 2 +-
4 files changed, 27 insertions(+), 50 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 87984ce3a4c2..9acfdc623d15 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1219,11 +1219,9 @@ struct btrfs_root {
/* red-black tree that keeps track of in-memory inodes */
struct rb_root inode_tree;

- /*
- * radix tree that keeps track of delayed nodes of every inode,
- * protected by inode_lock
- */
- struct radix_tree_root delayed_nodes_tree;
+ /* track delayed nodes of every inode */
+ struct xarray delayed_nodes;
+
/*
* right now this just gets used so that a root has its own devid
* for stat. It may be used for more later
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 056276101c63..156a762f3809 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -86,7 +86,7 @@ static struct btrfs_delayed_node *btrfs_get_delayed_node(
}

spin_lock(&root->inode_lock);
- node = radix_tree_lookup(&root->delayed_nodes_tree, ino);
+ node = xa_load(&root->delayed_nodes, ino);

if (node) {
if (btrfs_inode->delayed_node) {
@@ -131,10 +131,9 @@ static struct btrfs_delayed_node *btrfs_get_delayed_node(
static struct btrfs_delayed_node *btrfs_get_or_create_delayed_node(
struct btrfs_inode *btrfs_inode)
{
- struct btrfs_delayed_node *node;
+ struct btrfs_delayed_node *node, *exists;
struct btrfs_root *root = btrfs_inode->root;
u64 ino = btrfs_ino(btrfs_inode);
- int ret;

again:
node = btrfs_get_delayed_node(btrfs_inode);
@@ -149,23 +148,18 @@ static struct btrfs_delayed_node *btrfs_get_or_create_delayed_node(
/* cached in the btrfs inode and can be accessed */
refcount_set(&node->refs, 2);

- ret = radix_tree_preload(GFP_NOFS);
- if (ret) {
+ xa_lock(&root->delayed_nodes);
+ exists = __xa_cmpxchg(&root->delayed_nodes, ino, NULL, node, GFP_NOFS);
+ if (unlikely(exists)) {
+ int ret = xa_err(exists);
+ xa_unlock(&root->delayed_nodes);
kmem_cache_free(delayed_node_cache, node);
+ if (!ret)
+ goto again; /* a real entry won the race */
return ERR_PTR(ret);
}
-
- spin_lock(&root->inode_lock);
- ret = radix_tree_insert(&root->delayed_nodes_tree, ino, node);
- if (ret == -EEXIST) {
- spin_unlock(&root->inode_lock);
- kmem_cache_free(delayed_node_cache, node);
- radix_tree_preload_end();
- goto again;
- }
btrfs_inode->delayed_node = node;
- spin_unlock(&root->inode_lock);
- radix_tree_preload_end();
+ xa_unlock(&root->delayed_nodes);

return node;
}
@@ -278,15 +272,12 @@ static void __btrfs_release_delayed_node(
if (refcount_dec_and_test(&delayed_node->refs)) {
struct btrfs_root *root = delayed_node->root;

- spin_lock(&root->inode_lock);
/*
* Once our refcount goes to zero, nobody is allowed to bump it
* back up. We can delete it now.
*/
ASSERT(refcount_read(&delayed_node->refs) == 0);
- radix_tree_delete(&root->delayed_nodes_tree,
- delayed_node->inode_id);
- spin_unlock(&root->inode_lock);
+ xa_erase(&root->delayed_nodes, delayed_node->inode_id);
kmem_cache_free(delayed_node_cache, delayed_node);
}
}
@@ -1926,31 +1917,19 @@ void btrfs_kill_delayed_inode_items(struct btrfs_inode *inode)

void btrfs_kill_all_delayed_nodes(struct btrfs_root *root)
{
- u64 inode_id = 0;
- struct btrfs_delayed_node *delayed_nodes[8];
- int i, n;
-
- while (1) {
- spin_lock(&root->inode_lock);
- n = radix_tree_gang_lookup(&root->delayed_nodes_tree,
- (void **)delayed_nodes, inode_id,
- ARRAY_SIZE(delayed_nodes));
- if (!n) {
- spin_unlock(&root->inode_lock);
- break;
- }
-
- inode_id = delayed_nodes[n - 1]->inode_id + 1;
-
- for (i = 0; i < n; i++)
- refcount_inc(&delayed_nodes[i]->refs);
- spin_unlock(&root->inode_lock);
+ struct btrfs_delayed_node *node;
+ unsigned long inode_id = 0;

- for (i = 0; i < n; i++) {
- __btrfs_kill_delayed_node(delayed_nodes[i]);
- btrfs_release_delayed_node(delayed_nodes[i]);
- }
+ xa_lock(&root->delayed_nodes);
+ xa_for_each(&root->delayed_nodes, node, inode_id, ULONG_MAX,
+ XA_PRESENT) {
+ refcount_inc(&node->refs);
+ xa_unlock(&root->delayed_nodes);
+ __btrfs_kill_delayed_node(node);
+ btrfs_release_delayed_node(node);
+ xa_lock(&root->delayed_nodes);
}
+ xa_unlock(&root->delayed_nodes);
}

void btrfs_destroy_delayed_inodes(struct btrfs_fs_info *fs_info)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 650d1350b64d..593be6c53fae 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1149,7 +1149,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
root->nr_ordered_extents = 0;
root->name = NULL;
root->inode_tree = RB_ROOT;
- INIT_RADIX_TREE(&root->delayed_nodes_tree, GFP_ATOMIC);
+ xa_init(&root->delayed_nodes);
root->block_rsv = NULL;
root->orphan_block_rsv = NULL;

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d7d2c556d5a2..9b6d08ca6d0c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -3793,7 +3793,7 @@ static int btrfs_read_locked_inode(struct inode *inode)
* cache.
*
* This is required for both inode re-read from disk and delayed inode
- * in delayed_nodes_tree.
+ * in delayed_nodes.
*/
if (BTRFS_I(inode)->last_trans == fs_info->generation)
set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
--
2.15.1


2018-01-17 20:31:04

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 86/99] btrfs: Convert reada_zones to XArray

From: Matthew Wilcox <[email protected]>

The use of the reada_lock means we have to use the xa_reserve() API.
If we can avoid using reada_lock to protect this xarray, we can drop
the use of that function.
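
Concretely, the allocation is split from the insertion: reserve the
slot outside the lock, where we may sleep, then fill it with a
non-sleeping cmpxchg inside it. In sketch form:

	/* may sleep to allocate xarray nodes */
	if (xa_reserve(&dev->reada_zones, index, GFP_KERNEL))
		return NULL;

	spin_lock(&fs_info->reada_lock);
	/* the reservation means this shouldn't need memory; GFP_NOWAIT
	 * keeps it safe under the spinlock if it somehow does */
	curr = xa_cmpxchg(&dev->reada_zones, index, NULL, zone,
			GFP_NOWAIT | __GFP_NOWARN);
	spin_unlock(&fs_info->reada_lock);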

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/reada.c | 54 +++++++++++++++++++-----------------------------------
fs/btrfs/volumes.c | 2 +-
fs/btrfs/volumes.h | 2 +-
3 files changed, 21 insertions(+), 37 deletions(-)

diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index ab852b8e3e37..ef8e84ff2012 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -239,17 +239,16 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
{
struct btrfs_fs_info *fs_info = dev->fs_info;
int ret;
- struct reada_zone *zone;
+ struct reada_zone *curr, *zone;
struct btrfs_block_group_cache *cache = NULL;
u64 start;
u64 end;
+ unsigned long index = logical >> PAGE_SHIFT;
int i;

- zone = NULL;
spin_lock(&fs_info->reada_lock);
- ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone,
- logical >> PAGE_SHIFT, 1);
- if (ret == 1 && logical >= zone->start && logical <= zone->end) {
+ zone = xa_find(&dev->reada_zones, &index, ULONG_MAX, XA_PRESENT);
+ if (zone && logical >= zone->start && logical <= zone->end) {
kref_get(&zone->refcnt);
spin_unlock(&fs_info->reada_lock);
return zone;
@@ -269,7 +268,8 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
if (!zone)
return NULL;

- ret = radix_tree_preload(GFP_KERNEL);
+ ret = xa_reserve(&dev->reada_zones,
+ (unsigned long)(end >> PAGE_SHIFT), GFP_KERNEL);
if (ret) {
kfree(zone);
return NULL;
@@ -290,21 +290,18 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
zone->ndevs = bbio->num_stripes;

spin_lock(&fs_info->reada_lock);
- ret = radix_tree_insert(&dev->reada_zones,
+ curr = xa_cmpxchg(&dev->reada_zones,
(unsigned long)(zone->end >> PAGE_SHIFT),
- zone);
-
- if (ret == -EEXIST) {
+ NULL, zone, GFP_NOWAIT | __GFP_NOWARN);
+ if (curr) {
kfree(zone);
- ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone,
- logical >> PAGE_SHIFT, 1);
- if (ret == 1 && logical >= zone->start && logical <= zone->end)
+ zone = curr;
+ if (logical >= zone->start && logical <= zone->end)
kref_get(&zone->refcnt);
else
zone = NULL;
}
spin_unlock(&fs_info->reada_lock);
- radix_tree_preload_end();

return zone;
}
@@ -537,9 +534,7 @@ static void reada_zone_release(struct kref *kref)
{
struct reada_zone *zone = container_of(kref, struct reada_zone, refcnt);

- radix_tree_delete(&zone->device->reada_zones,
- zone->end >> PAGE_SHIFT);
-
+ xa_erase(&zone->device->reada_zones, zone->end >> PAGE_SHIFT);
kfree(zone);
}

@@ -592,7 +587,7 @@ static void reada_peer_zones_set_lock(struct reada_zone *zone, int lock)

for (i = 0; i < zone->ndevs; ++i) {
struct reada_zone *peer;
- peer = radix_tree_lookup(&zone->devs[i]->reada_zones, index);
+ peer = xa_load(&zone->devs[i]->reada_zones, index);
if (peer && peer->device != zone->device)
peer->locked = lock;
}
@@ -603,12 +598,11 @@ static void reada_peer_zones_set_lock(struct reada_zone *zone, int lock)
*/
static int reada_pick_zone(struct btrfs_device *dev)
{
- struct reada_zone *top_zone = NULL;
+ struct reada_zone *zone, *top_zone = NULL;
struct reada_zone *top_locked_zone = NULL;
u64 top_elems = 0;
u64 top_locked_elems = 0;
unsigned long index = 0;
- int ret;

if (dev->reada_curr_zone) {
reada_peer_zones_set_lock(dev->reada_curr_zone, 0);
@@ -616,14 +610,7 @@ static int reada_pick_zone(struct btrfs_device *dev)
dev->reada_curr_zone = NULL;
}
/* pick the zone with the most elements */
- while (1) {
- struct reada_zone *zone;
-
- ret = radix_tree_gang_lookup(&dev->reada_zones,
- (void **)&zone, index, 1);
- if (ret == 0)
- break;
- index = (zone->end >> PAGE_SHIFT) + 1;
+ xa_for_each(&dev->reada_zones, zone, index, ULONG_MAX, XA_PRESENT) {
if (zone->locked) {
if (zone->elems > top_locked_elems) {
top_locked_elems = zone->elems;
@@ -819,15 +806,13 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)

spin_lock(&fs_info->reada_lock);
list_for_each_entry(device, &fs_devices->devices, dev_list) {
+ struct reada_zone *zone;
+
btrfs_debug(fs_info, "dev %lld has %d in flight", device->devid,
atomic_read(&device->reada_in_flight));
index = 0;
- while (1) {
- struct reada_zone *zone;
- ret = radix_tree_gang_lookup(&device->reada_zones,
- (void **)&zone, index, 1);
- if (ret == 0)
- break;
+ xa_for_each(&device->reada_zones, zone, index, ULONG_MAX,
+ XA_PRESENT) {
pr_debug(" zone %llu-%llu elems %llu locked %d devs",
zone->start, zone->end, zone->elems,
zone->locked);
@@ -839,7 +824,6 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
pr_cont(" curr off %llu",
device->reada_next - zone->start);
pr_cont("\n");
- index = (zone->end >> PAGE_SHIFT) + 1;
}
cnt = 0;
index = 0;
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index cba286183ff9..8e683799b436 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -247,7 +247,7 @@ static struct btrfs_device *__alloc_device(void)
atomic_set(&dev->reada_in_flight, 0);
atomic_set(&dev->dev_stats_ccnt, 0);
btrfs_device_data_ordered_init(dev);
- INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ xa_init(&dev->reada_zones);
INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);

return dev;
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 335fd1590458..aeabe03d3e44 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -139,7 +139,7 @@ struct btrfs_device {
atomic_t reada_in_flight;
u64 reada_next;
struct reada_zone *reada_curr_zone;
- struct radix_tree_root reada_zones;
+ struct xarray reada_zones;
struct radix_tree_root reada_extents;

/* disk I/O failure stats. For detailed description refer to
--
2.15.1


2018-01-17 20:31:23

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 87/99] btrfs: Convert reada_extents to XArray

From: Matthew Wilcox <[email protected]>

Straightforward conversion.
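
One behavioural detail worth noting: xa_find() writes the index of the
entry it found back through its second argument, so the cursor is
advanced with a plain index++ instead of being recomputed from the
entry. A sketch (process() is hypothetical):

	unsigned long index = 0;
	struct reada_extent *re;

	while ((re = xa_find(&dev->reada_extents, &index, ULONG_MAX,
			XA_PRESENT)) != NULL) {
		process(re);	/* index is now where @re was found */
		index++;	/* resume after this entry */
	}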

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/reada.c | 32 +++++++++++++++++---------------
fs/btrfs/volumes.c | 2 +-
fs/btrfs/volumes.h | 2 +-
3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index ef8e84ff2012..8100f1565250 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -438,13 +438,14 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
continue;
}
prev_dev = dev;
- ret = radix_tree_insert(&dev->reada_extents, index, re);
+ ret = xa_insert(&dev->reada_extents, index, re,
+ GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
if (ret) {
while (--nzones >= 0) {
dev = re->zones[nzones]->device;
BUG_ON(dev == NULL);
/* ignore whether the entry was inserted */
- radix_tree_delete(&dev->reada_extents, index);
+ xa_erase(&dev->reada_extents, index);
}
radix_tree_delete(&fs_info->reada_tree, index);
spin_unlock(&fs_info->reada_lock);
@@ -504,7 +505,7 @@ static void reada_extent_put(struct btrfs_fs_info *fs_info,
for (i = 0; i < re->nzones; ++i) {
struct reada_zone *zone = re->zones[i];

- radix_tree_delete(&zone->device->reada_extents, index);
+ xa_erase(&zone->device->reada_extents, index);
}

spin_unlock(&fs_info->reada_lock);
@@ -644,6 +645,7 @@ static int reada_start_machine_dev(struct btrfs_device *dev)
int mirror_num = 0;
struct extent_buffer *eb = NULL;
u64 logical;
+ unsigned long index;
int ret;
int i;

@@ -660,19 +662,19 @@ static int reada_start_machine_dev(struct btrfs_device *dev)
* a contiguous block of extents, we could also coagulate them or use
* plugging to speed things up
*/
- ret = radix_tree_gang_lookup(&dev->reada_extents, (void **)&re,
- dev->reada_next >> PAGE_SHIFT, 1);
- if (ret == 0 || re->logical > dev->reada_curr_zone->end) {
+ index = dev->reada_next >> PAGE_SHIFT;
+ re = xa_find(&dev->reada_extents, &index, ULONG_MAX, XA_PRESENT);
+ if (!re || re->logical > dev->reada_curr_zone->end) {
ret = reada_pick_zone(dev);
if (!ret) {
spin_unlock(&fs_info->reada_lock);
return 0;
}
- re = NULL;
- ret = radix_tree_gang_lookup(&dev->reada_extents, (void **)&re,
- dev->reada_next >> PAGE_SHIFT, 1);
+ index = dev->reada_next >> PAGE_SHIFT;
+ re = xa_find(&dev->reada_extents, &index, ULONG_MAX,
+ XA_PRESENT);
}
- if (ret == 0) {
+ if (!re) {
spin_unlock(&fs_info->reada_lock);
return 0;
}
@@ -828,11 +830,11 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
cnt = 0;
index = 0;
while (all) {
- struct reada_extent *re = NULL;
+ struct reada_extent *re;

- ret = radix_tree_gang_lookup(&device->reada_extents,
- (void **)&re, index, 1);
- if (ret == 0)
+ re = xa_find(&device->reada_extents, &index, ULONG_MAX,
+ XA_PRESENT);
+ if (!re)
break;
pr_debug(" re: logical %llu size %u empty %d scheduled %d",
re->logical, fs_info->nodesize,
@@ -848,7 +850,7 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
}
}
pr_cont("\n");
- index = (re->logical >> PAGE_SHIFT) + 1;
+ index++;
if (++cnt > 15)
break;
}
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 8e683799b436..304c2ef4c557 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -248,7 +248,7 @@ static struct btrfs_device *__alloc_device(void)
atomic_set(&dev->dev_stats_ccnt, 0);
btrfs_device_data_ordered_init(dev);
xa_init(&dev->reada_zones);
- INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ xa_init(&dev->reada_extents);

return dev;
}
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index aeabe03d3e44..0e0c04e2613c 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -140,7 +140,7 @@ struct btrfs_device {
u64 reada_next;
struct reada_zone *reada_curr_zone;
struct xarray reada_zones;
- struct radix_tree_root reada_extents;
+ struct xarray reada_extents;

/* disk I/O failure stats. For detailed description refer to
* enum btrfs_dev_stat_values in ioctl.h */
--
2.15.1


2018-01-17 20:31:58

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 88/99] btrfs: Convert reada_tree to XArray

From: Matthew Wilcox <[email protected]>

Rename reada_tree to reada_array and use its embedded xa_lock in place
of reada_lock. The lock has to be taken with a nesting annotation,
because we take the xa_lock of the reada_extents and reada_zones
XArrays while already holding it (see the sketch below).
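
A condensed illustration of why the annotation is needed: all xa_locks
end up in the same lockdep class, so taking one inside another looks
like a self-deadlock unless one acquisition is marked as nested.

	/* outer: fs_info->reada_array.xa_lock, nested annotation */
	reada_lock(fs_info);
	/* inner: takes zone->device->reada_extents.xa_lock */
	xa_erase(&zone->device->reada_extents, index);
	reada_unlock(fs_info);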

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/ctree.h | 15 +++++--
fs/btrfs/disk-io.c | 3 +-
fs/btrfs/reada.c | 119 +++++++++++++++++++++++++----------------------------
3 files changed, 70 insertions(+), 67 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 173d72dfaab6..272d099bed7e 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1052,9 +1052,8 @@ struct btrfs_fs_info {

struct btrfs_delayed_root *delayed_root;

- /* readahead tree */
- spinlock_t reada_lock;
- struct radix_tree_root reada_tree;
+ /* readahead extents */
+ struct xarray reada_array;

/* readahead works cnt */
atomic_t reada_works_cnt;
@@ -1102,6 +1101,16 @@ struct btrfs_fs_info {
#endif
};

+static inline void reada_lock(struct btrfs_fs_info *fs_info)
+{
+ spin_lock_nested(&fs_info->reada_array.xa_lock, SINGLE_DEPTH_NESTING);
+}
+
+static inline void reada_unlock(struct btrfs_fs_info *fs_info)
+{
+ spin_unlock(&fs_info->reada_array.xa_lock);
+}
+
static inline struct btrfs_fs_info *btrfs_sb(struct super_block *sb)
{
return sb->s_fs_info;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 62995a55d112..1eae29045d43 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2478,8 +2478,7 @@ int open_ctree(struct super_block *sb,
fs_info->commit_interval = BTRFS_DEFAULT_COMMIT_INTERVAL;
fs_info->avg_delayed_ref_runtime = NSEC_PER_SEC >> 6; /* div by 64 */
/* readahead state */
- INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
- spin_lock_init(&fs_info->reada_lock);
+ xa_init(&fs_info->reada_array);
btrfs_init_ref_verify(fs_info);

fs_info->thread_pool_size = min_t(unsigned long,
diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index 8100f1565250..89ba0063903f 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -215,12 +215,11 @@ int btree_readahead_hook(struct extent_buffer *eb, int err)
struct reada_extent *re;

/* find extent */
- spin_lock(&fs_info->reada_lock);
- re = radix_tree_lookup(&fs_info->reada_tree,
- eb->start >> PAGE_SHIFT);
+ reada_lock(fs_info);
+ re = xa_load(&fs_info->reada_array, eb->start >> PAGE_SHIFT);
if (re)
re->refcnt++;
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
if (!re) {
ret = -1;
goto start_machine;
@@ -246,15 +245,15 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
unsigned long index = logical >> PAGE_SHIFT;
int i;

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
zone = xa_find(&dev->reada_zones, &index, ULONG_MAX, XA_PRESENT);
if (zone && logical >= zone->start && logical <= zone->end) {
kref_get(&zone->refcnt);
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
return zone;
}

- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);

cache = btrfs_lookup_block_group(fs_info, logical);
if (!cache)
@@ -289,7 +288,7 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
}
zone->ndevs = bbio->num_stripes;

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
curr = xa_cmpxchg(&dev->reada_zones,
(unsigned long)(zone->end >> PAGE_SHIFT),
NULL, zone, GFP_NOWAIT | __GFP_NOWARN);
@@ -301,7 +300,7 @@ static struct reada_zone *reada_find_zone(struct btrfs_device *dev, u64 logical,
else
zone = NULL;
}
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);

return zone;
}
@@ -323,11 +322,11 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
int dev_replace_is_ongoing;
int have_zone = 0;

- spin_lock(&fs_info->reada_lock);
- re = radix_tree_lookup(&fs_info->reada_tree, index);
+ reada_lock(fs_info);
+ re = xa_load(&fs_info->reada_array, index);
if (re)
re->refcnt++;
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);

if (re)
return re;
@@ -378,38 +377,32 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
kref_get(&zone->refcnt);
++zone->elems;
spin_unlock(&zone->lock);
- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
kref_put(&zone->refcnt, reada_zone_release);
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
}
if (re->nzones == 0) {
/* not a single zone found, error and out */
goto error;
}

- ret = radix_tree_preload(GFP_KERNEL);
- if (ret)
- goto error;
-
- /* insert extent in reada_tree + all per-device trees, all or nothing */
+ /*
+ * Insert extent in reada_array and all per-device arrays,
+ * all or nothing
+ */
btrfs_dev_replace_lock(&fs_info->dev_replace, 0);
- spin_lock(&fs_info->reada_lock);
- ret = radix_tree_insert(&fs_info->reada_tree, index, re);
- if (ret == -EEXIST) {
- re_exist = radix_tree_lookup(&fs_info->reada_tree, index);
- re_exist->refcnt++;
- spin_unlock(&fs_info->reada_lock);
- btrfs_dev_replace_unlock(&fs_info->dev_replace, 0);
- radix_tree_preload_end();
- goto error;
- }
- if (ret) {
- spin_unlock(&fs_info->reada_lock);
+ reada_lock(fs_info);
+ re_exist = __xa_cmpxchg(&fs_info->reada_array, index, NULL, re,
+ GFP_KERNEL);
+ if (re_exist) {
+ if (xa_is_err(re_exist))
+ re_exist = NULL;
+ else
+ re_exist->refcnt++;
+ reada_unlock(fs_info);
btrfs_dev_replace_unlock(&fs_info->dev_replace, 0);
- radix_tree_preload_end();
goto error;
}
- radix_tree_preload_end();
prev_dev = NULL;
dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(
&fs_info->dev_replace);
@@ -447,14 +440,14 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
/* ignore whether the entry was inserted */
xa_erase(&dev->reada_extents, index);
}
- radix_tree_delete(&fs_info->reada_tree, index);
- spin_unlock(&fs_info->reada_lock);
+ __xa_erase(&fs_info->reada_array, index);
+ reada_unlock(fs_info);
btrfs_dev_replace_unlock(&fs_info->dev_replace, 0);
goto error;
}
have_zone = 1;
}
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
btrfs_dev_replace_unlock(&fs_info->dev_replace, 0);

if (!have_zone)
@@ -473,16 +466,16 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
--zone->elems;
if (zone->elems == 0) {
/*
- * no fs_info->reada_lock needed, as this can't be
- * the last ref
+ * no fs_info->reada_array lock needed, as this
+ * can't be the last ref
*/
kref_put(&zone->refcnt, reada_zone_release);
}
spin_unlock(&zone->lock);

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
kref_put(&zone->refcnt, reada_zone_release);
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
}
btrfs_put_bbio(bbio);
kfree(re);
@@ -495,20 +488,20 @@ static void reada_extent_put(struct btrfs_fs_info *fs_info,
int i;
unsigned long index = re->logical >> PAGE_SHIFT;

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
if (--re->refcnt) {
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
return;
}

- radix_tree_delete(&fs_info->reada_tree, index);
+ __xa_erase(&fs_info->reada_array, index);
for (i = 0; i < re->nzones; ++i) {
struct reada_zone *zone = re->zones[i];

xa_erase(&zone->device->reada_extents, index);
}

- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);

for (i = 0; i < re->nzones; ++i) {
struct reada_zone *zone = re->zones[i];
@@ -517,15 +510,17 @@ static void reada_extent_put(struct btrfs_fs_info *fs_info,
spin_lock(&zone->lock);
--zone->elems;
if (zone->elems == 0) {
- /* no fs_info->reada_lock needed, as this can't be
- * the last ref */
+ /*
+ * no fs_info->reada_array lock needed, as this
+ * can't be the last ref
+ */
kref_put(&zone->refcnt, reada_zone_release);
}
spin_unlock(&zone->lock);

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
kref_put(&zone->refcnt, reada_zone_release);
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
}

kfree(re);
@@ -579,7 +574,7 @@ static int reada_add_block(struct reada_control *rc, u64 logical,
}

/*
- * called with fs_info->reada_lock held
+ * called with fs_info->reada_array lock held
*/
static void reada_peer_zones_set_lock(struct reada_zone *zone, int lock)
{
@@ -595,7 +590,7 @@ static void reada_peer_zones_set_lock(struct reada_zone *zone, int lock)
}

/*
- * called with fs_info->reada_lock held
+ * called with fs_info->reada_array lock held
*/
static int reada_pick_zone(struct btrfs_device *dev)
{
@@ -649,11 +644,11 @@ static int reada_start_machine_dev(struct btrfs_device *dev)
int ret;
int i;

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
if (dev->reada_curr_zone == NULL) {
ret = reada_pick_zone(dev);
if (!ret) {
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
return 0;
}
}
@@ -667,7 +662,7 @@ static int reada_start_machine_dev(struct btrfs_device *dev)
if (!re || re->logical > dev->reada_curr_zone->end) {
ret = reada_pick_zone(dev);
if (!ret) {
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
return 0;
}
index = dev->reada_next >> PAGE_SHIFT;
@@ -675,13 +670,13 @@ static int reada_start_machine_dev(struct btrfs_device *dev)
XA_PRESENT);
}
if (!re) {
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
return 0;
}
dev->reada_next = re->logical + fs_info->nodesize;
re->refcnt++;

- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);

spin_lock(&re->lock);
if (re->scheduled || list_empty(&re->extctl)) {
@@ -806,7 +801,7 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
int j;
int cnt;

- spin_lock(&fs_info->reada_lock);
+ reada_lock(fs_info);
list_for_each_entry(device, &fs_devices->devices, dev_list) {
struct reada_zone *zone;

@@ -859,11 +854,11 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
index = 0;
cnt = 0;
while (all) {
- struct reada_extent *re = NULL;
+ struct reada_extent *re;

- ret = radix_tree_gang_lookup(&fs_info->reada_tree, (void **)&re,
- index, 1);
- if (ret == 0)
+ re = xa_find(&fs_info->reada_array, &index, ULONG_MAX,
+ XA_PRESENT);
+ if (!re)
break;
if (!re->scheduled) {
index = (re->logical >> PAGE_SHIFT) + 1;
@@ -882,9 +877,9 @@ static void dump_devs(struct btrfs_fs_info *fs_info, int all)
}
}
pr_cont("\n");
- index = (re->logical >> PAGE_SHIFT) + 1;
+ index++;
}
- spin_unlock(&fs_info->reada_lock);
+ reada_unlock(fs_info);
}
#endif

--
2.15.1


2018-01-17 20:33:37

by Matthew Wilcox

Subject: [PATCH v6 85/99] btrfs: Remove unused spinlock

From: Matthew Wilcox <[email protected]>

The reada_lock in struct btrfs_device was only ever initialised, never
actually used. That's just as well, because there is another lock, also
called reada_lock, in btrfs_fs_info that is quite heavily used. Remove
this one.

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/volumes.c | 1 -
fs/btrfs/volumes.h | 1 -
2 files changed, 2 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a25684287501..cba286183ff9 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -244,7 +244,6 @@ static struct btrfs_device *__alloc_device(void)

spin_lock_init(&dev->io_lock);

- spin_lock_init(&dev->reada_lock);
atomic_set(&dev->reada_in_flight, 0);
atomic_set(&dev->dev_stats_ccnt, 0);
btrfs_device_data_ordered_init(dev);
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index ff15208344a7..335fd1590458 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -136,7 +136,6 @@ struct btrfs_device {
struct work_struct rcu_work;

/* readahead state */
- spinlock_t reada_lock;
atomic_t reada_in_flight;
u64 reada_next;
struct reada_zone *reada_curr_zone;
--
2.15.1


2018-01-17 20:33:39

by Matthew Wilcox

Subject: [PATCH v6 83/99] hwspinlock: Convert to XArray

From: Matthew Wilcox <[email protected]>

I had to mess with the locking a bit as I converted the code from a
mutex to the xa_lock.
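
A rough sketch of the resulting pattern, for illustration only (the
array and function names here are invented, not taken from the patch):

    #include <linux/xarray.h>

    static DEFINE_XARRAY(my_xa);

    /* Take the first tagged entry and atomically clear its tag. */
    static void *my_take_first_tagged(void)
    {
            unsigned long index = 0;
            void *obj;

            xa_lock(&my_xa);                /* was mutex_lock() */
            obj = xa_find(&my_xa, &index, ULONG_MAX, XA_TAG_0);
            if (obj)
                    __xa_clear_tag(&my_xa, index, XA_TAG_0);
            xa_unlock(&my_xa);              /* was mutex_unlock() */

            return obj;
    }

The __xa_clear_tag() spelling is the variant for callers already
holding the xa_lock; clearing the tag before dropping the lock stops
another thread from claiming the same entry.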

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/hwspinlock/hwspinlock_core.c | 151 ++++++++++++-----------------------
1 file changed, 52 insertions(+), 99 deletions(-)

diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index 4074441444fe..acb6e315925f 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -23,43 +23,32 @@
#include <linux/types.h>
#include <linux/err.h>
#include <linux/jiffies.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/hwspinlock.h>
#include <linux/pm_runtime.h>
-#include <linux/mutex.h>
#include <linux/of.h>

#include "hwspinlock_internal.h"

-/* radix tree tags */
-#define HWSPINLOCK_UNUSED (0) /* tags an hwspinlock as unused */
+#define HWSPINLOCK_UNUSED XA_TAG_0

/*
- * A radix tree is used to maintain the available hwspinlock instances.
- * The tree associates hwspinlock pointers with their integer key id,
+ * An xarray is used to maintain the available hwspinlock instances.
+ * The array associates hwspinlock pointers with their integer key id,
* and provides easy-to-use API which makes the hwspinlock core code simple
* and easy to read.
*
- * Radix trees are quick on lookups, and reasonably efficient in terms of
+ * The XArray is quick on lookups, and reasonably efficient in terms of
* storage, especially with high density usages such as this framework
* requires (a continuous range of integer keys, beginning with zero, is
* used as the ID's of the hwspinlock instances).
*
- * The radix tree API supports tagging items in the tree, which this
- * framework uses to mark unused hwspinlock instances (see the
- * HWSPINLOCK_UNUSED tag above). As a result, the process of querying the
- * tree, looking for an unused hwspinlock instance, is now reduced to a
- * single radix tree API call.
+ * The xarray API supports tagging items, which this framework uses to mark
+ * unused hwspinlock instances (see the HWSPINLOCK_UNUSED tag above). As a
+ * result, the process of querying the array, looking for an unused
+ * hwspinlock instance, is reduced to a single call.
*/
-static RADIX_TREE(hwspinlock_tree, GFP_KERNEL);
-
-/*
- * Synchronization of access to the tree is achieved using this mutex,
- * as the radix-tree API requires that users provide all synchronisation.
- * A mutex is needed because we're using non-atomic radix tree allocations.
- */
-static DEFINE_MUTEX(hwspinlock_tree_lock);
-
+static DEFINE_XARRAY(hwspinlock_xa);

/**
* __hwspin_trylock() - attempt to lock a specific hwspinlock
@@ -294,10 +283,9 @@ of_hwspin_lock_simple_xlate(const struct of_phandle_args *hwlock_spec)
*/
int of_hwspin_lock_get_id(struct device_node *np, int index)
{
+ XA_STATE(xas, &hwspinlock_xa, 0);
struct of_phandle_args args;
struct hwspinlock *hwlock;
- struct radix_tree_iter iter;
- void **slot;
int id;
int ret;

@@ -309,22 +297,15 @@ int of_hwspin_lock_get_id(struct device_node *np, int index)
/* Find the hwspinlock device: we need its base_id */
ret = -EPROBE_DEFER;
rcu_read_lock();
- radix_tree_for_each_slot(slot, &hwspinlock_tree, &iter, 0) {
- hwlock = radix_tree_deref_slot(slot);
- if (unlikely(!hwlock))
- continue;
- if (radix_tree_deref_retry(hwlock)) {
- slot = radix_tree_iter_retry(&iter);
+ xas_for_each(&xas, hwlock, ULONG_MAX) {
+ if (xas_retry(&xas, hwlock))
continue;
- }

- if (hwlock->bank->dev->of_node == args.np) {
- ret = 0;
+ if (hwlock->bank->dev->of_node == args.np)
break;
- }
}
rcu_read_unlock();
- if (ret < 0)
+ if (!hwlock)
goto out;

id = of_hwspin_lock_simple_xlate(&args);
@@ -332,6 +313,7 @@ int of_hwspin_lock_get_id(struct device_node *np, int index)
ret = -EINVAL;
goto out;
}
+ ret = 0;
id += hwlock->bank->base_id;

out:
@@ -342,26 +324,19 @@ EXPORT_SYMBOL_GPL(of_hwspin_lock_get_id);

static int hwspin_lock_register_single(struct hwspinlock *hwlock, int id)
{
- struct hwspinlock *tmp;
- int ret;
+ void *curr;

- mutex_lock(&hwspinlock_tree_lock);
-
- ret = radix_tree_insert(&hwspinlock_tree, id, hwlock);
- if (ret) {
- if (ret == -EEXIST)
+ curr = xa_cmpxchg(&hwspinlock_xa, id, NULL, hwlock, GFP_KERNEL);
+ if (curr) {
+ if (!xa_is_err(curr))
pr_err("hwspinlock id %d already exists!\n", id);
goto out;
}

/* mark this hwspinlock as available */
- tmp = radix_tree_tag_set(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
-
- /* self-sanity check which should never fail */
- WARN_ON(tmp != hwlock);
+ xa_set_tag(&hwspinlock_xa, id, HWSPINLOCK_UNUSED);

out:
- mutex_unlock(&hwspinlock_tree_lock);
return 0;
}

@@ -370,23 +345,16 @@ static struct hwspinlock *hwspin_lock_unregister_single(unsigned int id)
struct hwspinlock *hwlock = NULL;
int ret;

- mutex_lock(&hwspinlock_tree_lock);
-
/* make sure the hwspinlock is not in use (tag is set) */
- ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
+ ret = xa_get_tag(&hwspinlock_xa, id, HWSPINLOCK_UNUSED);
if (ret == 0) {
pr_err("hwspinlock %d still in use (or not present)\n", id);
goto out;
}

- hwlock = radix_tree_delete(&hwspinlock_tree, id);
- if (!hwlock) {
- pr_err("failed to delete hwspinlock %d\n", id);
- goto out;
- }
+ hwlock = xa_erase(&hwspinlock_xa, id);

out:
- mutex_unlock(&hwspinlock_tree_lock);
return hwlock;
}

@@ -477,8 +445,7 @@ EXPORT_SYMBOL_GPL(hwspin_lock_unregister);
* __hwspin_lock_request() - tag an hwspinlock as used and power it up
*
* This is an internal function that prepares an hwspinlock instance
- * before it is given to the user. The function assumes that
- * hwspinlock_tree_lock is taken.
+ * before it is given to the user.
*
* Returns 0 or positive to indicate success, and a negative value to
* indicate an error (with the appropriate error code)
@@ -486,7 +453,6 @@ EXPORT_SYMBOL_GPL(hwspin_lock_unregister);
static int __hwspin_lock_request(struct hwspinlock *hwlock)
{
struct device *dev = hwlock->bank->dev;
- struct hwspinlock *tmp;
int ret;

/* prevent underlying implementation from being removed */
@@ -501,16 +467,7 @@ static int __hwspin_lock_request(struct hwspinlock *hwlock)
dev_err(dev, "%s: can't power on device\n", __func__);
pm_runtime_put_noidle(dev);
module_put(dev->driver->owner);
- return ret;
}
-
- /* mark hwspinlock as used, should not fail */
- tmp = radix_tree_tag_clear(&hwspinlock_tree, hwlock_to_id(hwlock),
- HWSPINLOCK_UNUSED);
-
- /* self-sanity check that should never fail */
- WARN_ON(tmp != hwlock);
-
return ret;
}

@@ -548,29 +505,28 @@ struct hwspinlock *hwspin_lock_request(void)
{
struct hwspinlock *hwlock;
int ret;
+ unsigned long index = 0;

- mutex_lock(&hwspinlock_tree_lock);
+ xa_lock(&hwspinlock_xa);

/* look for an unused lock */
- ret = radix_tree_gang_lookup_tag(&hwspinlock_tree, (void **)&hwlock,
- 0, 1, HWSPINLOCK_UNUSED);
- if (ret == 0) {
+ hwlock = xa_find(&hwspinlock_xa, &index, ULONG_MAX, HWSPINLOCK_UNUSED);
+ if (!hwlock) {
pr_warn("a free hwspinlock is not available\n");
- hwlock = NULL;
- goto out;
+ xa_unlock(&hwspinlock_xa);
+ return NULL;
}

- /* sanity check that should never fail */
- WARN_ON(ret > 1);
-
/* mark as used and power up */
+ __xa_clear_tag(&hwspinlock_xa, index, HWSPINLOCK_UNUSED);
+ xa_unlock(&hwspinlock_xa);
+
ret = __hwspin_lock_request(hwlock);
- if (ret < 0)
- hwlock = NULL;
+ if (ret == 0)
+ return hwlock;

-out:
- mutex_unlock(&hwspinlock_tree_lock);
- return hwlock;
+ xa_set_tag(&hwspinlock_xa, index, HWSPINLOCK_UNUSED);
+ return NULL;
}
EXPORT_SYMBOL_GPL(hwspin_lock_request);

@@ -592,10 +548,10 @@ struct hwspinlock *hwspin_lock_request_specific(unsigned int id)
struct hwspinlock *hwlock;
int ret;

- mutex_lock(&hwspinlock_tree_lock);
+ xa_lock(&hwspinlock_xa);

/* make sure this hwspinlock exists */
- hwlock = radix_tree_lookup(&hwspinlock_tree, id);
+ hwlock = xa_load(&hwspinlock_xa, id);
if (!hwlock) {
pr_warn("hwspinlock %u does not exist\n", id);
goto out;
@@ -605,21 +561,25 @@ struct hwspinlock *hwspin_lock_request_specific(unsigned int id)
WARN_ON(hwlock_to_id(hwlock) != id);

/* make sure this hwspinlock is unused */
- ret = radix_tree_tag_get(&hwspinlock_tree, id, HWSPINLOCK_UNUSED);
+ ret = xa_get_tag(&hwspinlock_xa, id, HWSPINLOCK_UNUSED);
if (ret == 0) {
pr_warn("hwspinlock %u is already in use\n", id);
- hwlock = NULL;
goto out;
}

/* mark as used and power up */
+ __xa_clear_tag(&hwspinlock_xa, id, HWSPINLOCK_UNUSED);
+ xa_unlock(&hwspinlock_xa);
+
ret = __hwspin_lock_request(hwlock);
- if (ret < 0)
- hwlock = NULL;
+ if (ret == 0)
+ return hwlock;

+ xa_set_tag(&hwspinlock_xa, id, HWSPINLOCK_UNUSED);
+ return NULL;
out:
- mutex_unlock(&hwspinlock_tree_lock);
- return hwlock;
+ xa_unlock(&hwspinlock_xa);
+ return NULL;
}
EXPORT_SYMBOL_GPL(hwspin_lock_request_specific);

@@ -638,7 +598,6 @@ EXPORT_SYMBOL_GPL(hwspin_lock_request_specific);
int hwspin_lock_free(struct hwspinlock *hwlock)
{
struct device *dev;
- struct hwspinlock *tmp;
int ret;

if (!hwlock) {
@@ -647,12 +606,11 @@ int hwspin_lock_free(struct hwspinlock *hwlock)
}

dev = hwlock->bank->dev;
- mutex_lock(&hwspinlock_tree_lock);

/* make sure the hwspinlock is used */
- ret = radix_tree_tag_get(&hwspinlock_tree, hwlock_to_id(hwlock),
+ ret = xa_get_tag(&hwspinlock_xa, hwlock_to_id(hwlock),
HWSPINLOCK_UNUSED);
- if (ret == 1) {
+ if (ret) {
dev_err(dev, "%s: hwlock is already free\n", __func__);
dump_stack();
ret = -EINVAL;
@@ -665,16 +623,11 @@ int hwspin_lock_free(struct hwspinlock *hwlock)
goto out;

/* mark this hwspinlock as available */
- tmp = radix_tree_tag_set(&hwspinlock_tree, hwlock_to_id(hwlock),
- HWSPINLOCK_UNUSED);
-
- /* sanity check (this shouldn't happen) */
- WARN_ON(tmp != hwlock);
+ xa_set_tag(&hwspinlock_xa, hwlock_to_id(hwlock), HWSPINLOCK_UNUSED);

module_put(dev->driver->owner);

out:
- mutex_unlock(&hwspinlock_tree_lock);
return ret;
}
EXPORT_SYMBOL_GPL(hwspin_lock_free);
--
2.15.1


2018-01-17 20:33:47

by Matthew Wilcox

Subject: [PATCH v6 84/99] btrfs: Convert fs_roots_radix to XArray

From: Matthew Wilcox <[email protected]>

Most of the gang lookups being done can be expressed just as efficiently,
and somewhat more naturally, as xa_for_each() loops. I opted not to
change the one in btrfs_cleanup_fs_roots(), as it's using SRCU, which is
subtle and quick to anger.
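
For reference, a hedged sketch of such a loop with the v6 API; the
function names here are illustrative, not from the patch:

    #include <linux/xarray.h>

    static void handle_root(struct btrfs_root *root); /* illustrative */

    /* Visit every present entry; replaces a gang-lookup loop. */
    static void visit_all_roots(struct xarray *roots)
    {
            struct btrfs_root *root;
            unsigned long index = 0;

            xa_for_each(roots, root, index, ULONG_MAX, XA_PRESENT)
                    handle_root(root);
    }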

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/ctree.h | 3 +-
fs/btrfs/disk-io.c | 65 +++++++++++----------------------
fs/btrfs/tests/btrfs-tests.c | 3 +-
fs/btrfs/transaction.c | 87 ++++++++++++++++++--------------------------
4 files changed, 59 insertions(+), 99 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 13c260b525a1..173d72dfaab6 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -741,8 +741,7 @@ struct btrfs_fs_info {
/* the log root tree is a directory of all the other log roots */
struct btrfs_root *log_root_tree;

- spinlock_t fs_roots_radix_lock;
- struct radix_tree_root fs_roots_radix;
+ struct xarray fs_roots;

/* block group cache stuff */
spinlock_t block_group_cache_lock;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index a8ecccfc36de..62995a55d112 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1519,13 +1519,7 @@ int btrfs_init_fs_root(struct btrfs_root *root)
struct btrfs_root *btrfs_lookup_fs_root(struct btrfs_fs_info *fs_info,
u64 root_id)
{
- struct btrfs_root *root;
-
- spin_lock(&fs_info->fs_roots_radix_lock);
- root = radix_tree_lookup(&fs_info->fs_roots_radix,
- (unsigned long)root_id);
- spin_unlock(&fs_info->fs_roots_radix_lock);
- return root;
+ return xa_load(&fs_info->fs_roots, (unsigned long)root_id);
}

int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info,
@@ -1533,18 +1527,13 @@ int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info,
{
int ret;

- ret = radix_tree_preload(GFP_NOFS);
- if (ret)
- return ret;
-
- spin_lock(&fs_info->fs_roots_radix_lock);
- ret = radix_tree_insert(&fs_info->fs_roots_radix,
+ xa_lock(&fs_info->fs_roots);
+ ret = __xa_insert(&fs_info->fs_roots,
(unsigned long)root->root_key.objectid,
- root);
+ root, GFP_NOFS);
if (ret == 0)
set_bit(BTRFS_ROOT_IN_RADIX, &root->state);
- spin_unlock(&fs_info->fs_roots_radix_lock);
- radix_tree_preload_end();
+ xa_unlock(&fs_info->fs_roots);

return ret;
}
@@ -2079,33 +2068,25 @@ static void free_root_pointers(struct btrfs_fs_info *info, int chunk_root)

void btrfs_free_fs_roots(struct btrfs_fs_info *fs_info)
{
- int ret;
- struct btrfs_root *gang[8];
- int i;
+ struct btrfs_root *root;
+ unsigned long i = 0;

while (!list_empty(&fs_info->dead_roots)) {
- gang[0] = list_entry(fs_info->dead_roots.next,
+ root = list_entry(fs_info->dead_roots.next,
struct btrfs_root, root_list);
- list_del(&gang[0]->root_list);
+ list_del(&root->root_list);

- if (test_bit(BTRFS_ROOT_IN_RADIX, &gang[0]->state)) {
- btrfs_drop_and_free_fs_root(fs_info, gang[0]);
+ if (test_bit(BTRFS_ROOT_IN_RADIX, &root->state)) {
+ btrfs_drop_and_free_fs_root(fs_info, root);
} else {
- free_extent_buffer(gang[0]->node);
- free_extent_buffer(gang[0]->commit_root);
- btrfs_put_fs_root(gang[0]);
+ free_extent_buffer(root->node);
+ free_extent_buffer(root->commit_root);
+ btrfs_put_fs_root(root);
}
}

- while (1) {
- ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
- (void **)gang, 0,
- ARRAY_SIZE(gang));
- if (!ret)
- break;
- for (i = 0; i < ret; i++)
- btrfs_drop_and_free_fs_root(fs_info, gang[i]);
- }
+ xa_for_each(&fs_info->fs_roots, root, i, ULONG_MAX, XA_PRESENT)
+ btrfs_drop_and_free_fs_root(fs_info, root);

if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
btrfs_free_log_root_tree(NULL, fs_info);
@@ -2447,7 +2428,7 @@ int open_ctree(struct super_block *sb,
goto fail_delalloc_bytes;
}

- INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
+ xa_init(&fs_info->fs_roots);
INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
INIT_LIST_HEAD(&fs_info->trans_list);
INIT_LIST_HEAD(&fs_info->dead_roots);
@@ -2456,7 +2437,6 @@ int open_ctree(struct super_block *sb,
INIT_LIST_HEAD(&fs_info->caching_block_groups);
spin_lock_init(&fs_info->delalloc_root_lock);
spin_lock_init(&fs_info->trans_lock);
- spin_lock_init(&fs_info->fs_roots_radix_lock);
spin_lock_init(&fs_info->delayed_iput_lock);
spin_lock_init(&fs_info->defrag_inodes_lock);
spin_lock_init(&fs_info->tree_mod_seq_lock);
@@ -3573,10 +3553,7 @@ int write_all_supers(struct btrfs_fs_info *fs_info, int max_mirrors)
void btrfs_drop_and_free_fs_root(struct btrfs_fs_info *fs_info,
struct btrfs_root *root)
{
- spin_lock(&fs_info->fs_roots_radix_lock);
- radix_tree_delete(&fs_info->fs_roots_radix,
- (unsigned long)root->root_key.objectid);
- spin_unlock(&fs_info->fs_roots_radix_lock);
+ xa_erase(&fs_info->fs_roots, (unsigned long)root->root_key.objectid);

if (btrfs_root_refs(&root->root_item) == 0)
synchronize_srcu(&fs_info->subvol_srcu);
@@ -3632,9 +3609,9 @@ int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info)

while (1) {
index = srcu_read_lock(&fs_info->subvol_srcu);
- ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
- (void **)gang, root_objectid,
- ARRAY_SIZE(gang));
+ ret = xa_extract(&fs_info->fs_roots, (void **)gang,
+ root_objectid, ULONG_MAX, ARRAY_SIZE(gang),
+ XA_PRESENT);
if (!ret) {
srcu_read_unlock(&fs_info->subvol_srcu, index);
break;
diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
index d3f25376a0f8..570bce31a301 100644
--- a/fs/btrfs/tests/btrfs-tests.c
+++ b/fs/btrfs/tests/btrfs-tests.c
@@ -114,7 +114,6 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
spin_lock_init(&fs_info->qgroup_lock);
spin_lock_init(&fs_info->qgroup_op_lock);
spin_lock_init(&fs_info->super_lock);
- spin_lock_init(&fs_info->fs_roots_radix_lock);
spin_lock_init(&fs_info->tree_mod_seq_lock);
mutex_init(&fs_info->qgroup_ioctl_lock);
mutex_init(&fs_info->qgroup_rescan_lock);
@@ -127,7 +126,7 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
INIT_LIST_HEAD(&fs_info->dead_roots);
INIT_LIST_HEAD(&fs_info->tree_mod_seq_list);
INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
- INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
+ xa_init(&fs_info->fs_roots);
extent_io_tree_init(&fs_info->freed_extents[0], NULL);
extent_io_tree_init(&fs_info->freed_extents[1], NULL);
fs_info->pinned_extents = &fs_info->freed_extents[0];
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 5a8c2649af2f..2d6606df0fa3 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -33,7 +33,7 @@
#include "dev-replace.h"
#include "qgroup.h"

-#define BTRFS_ROOT_TRANS_TAG 0
+#define BTRFS_ROOT_TRANS_TAG XA_TAG_0

static const unsigned int btrfs_blocked_trans_types[TRANS_STATE_MAX] = {
[TRANS_STATE_RUNNING] = 0U,
@@ -333,15 +333,15 @@ static int record_root_in_trans(struct btrfs_trans_handle *trans,
*/
smp_wmb();

- spin_lock(&fs_info->fs_roots_radix_lock);
+ xa_lock(&fs_info->fs_roots);
if (root->last_trans == trans->transid && !force) {
- spin_unlock(&fs_info->fs_roots_radix_lock);
+ xa_unlock(&fs_info->fs_roots);
return 0;
}
- radix_tree_tag_set(&fs_info->fs_roots_radix,
+ __xa_set_tag(&fs_info->fs_roots,
(unsigned long)root->root_key.objectid,
BTRFS_ROOT_TRANS_TAG);
- spin_unlock(&fs_info->fs_roots_radix_lock);
+ xa_unlock(&fs_info->fs_roots);
root->last_trans = trans->transid;

/* this is pretty tricky. We don't want to
@@ -383,11 +383,8 @@ void btrfs_add_dropped_root(struct btrfs_trans_handle *trans,
spin_unlock(&cur_trans->dropped_roots_lock);

/* Make sure we don't try to update the root at commit time */
- spin_lock(&fs_info->fs_roots_radix_lock);
- radix_tree_tag_clear(&fs_info->fs_roots_radix,
- (unsigned long)root->root_key.objectid,
+ xa_clear_tag(&fs_info->fs_roots, (unsigned long)root->root_key.objectid,
BTRFS_ROOT_TRANS_TAG);
- spin_unlock(&fs_info->fs_roots_radix_lock);
}

int btrfs_record_root_in_trans(struct btrfs_trans_handle *trans,
@@ -1255,53 +1252,41 @@ void btrfs_add_dead_root(struct btrfs_root *root)
static noinline int commit_fs_roots(struct btrfs_trans_handle *trans,
struct btrfs_fs_info *fs_info)
{
- struct btrfs_root *gang[8];
- int i;
- int ret;
+ struct btrfs_root *root;
+ unsigned long index = 0;
int err = 0;

- spin_lock(&fs_info->fs_roots_radix_lock);
- while (1) {
- ret = radix_tree_gang_lookup_tag(&fs_info->fs_roots_radix,
- (void **)gang, 0,
- ARRAY_SIZE(gang),
- BTRFS_ROOT_TRANS_TAG);
- if (ret == 0)
- break;
- for (i = 0; i < ret; i++) {
- struct btrfs_root *root = gang[i];
- radix_tree_tag_clear(&fs_info->fs_roots_radix,
- (unsigned long)root->root_key.objectid,
- BTRFS_ROOT_TRANS_TAG);
- spin_unlock(&fs_info->fs_roots_radix_lock);
-
- btrfs_free_log(trans, root);
- btrfs_update_reloc_root(trans, root);
- btrfs_orphan_commit_root(trans, root);
-
- btrfs_save_ino_cache(root, trans);
-
- /* see comments in should_cow_block() */
- clear_bit(BTRFS_ROOT_FORCE_COW, &root->state);
- smp_mb__after_atomic();
-
- if (root->commit_root != root->node) {
- list_add_tail(&root->dirty_list,
- &trans->transaction->switch_commits);
- btrfs_set_root_node(&root->root_item,
- root->node);
- }
+ xa_lock(&fs_info->fs_roots);
+ xa_for_each(&fs_info->fs_roots, root, index, ULONG_MAX,
+ BTRFS_ROOT_TRANS_TAG) {
+ __xa_clear_tag(&fs_info->fs_roots, index, BTRFS_ROOT_TRANS_TAG);
+ xa_unlock(&fs_info->fs_roots);

- err = btrfs_update_root(trans, fs_info->tree_root,
- &root->root_key,
- &root->root_item);
- spin_lock(&fs_info->fs_roots_radix_lock);
- if (err)
- break;
- btrfs_qgroup_free_meta_all(root);
+ btrfs_free_log(trans, root);
+ btrfs_update_reloc_root(trans, root);
+ btrfs_orphan_commit_root(trans, root);
+
+ btrfs_save_ino_cache(root, trans);
+
+ /* see comments in should_cow_block() */
+ clear_bit(BTRFS_ROOT_FORCE_COW, &root->state);
+ smp_mb__after_atomic();
+
+ if (root->commit_root != root->node) {
+ list_add_tail(&root->dirty_list,
+ &trans->transaction->switch_commits);
+ btrfs_set_root_node(&root->root_item, root->node);
}
+
+ err = btrfs_update_root(trans, fs_info->tree_root,
+ &root->root_key, &root->root_item);
+ xa_lock(&fs_info->fs_roots);
+ if (err)
+ break;
+ btrfs_qgroup_free_meta_all(root);
+ index = 0;
}
- spin_unlock(&fs_info->fs_roots_radix_lock);
+ xa_unlock(&fs_info->fs_roots);
return err;
}

--
2.15.1


2018-01-17 20:33:54

by Matthew Wilcox

Subject: [PATCH v6 28/99] page cache: Remove stray radix comment

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index d2a0031d61f5..2536fcacb5bc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2606,7 +2606,7 @@ static struct page *do_read_cache_page(struct address_space *mapping,
put_page(page);
if (err == -EEXIST)
goto repeat;
- /* Presumably ENOMEM for radix tree node */
+ /* Presumably ENOMEM for xarray node */
return ERR_PTR(err);
}

--
2.15.1


2018-01-17 20:34:04

by Matthew Wilcox

Subject: [PATCH v6 78/99] sh: intc: Convert to XArray

From: Matthew Wilcox <[email protected]>

The radix tree was protected by a raw spinlock. I believe that was not
necessary, and the XArray's internal regular spinlock will be adequate
for this array.
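
A minimal sketch of why the external lock can go away (names invented):

    #include <linux/xarray.h>

    static DEFINE_XARRAY(example_xa);

    static void register_entry(unsigned long id, void *entry)
    {
            /* xa_store() takes the array's own spinlock internally;
             * GFP_ATOMIC since callers may have interrupts off */
            xa_store(&example_xa, id, entry, GFP_ATOMIC);
    }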

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/sh/intc/core.c | 9 ++----
drivers/sh/intc/internals.h | 5 ++--
drivers/sh/intc/virq.c | 72 +++++++++++++--------------------------------
3 files changed, 25 insertions(+), 61 deletions(-)

diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
index 8e72bcbd3d6d..356a423d9dcb 100644
--- a/drivers/sh/intc/core.c
+++ b/drivers/sh/intc/core.c
@@ -30,7 +30,6 @@
#include <linux/syscore_ops.h>
#include <linux/list.h>
#include <linux/spinlock.h>
-#include <linux/radix-tree.h>
#include <linux/export.h>
#include <linux/sort.h>
#include "internals.h"
@@ -78,11 +77,8 @@ static void __init intc_register_irq(struct intc_desc *desc,
struct intc_handle_int *hp;
struct irq_data *irq_data;
unsigned int data[2], primary;
- unsigned long flags;

- raw_spin_lock_irqsave(&intc_big_lock, flags);
- radix_tree_insert(&d->tree, enum_id, intc_irq_xlate_get(irq));
- raw_spin_unlock_irqrestore(&intc_big_lock, flags);
+ xa_store(&d->array, enum_id, intc_irq_xlate_get(irq), GFP_ATOMIC);

/*
* Prefer single interrupt source bitmap over other combinations:
@@ -196,8 +192,7 @@ int __init register_intc_controller(struct intc_desc *desc)
INIT_LIST_HEAD(&d->list);
list_add_tail(&d->list, &intc_list);

- raw_spin_lock_init(&d->lock);
- INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
+ xa_init(&d->array);

d->index = nr_intc_controllers;

diff --git a/drivers/sh/intc/internals.h b/drivers/sh/intc/internals.h
index fa73c173b56a..9b6fd07e99a6 100644
--- a/drivers/sh/intc/internals.h
+++ b/drivers/sh/intc/internals.h
@@ -5,7 +5,7 @@
#include <linux/list.h>
#include <linux/kernel.h>
#include <linux/types.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/device.h>

#define _INTC_MK(fn, mode, addr_e, addr_d, width, shift) \
@@ -54,8 +54,7 @@ struct intc_subgroup_entry {
struct intc_desc_int {
struct list_head list;
struct device dev;
- struct radix_tree_root tree;
- raw_spinlock_t lock;
+ struct xarray array;
unsigned int index;
unsigned long *reg;
#ifdef CONFIG_SMP
diff --git a/drivers/sh/intc/virq.c b/drivers/sh/intc/virq.c
index a638c3048207..801c9c8b7556 100644
--- a/drivers/sh/intc/virq.c
+++ b/drivers/sh/intc/virq.c
@@ -12,7 +12,6 @@
#include <linux/slab.h>
#include <linux/irq.h>
#include <linux/list.h>
-#include <linux/radix-tree.h>
#include <linux/spinlock.h>
#include <linux/export.h>
#include "internals.h"
@@ -27,10 +26,7 @@ struct intc_virq_list {
#define for_each_virq(entry, head) \
for (entry = head; entry; entry = entry->next)

-/*
- * Tags for the radix tree
- */
-#define INTC_TAG_VIRQ_NEEDS_ALLOC 0
+#define INTC_TAG_VIRQ_NEEDS_ALLOC XA_TAG_0

void intc_irq_xlate_set(unsigned int irq, intc_enum id, struct intc_desc_int *d)
{
@@ -54,23 +50,18 @@ int intc_irq_lookup(const char *chipname, intc_enum enum_id)
int irq = -1;

list_for_each_entry(d, &intc_list, list) {
- int tagged;
-
if (strcmp(d->chip.name, chipname) != 0)
continue;

/*
* Catch early lookups for subgroup VIRQs that have not
- * yet been allocated an IRQ. This already includes a
- * fast-path out if the tree is untagged, so there is no
- * need to explicitly test the root tree.
+ * yet been allocated an IRQ.
*/
- tagged = radix_tree_tag_get(&d->tree, enum_id,
- INTC_TAG_VIRQ_NEEDS_ALLOC);
- if (unlikely(tagged))
+ if (unlikely(xa_get_tag(&d->array, enum_id,
+ INTC_TAG_VIRQ_NEEDS_ALLOC)))
break;

- ptr = radix_tree_lookup(&d->tree, enum_id);
+ ptr = xa_load(&d->array, enum_id);
if (ptr) {
irq = ptr - intc_irq_xlate;
break;
@@ -148,22 +139,16 @@ static void __init intc_subgroup_init_one(struct intc_desc *desc,
{
struct intc_map_entry *mapped;
unsigned int pirq;
- unsigned long flags;
int i;

- mapped = radix_tree_lookup(&d->tree, subgroup->parent_id);
- if (!mapped) {
- WARN_ON(1);
+ mapped = xa_load(&d->array, subgroup->parent_id);
+ if (WARN_ON(!mapped))
return;
- }

pirq = mapped - intc_irq_xlate;

- raw_spin_lock_irqsave(&d->lock, flags);
-
for (i = 0; i < ARRAY_SIZE(subgroup->enum_ids); i++) {
struct intc_subgroup_entry *entry;
- int err;

if (!subgroup->enum_ids[i])
continue;
@@ -176,15 +161,14 @@ static void __init intc_subgroup_init_one(struct intc_desc *desc,
entry->enum_id = subgroup->enum_ids[i];
entry->handle = intc_subgroup_data(subgroup, d, i);

- err = radix_tree_insert(&d->tree, entry->enum_id, entry);
- if (unlikely(err < 0))
+ if (xa_err(xa_store(&d->array, entry->enum_id, entry,
+ GFP_NOWAIT))) {
+ kfree(entry);
break;
-
- radix_tree_tag_set(&d->tree, entry->enum_id,
+ }
+ xa_set_tag(&d->array, entry->enum_id,
INTC_TAG_VIRQ_NEEDS_ALLOC);
}
-
- raw_spin_unlock_irqrestore(&d->lock, flags);
}

void __init intc_subgroup_init(struct intc_desc *desc, struct intc_desc_int *d)
@@ -201,28 +185,16 @@ void __init intc_subgroup_init(struct intc_desc *desc, struct intc_desc_int *d)
static void __init intc_subgroup_map(struct intc_desc_int *d)
{
struct intc_subgroup_entry *entries[32];
- unsigned long flags;
unsigned int nr_found;
int i;

- raw_spin_lock_irqsave(&d->lock, flags);
-
-restart:
- nr_found = radix_tree_gang_lookup_tag_slot(&d->tree,
- (void ***)entries, 0, ARRAY_SIZE(entries),
- INTC_TAG_VIRQ_NEEDS_ALLOC);
+ nr_found = xa_extract(&d->array, (void **)entries, 0, ULONG_MAX,
+ ARRAY_SIZE(entries), INTC_TAG_VIRQ_NEEDS_ALLOC);

for (i = 0; i < nr_found; i++) {
- struct intc_subgroup_entry *entry;
- int irq;
+ struct intc_subgroup_entry *entry = entries[i];
+ int irq = irq_alloc_desc(numa_node_id());

- entry = radix_tree_deref_slot((void **)entries[i]);
- if (unlikely(!entry))
- continue;
- if (radix_tree_deref_retry(entry))
- goto restart;
-
- irq = irq_alloc_desc(numa_node_id());
if (unlikely(irq < 0)) {
pr_err("no more free IRQs, bailing..\n");
break;
@@ -250,13 +222,11 @@ static void __init intc_subgroup_map(struct intc_desc_int *d)
add_virq_to_pirq(entry->pirq, irq);
irq_set_chained_handler(entry->pirq, intc_virq_handler);

- radix_tree_tag_clear(&d->tree, entry->enum_id,
- INTC_TAG_VIRQ_NEEDS_ALLOC);
- radix_tree_replace_slot(&d->tree, (void **)entries[i],
- &intc_irq_xlate[irq]);
+ xa_store(&d->array, entry->enum_id, &intc_irq_xlate[irq],
+ GFP_NOWAIT);
+ xa_clear_tag(&d->array, entry->enum_id,
+ INTC_TAG_VIRQ_NEEDS_ALLOC);
}
-
- raw_spin_unlock_irqrestore(&d->lock, flags);
}

void __init intc_finalize(void)
@@ -264,6 +234,6 @@ void __init intc_finalize(void)
struct intc_desc_int *d;

list_for_each_entry(d, &intc_list, list)
- if (radix_tree_tagged(&d->tree, INTC_TAG_VIRQ_NEEDS_ALLOC))
+ if (xa_tagged(&d->array, INTC_TAG_VIRQ_NEEDS_ALLOC))
intc_subgroup_map(d);
}
--
2.15.1


2018-01-17 20:34:57

by Matthew Wilcox

Subject: [PATCH v6 79/99] blk-cgroup: Convert to XArray

From: Matthew Wilcox <[email protected]>

This call to radix_tree_preload is awkward. At the point of allocation,
we're under not only a local lock, but also under the queue lock. So we
can't back out, drop the lock and retry the allocation. Replace this
preload call with a call to xa_reserve(), which ensures the memory is
allocated before the locks are taken.
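
A minimal sketch of the reserve-then-store shape, assuming the v6 API
(the function name is invented):

    #include <linux/xarray.h>

    /* Reserve outside the locks; the later store cannot hit ENOMEM. */
    static int install_blkg(struct xarray *array, unsigned long id,
                            void *blkg)
    {
            int err = xa_reserve(array, id, GFP_KERNEL); /* may sleep */

            if (err)
                    return err;

            xa_lock(array);
            /* the slot was reserved above, so GFP_NOWAIT is safe */
            err = xa_err(__xa_store(array, id, blkg, GFP_NOWAIT));
            xa_unlock(array);
            return err;
    }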

Signed-off-by: Matthew Wilcox <[email protected]>
---
block/bfq-cgroup.c | 4 ++--
block/blk-cgroup.c | 52 ++++++++++++++++++++++------------------------
block/cfq-iosched.c | 4 ++--
include/linux/blk-cgroup.h | 5 ++---
4 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index da1525ec4c87..0648aaa6498b 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -860,7 +860,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
return ret;

ret = 0;
- spin_lock_irq(&blkcg->lock);
+ xa_lock_irq(&blkcg->blkg_array);
bfqgd->weight = (unsigned short)val;
hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
struct bfq_group *bfqg = blkg_to_bfqg(blkg);
@@ -894,7 +894,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
bfqg->entity.prio_changed = 1;
}
}
- spin_unlock_irq(&blkcg->lock);
+ xa_unlock_irq(&blkcg->blkg_array);

return ret;
}
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 4117524ca45b..37962d52f1a8 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -146,12 +146,12 @@ struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg,
struct blkcg_gq *blkg;

/*
- * Hint didn't match. Look up from the radix tree. Note that the
+ * Hint didn't match. Fetch from the xarray. Note that the
* hint can only be updated under queue_lock as otherwise @blkg
- * could have already been removed from blkg_tree. The caller is
+ * could have already been removed from blkg_array. The caller is
* responsible for grabbing queue_lock if @update_hint.
*/
- blkg = radix_tree_lookup(&blkcg->blkg_tree, q->id);
+ blkg = xa_load(&blkcg->blkg_array, q->id);
if (blkg && blkg->q == q) {
if (update_hint) {
lockdep_assert_held(q->queue_lock);
@@ -223,8 +223,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
}

/* insert */
- spin_lock(&blkcg->lock);
- ret = radix_tree_insert(&blkcg->blkg_tree, q->id, blkg);
+ xa_lock(&blkcg->blkg_array);
+ ret = xa_err(__xa_store(&blkcg->blkg_array, q->id, blkg, GFP_NOWAIT));
if (likely(!ret)) {
hlist_add_head_rcu(&blkg->blkcg_node, &blkcg->blkg_list);
list_add(&blkg->q_node, &q->blkg_list);
@@ -237,7 +237,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
}
}
blkg->online = true;
- spin_unlock(&blkcg->lock);
+ xa_unlock(&blkcg->blkg_array);

if (!ret)
return blkg;
@@ -314,7 +314,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
int i;

lockdep_assert_held(blkg->q->queue_lock);
- lockdep_assert_held(&blkcg->lock);
+ lockdep_assert_held(&blkcg->blkg_array.xa_lock);

/* Something wrong if we are trying to remove same group twice */
WARN_ON_ONCE(list_empty(&blkg->q_node));
@@ -334,7 +334,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)

blkg->online = false;

- radix_tree_delete(&blkcg->blkg_tree, blkg->q->id);
+ xa_erase(&blkcg->blkg_array, blkg->q->id);
list_del_init(&blkg->q_node);
hlist_del_init_rcu(&blkg->blkcg_node);

@@ -368,9 +368,9 @@ static void blkg_destroy_all(struct request_queue *q)
list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
struct blkcg *blkcg = blkg->blkcg;

- spin_lock(&blkcg->lock);
+ xa_lock(&blkcg->blkg_array);
blkg_destroy(blkg);
- spin_unlock(&blkcg->lock);
+ xa_unlock(&blkcg->blkg_array);
}

q->root_blkg = NULL;
@@ -443,7 +443,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
int i;

mutex_lock(&blkcg_pol_mutex);
- spin_lock_irq(&blkcg->lock);
+ xa_lock_irq(&blkcg->blkg_array);

/*
* Note that stat reset is racy - it doesn't synchronize against
@@ -462,7 +462,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
}
}

- spin_unlock_irq(&blkcg->lock);
+ xa_unlock_irq(&blkcg->blkg_array);
mutex_unlock(&blkcg_pol_mutex);
return 0;
}
@@ -1012,7 +1012,7 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
{
struct blkcg *blkcg = css_to_blkcg(css);

- spin_lock_irq(&blkcg->lock);
+ xa_lock_irq(&blkcg->blkg_array);

while (!hlist_empty(&blkcg->blkg_list)) {
struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
@@ -1023,13 +1023,13 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css)
blkg_destroy(blkg);
spin_unlock(q->queue_lock);
} else {
- spin_unlock_irq(&blkcg->lock);
+ xa_unlock_irq(&blkcg->blkg_array);
cpu_relax();
- spin_lock_irq(&blkcg->lock);
+ xa_lock_irq(&blkcg->blkg_array);
}
}

- spin_unlock_irq(&blkcg->lock);
+ xa_unlock_irq(&blkcg->blkg_array);

wb_blkcg_offline(blkcg);
}
@@ -1096,8 +1096,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
pol->cpd_init_fn(cpd);
}

- spin_lock_init(&blkcg->lock);
- INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN);
+ xa_init_flags(&blkcg->blkg_array, XA_FLAGS_LOCK_IRQ);
INIT_HLIST_HEAD(&blkcg->blkg_list);
#ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&blkcg->cgwb_list);
@@ -1132,14 +1131,14 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
int blkcg_init_queue(struct request_queue *q)
{
struct blkcg_gq *new_blkg, *blkg;
- bool preloaded;
int ret;

new_blkg = blkg_alloc(&blkcg_root, q, GFP_KERNEL);
if (!new_blkg)
return -ENOMEM;

- preloaded = !radix_tree_preload(GFP_KERNEL);
+ if (xa_reserve(&blkcg_root.blkg_array, q->id, GFP_KERNEL)) {
+ blkg_free(new_blkg);
+ return -ENOMEM;
+ }

/*
* Make sure the root blkg exists and count the existing blkgs. As
@@ -1152,11 +1151,10 @@ int blkcg_init_queue(struct request_queue *q)
spin_unlock_irq(q->queue_lock);
rcu_read_unlock();

- if (preloaded)
- radix_tree_preload_end();
-
- if (IS_ERR(blkg))
+ if (IS_ERR(blkg)) {
+ xa_erase(&blkcg_root.blkg_array, q->id);
return PTR_ERR(blkg);
+ }

q->root_blkg = blkg;
q->root_rl.blkg = blkg;
@@ -1374,8 +1372,8 @@ void blkcg_deactivate_policy(struct request_queue *q,
__clear_bit(pol->plid, q->blkcg_pols);

list_for_each_entry(blkg, &q->blkg_list, q_node) {
- /* grab blkcg lock too while removing @pd from @blkg */
- spin_lock(&blkg->blkcg->lock);
+ /* grab xa_lock too while removing @pd from @blkg */
+ xa_lock(&blkg->blkcg->blkg_array);

if (blkg->pd[pol->plid]) {
if (pol->pd_offline_fn)
@@ -1384,7 +1382,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
blkg->pd[pol->plid] = NULL;
}

- spin_unlock(&blkg->blkcg->lock);
+ xa_unlock(&blkg->blkcg->blkg_array);
}

spin_unlock_irq(q->queue_lock);
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 9f342ef1ad42..a51bef7af8df 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1827,7 +1827,7 @@ static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val,
if (val < min || val > max)
return -ERANGE;

- spin_lock_irq(&blkcg->lock);
+ xa_lock_irq(&blkcg->blkg_array);
cfqgd = blkcg_to_cfqgd(blkcg);
if (!cfqgd) {
ret = -EINVAL;
@@ -1859,7 +1859,7 @@ static int __cfq_set_weight(struct cgroup_subsys_state *css, u64 val,
}

out:
- spin_unlock_irq(&blkcg->lock);
+ xa_unlock_irq(&blkcg->blkg_array);
return ret;
}

diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index e9825ff57b15..6278c49d3997 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -17,7 +17,7 @@
#include <linux/cgroup.h>
#include <linux/percpu_counter.h>
#include <linux/seq_file.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/blkdev.h>
#include <linux/atomic.h>
#include <linux/kthread.h>
@@ -44,9 +44,8 @@ struct blkcg_gq;

struct blkcg {
struct cgroup_subsys_state css;
- spinlock_t lock;

- struct radix_tree_root blkg_tree;
+ struct xarray blkg_array;
struct blkcg_gq __rcu *blkg_hint;
struct hlist_head blkg_list;

--
2.15.1


2018-01-17 20:35:05

by Matthew Wilcox

Subject: [PATCH v6 76/99] irqdomain: Convert to XArray

From: Matthew Wilcox <[email protected]>

In a non-critical path, irqdomain wants to know how many entries are
stored in the xarray, so add xa_count(). This is a pretty straightforward
conversion; mostly just removing now-redundant locking. The only thing
of note is just how much simpler irq_domain_fix_revmap() becomes.
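
The resulting reverse-map lookup, condensed for illustration (this
mirrors irq_find_mapping() in the diff below; the function name here is
made up):

    /* Small hwirqs use the linear array; the rest fall back to the
     * XArray, which is safe to read under RCU without extra locking.
     */
    static unsigned int revmap_lookup(struct irq_domain *domain,
                                      irq_hw_number_t hwirq)
    {
            struct irq_data *data;

            if (hwirq < domain->revmap_size)
                    return domain->linear_revmap[hwirq];

            data = xa_load(&domain->revmap_array, hwirq);
            return data ? data->irq : 0;
    }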

Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
---
include/linux/irqdomain.h | 10 ++++------
include/linux/xarray.h | 1 +
kernel/irq/irqdomain.c | 39 ++++++++++-----------------------------
lib/xarray.c | 25 +++++++++++++++++++++++++
4 files changed, 40 insertions(+), 35 deletions(-)

diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index 48c7e86bb556..6c69d9141709 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -33,8 +33,7 @@
#include <linux/types.h>
#include <linux/irqhandler.h>
#include <linux/of.h>
-#include <linux/mutex.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>

struct device_node;
struct irq_domain;
@@ -151,7 +150,7 @@ struct irq_domain_chip_generic;
* @revmap_direct_max_irq: The largest hwirq that can be set for controllers that
* support direct mapping
* @revmap_size: Size of the linear map table @linear_revmap[]
- * @revmap_tree: Radix map tree for hwirqs that don't fit in the linear map
+ * @revmap_array: hwirqs that don't fit in the linear map
* @linear_revmap: Linear table of hwirq->virq reverse mappings
*/
struct irq_domain {
@@ -177,8 +176,7 @@ struct irq_domain {
irq_hw_number_t hwirq_max;
unsigned int revmap_direct_max_irq;
unsigned int revmap_size;
- struct radix_tree_root revmap_tree;
- struct mutex revmap_tree_mutex;
+ struct xarray revmap_array;
unsigned int linear_revmap[];
};

@@ -378,7 +376,7 @@ extern void irq_dispose_mapping(unsigned int virq);
* This is a fast path alternative to irq_find_mapping() that can be
* called directly by irq controller code to save a handful of
* instructions. It is always safe to call, but won't find irqs mapped
- * using the radix tree.
+ * using the xarray.
*/
static inline unsigned int irq_linear_revmap(struct irq_domain *domain,
irq_hw_number_t hwirq)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index c3f7405c5517..892288fe9595 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -269,6 +269,7 @@ void *xa_find_after(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
unsigned long max, unsigned int n, xa_tag_t);
+unsigned long xa_count(struct xarray *);
void xa_destroy(struct xarray *);

/**
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 62068ad46930..d6da3a8eadd2 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -114,7 +114,7 @@ EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
/**
* __irq_domain_add() - Allocate a new irq_domain data structure
* @fwnode: firmware node for the interrupt controller
- * @size: Size of linear map; 0 for radix mapping only
+ * @size: Size of linear map; 0 for xarray mapping only
* @hwirq_max: Maximum number of interrupts supported by controller
* @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
* direct mapping
@@ -209,8 +209,7 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
of_node_get(of_node);

/* Fill structure */
- INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
- mutex_init(&domain->revmap_tree_mutex);
+ xa_init(&domain->revmap_array);
domain->ops = ops;
domain->host_data = host_data;
domain->hwirq_max = hwirq_max;
@@ -241,7 +240,7 @@ void irq_domain_remove(struct irq_domain *domain)
mutex_lock(&irq_domain_mutex);
debugfs_remove_domain_dir(domain);

- WARN_ON(!radix_tree_empty(&domain->revmap_tree));
+ WARN_ON(!xa_empty(&domain->revmap_array));

list_del(&domain->link);

@@ -462,9 +461,7 @@ static void irq_domain_clear_mapping(struct irq_domain *domain,
if (hwirq < domain->revmap_size) {
domain->linear_revmap[hwirq] = 0;
} else {
- mutex_lock(&domain->revmap_tree_mutex);
- radix_tree_delete(&domain->revmap_tree, hwirq);
- mutex_unlock(&domain->revmap_tree_mutex);
+ xa_erase(&domain->revmap_array, hwirq);
}
}

@@ -475,9 +472,7 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
if (hwirq < domain->revmap_size) {
domain->linear_revmap[hwirq] = irq_data->irq;
} else {
- mutex_lock(&domain->revmap_tree_mutex);
- radix_tree_insert(&domain->revmap_tree, hwirq, irq_data);
- mutex_unlock(&domain->revmap_tree_mutex);
+ xa_store(&domain->revmap_array, hwirq, irq_data, GFP_KERNEL);
}
}

@@ -585,7 +580,7 @@ EXPORT_SYMBOL_GPL(irq_domain_associate_many);
* This routine is used for irq controllers which can choose the hardware
* interrupt numbers they generate. In such a case it's simplest to use
* the linux irq as the hardware interrupt number. It still uses the linear
- * or radix tree to store the mapping, but the irq controller can optimize
+ * or xarray to store the mapping, but the irq controller can optimize
* the revmap path by using the hwirq directly.
*/
unsigned int irq_create_direct_mapping(struct irq_domain *domain)
@@ -890,9 +885,7 @@ unsigned int irq_find_mapping(struct irq_domain *domain,
if (hwirq < domain->revmap_size)
return domain->linear_revmap[hwirq];

- rcu_read_lock();
- data = radix_tree_lookup(&domain->revmap_tree, hwirq);
- rcu_read_unlock();
+ data = xa_load(&domain->revmap_array, hwirq);
return data ? data->irq : 0;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);
@@ -943,8 +936,6 @@ static int virq_debug_show(struct seq_file *m, void *private)
unsigned long flags;
struct irq_desc *desc;
struct irq_domain *domain;
- struct radix_tree_iter iter;
- void __rcu **slot;
int i;

seq_printf(m, " %-16s %-6s %-10s %-10s %s\n",
@@ -953,7 +944,6 @@ static int virq_debug_show(struct seq_file *m, void *private)
list_for_each_entry(domain, &irq_domain_list, link) {
struct device_node *of_node;
const char *name;
-
int count = 0;

of_node = irq_domain_get_of_node(domain);
@@ -965,8 +955,7 @@ static int virq_debug_show(struct seq_file *m, void *private)
else
name = "";

- radix_tree_for_each_slot(slot, &domain->revmap_tree, &iter, 0)
- count++;
+ count = xa_count(&domain->revmap_array);
seq_printf(m, "%c%-16s %6u %10u %10u %s\n",
domain == irq_default_domain ? '*' : ' ', domain->name,
domain->revmap_size + count, domain->revmap_size,
@@ -1452,17 +1441,9 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
/* The irq_data was moved, fix the revmap to refer to the new location */
static void irq_domain_fix_revmap(struct irq_data *d)
{
- void __rcu **slot;
-
if (d->hwirq < d->domain->revmap_size)
- return; /* Not using radix tree. */
-
- /* Fix up the revmap. */
- mutex_lock(&d->domain->revmap_tree_mutex);
- slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
- if (slot)
- radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
- mutex_unlock(&d->domain->revmap_tree_mutex);
+ return;
+ xa_store(&d->domain->revmap_array, d->hwirq, d, GFP_KERNEL);
}

/**
diff --git a/lib/xarray.c b/lib/xarray.c
index b4dec8e2d202..62642e5508ee 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1625,6 +1625,31 @@ unsigned int xa_extract(struct xarray *xa, void **dst, unsigned long start,
}
EXPORT_SYMBOL(xa_extract);

+/**
+ * xa_count() - Count the number of present entries in the XArray
+ * @xa: XArray.
+ *
+ * This function walks the XArray counting how many entries are present.
+ * If an entry is present at every possible index, the count wraps around
+ * to 0; if that is a theoretical possibility, check xa_empty() first.
+ *
+ * This is a naive implementation; faster implementations are possible.
+ * If speed is important, consider maintaining a count variable in your
+ * own data structure.
+ */
+unsigned long xa_count(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *p;
+ unsigned long count = 0;
+
+ xas_for_each(&xas, p, ULONG_MAX)
+ count++;
+
+ return count;
+}
+EXPORT_SYMBOL(xa_count);
+
/**
* xa_destroy() - Free all internal data structures.
* @xa: XArray.
--
2.15.1


2018-01-17 20:35:28

by Matthew Wilcox

Subject: [PATCH v6 77/99] fscache: Convert to XArray

From: Matthew Wilcox <[email protected]>

Removes another user of radix_tree_preload().
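
A minimal sketch of the xas_nomem() retry idiom that replaces the
preload, assuming the v6 advanced API (identifiers are illustrative):

    #include <linux/xarray.h>

    static int store_page(struct xarray *xa, unsigned long index,
                          struct page *page, gfp_t gfp)
    {
            XA_STATE(xas, xa, index);

            do {
                    xas_lock(&xas);
                    xas_store(&xas, page);
                    xas_unlock(&xas);
                    /* xas_nomem() allocates with the lock dropped and
                     * asks us to retry if the store needed memory */
            } while (xas_nomem(&xas, gfp));

            return xas_error(&xas);
    }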

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/fscache/cookie.c | 6 +-
fs/fscache/internal.h | 2 +-
fs/fscache/object.c | 2 +-
fs/fscache/page.c | 152 +++++++++++++++++++++---------------------------
fs/fscache/stats.c | 6 +-
include/linux/fscache.h | 8 +--
6 files changed, 76 insertions(+), 100 deletions(-)

diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
index e9054e0c1a49..6d45134d609e 100644
--- a/fs/fscache/cookie.c
+++ b/fs/fscache/cookie.c
@@ -109,9 +109,7 @@ struct fscache_cookie *__fscache_acquire_cookie(
cookie->netfs_data = netfs_data;
cookie->flags = (1 << FSCACHE_COOKIE_NO_DATA_YET);

- /* radix tree insertion won't use the preallocation pool unless it's
- * told it may not wait */
- INIT_RADIX_TREE(&cookie->stores, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
+ xa_init(&cookie->stores);

switch (cookie->def->type) {
case FSCACHE_COOKIE_TYPE_INDEX:
@@ -608,7 +606,7 @@ void __fscache_relinquish_cookie(struct fscache_cookie *cookie, bool retire)
/* Clear pointers back to the netfs */
cookie->netfs_data = NULL;
cookie->def = NULL;
- BUG_ON(!radix_tree_empty(&cookie->stores));
+ BUG_ON(!xa_empty(&cookie->stores));

if (cookie->parent) {
ASSERTCMP(atomic_read(&cookie->parent->usage), >, 0);
diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
index 0ff4b49a0037..468d9bd7f8c3 100644
--- a/fs/fscache/internal.h
+++ b/fs/fscache/internal.h
@@ -200,7 +200,7 @@ extern atomic_t fscache_n_stores_oom;
extern atomic_t fscache_n_store_ops;
extern atomic_t fscache_n_store_calls;
extern atomic_t fscache_n_store_pages;
-extern atomic_t fscache_n_store_radix_deletes;
+extern atomic_t fscache_n_store_xarray_deletes;
extern atomic_t fscache_n_store_pages_over_limit;

extern atomic_t fscache_n_store_vmscan_not_storing;
diff --git a/fs/fscache/object.c b/fs/fscache/object.c
index aa0e71f02c33..ed165736a358 100644
--- a/fs/fscache/object.c
+++ b/fs/fscache/object.c
@@ -956,7 +956,7 @@ static const struct fscache_state *_fscache_invalidate_object(struct fscache_obj
* retire the object instead.
*/
if (!fscache_use_cookie(object)) {
- ASSERT(radix_tree_empty(&object->cookie->stores));
+ ASSERT(xa_empty(&object->cookie->stores));
set_bit(FSCACHE_OBJECT_RETIRED, &object->flags);
_leave(" [no cookie]");
return transit_to(KILL_OBJECT);
diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 961029e04027..315e2745f822 100644
--- a/fs/fscache/page.c
+++ b/fs/fscache/page.c
@@ -22,13 +22,7 @@
*/
bool __fscache_check_page_write(struct fscache_cookie *cookie, struct page *page)
{
- void *val;
-
- rcu_read_lock();
- val = radix_tree_lookup(&cookie->stores, page->index);
- rcu_read_unlock();
-
- return val != NULL;
+ return xa_load(&cookie->stores, page->index) != NULL;
}
EXPORT_SYMBOL(__fscache_check_page_write);

@@ -64,15 +58,15 @@ bool __fscache_maybe_release_page(struct fscache_cookie *cookie,
struct page *page,
gfp_t gfp)
{
+ XA_STATE(xas, &cookie->stores, page->index);
struct page *xpage;
- void *val;

_enter("%p,%p,%x", cookie, page, gfp);

try_again:
rcu_read_lock();
- val = radix_tree_lookup(&cookie->stores, page->index);
- if (!val) {
+ xpage = xas_load(&xas);
+ if (!xpage) {
rcu_read_unlock();
fscache_stat(&fscache_n_store_vmscan_not_storing);
__fscache_uncache_page(cookie, page);
@@ -81,31 +75,32 @@ bool __fscache_maybe_release_page(struct fscache_cookie *cookie,

/* see if the page is actually undergoing storage - if so we can't get
* rid of it till the cache has finished with it */
- if (radix_tree_tag_get(&cookie->stores, page->index,
- FSCACHE_COOKIE_STORING_TAG)) {
+ if (xas_get_tag(&xas, FSCACHE_COOKIE_STORING_TAG)) {
rcu_read_unlock();
+ xas_retry(&xas, XA_RETRY_ENTRY);
goto page_busy;
}

/* the page is pending storage, so we attempt to cancel the store and
* discard the store request so that the page can be reclaimed */
- spin_lock(&cookie->stores_lock);
+ xas_retry(&xas, XA_RETRY_ENTRY);
+ xas_lock(&xas);
rcu_read_unlock();

- if (radix_tree_tag_get(&cookie->stores, page->index,
- FSCACHE_COOKIE_STORING_TAG)) {
+ xpage = xas_load(&xas);
+ if (xas_get_tag(&xas, FSCACHE_COOKIE_STORING_TAG)) {
/* the page started to undergo storage whilst we were looking,
* so now we can only wait or return */
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
goto page_busy;
}

- xpage = radix_tree_delete(&cookie->stores, page->index);
+ xas_store(&xas, NULL);
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);

if (xpage) {
fscache_stat(&fscache_n_store_vmscan_cancelled);
- fscache_stat(&fscache_n_store_radix_deletes);
+ fscache_stat(&fscache_n_store_xarray_deletes);
ASSERTCMP(xpage, ==, page);
} else {
fscache_stat(&fscache_n_store_vmscan_gone);
@@ -149,17 +144,19 @@ static void fscache_end_page_write(struct fscache_object *object,
spin_lock(&object->lock);
cookie = object->cookie;
if (cookie) {
+ XA_STATE(xas, &cookie->stores, page->index);
/* delete the page from the tree if it is now no longer
* pending */
- spin_lock(&cookie->stores_lock);
- radix_tree_tag_clear(&cookie->stores, page->index,
- FSCACHE_COOKIE_STORING_TAG);
- if (!radix_tree_tag_get(&cookie->stores, page->index,
- FSCACHE_COOKIE_PENDING_TAG)) {
- fscache_stat(&fscache_n_store_radix_deletes);
- xpage = radix_tree_delete(&cookie->stores, page->index);
+ xas_lock(&xas);
+ xpage = xas_load(&xas);
+ xas_clear_tag(&xas, FSCACHE_COOKIE_STORING_TAG);
+ if (xas_get_tag(&xas, FSCACHE_COOKIE_PENDING_TAG)) {
+ xpage = NULL;
+ } else {
+ fscache_stat(&fscache_n_store_xarray_deletes);
+ xas_store(&xas, NULL);
}
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
wake_up_bit(&cookie->flags, 0);
}
spin_unlock(&object->lock);
@@ -765,13 +762,12 @@ static void fscache_release_write_op(struct fscache_operation *_op)
*/
static void fscache_write_op(struct fscache_operation *_op)
{
+ XA_STATE(xas, NULL, 0);
struct fscache_storage *op =
container_of(_op, struct fscache_storage, op);
struct fscache_object *object = op->op.object;
struct fscache_cookie *cookie;
struct page *page;
- unsigned n;
- void *results[1];
int ret;

_enter("{OP%x,%d}", op->op.debug_id, atomic_read(&op->op.usage));
@@ -804,29 +800,25 @@ static void fscache_write_op(struct fscache_operation *_op)
return;
}

- spin_lock(&cookie->stores_lock);
+ xas.xa = &cookie->stores;
+ xas_lock(&xas);

fscache_stat(&fscache_n_store_calls);

/* find a page to store */
- page = NULL;
- n = radix_tree_gang_lookup_tag(&cookie->stores, results, 0, 1,
- FSCACHE_COOKIE_PENDING_TAG);
- if (n != 1)
+ page = xas_find_tag(&xas, ULONG_MAX, FSCACHE_COOKIE_PENDING_TAG);
+ if (!page)
goto superseded;
- page = results[0];
- _debug("gang %d [%lx]", n, page->index);
+ _debug("found %lx", page->index);
if (page->index >= op->store_limit) {
fscache_stat(&fscache_n_store_pages_over_limit);
goto superseded;
}

- radix_tree_tag_set(&cookie->stores, page->index,
- FSCACHE_COOKIE_STORING_TAG);
- radix_tree_tag_clear(&cookie->stores, page->index,
- FSCACHE_COOKIE_PENDING_TAG);
+ xas_set_tag(&xas, FSCACHE_COOKIE_STORING_TAG);
+ xas_clear_tag(&xas, FSCACHE_COOKIE_PENDING_TAG);
+ xas_unlock(&xas);

- spin_unlock(&cookie->stores_lock);
spin_unlock(&object->lock);

fscache_stat(&fscache_n_store_pages);
@@ -848,7 +840,7 @@ static void fscache_write_op(struct fscache_operation *_op)
/* this writer is going away and there aren't any more things to
* write */
_debug("cease");
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
clear_bit(FSCACHE_OBJECT_PENDING_WRITE, &object->flags);
spin_unlock(&object->lock);
fscache_op_complete(&op->op, true);
@@ -860,32 +852,25 @@ static void fscache_write_op(struct fscache_operation *_op)
*/
void fscache_invalidate_writes(struct fscache_cookie *cookie)
{
+ XA_STATE(xas, &cookie->stores, 0);
+ unsigned int cleared = 0;
struct page *page;
- void *results[16];
- int n, i;

_enter("");

- for (;;) {
- spin_lock(&cookie->stores_lock);
- n = radix_tree_gang_lookup_tag(&cookie->stores, results, 0,
- ARRAY_SIZE(results),
- FSCACHE_COOKIE_PENDING_TAG);
- if (n == 0) {
- spin_unlock(&cookie->stores_lock);
- break;
- }
-
- for (i = n - 1; i >= 0; i--) {
- page = results[i];
- radix_tree_delete(&cookie->stores, page->index);
- }
+ xas_lock(&xas);
+ xas_for_each_tag(&xas, page, ULONG_MAX, FSCACHE_COOKIE_PENDING_TAG) {
+ xas_store(&xas, NULL);
+ put_page(page);
+ if (++cleared % XA_CHECK_SCHED)
+ continue;

- spin_unlock(&cookie->stores_lock);
-
- for (i = n - 1; i >= 0; i--)
- put_page(results[i]);
+ xas_pause(&xas);
+ xas_unlock(&xas);
+ cond_resched();
+ xas_lock(&xas);
}
+ xas_unlock(&xas);

wake_up_bit(&cookie->flags, 0);

@@ -925,9 +910,11 @@ int __fscache_write_page(struct fscache_cookie *cookie,
struct page *page,
gfp_t gfp)
{
+ XA_STATE(xas, &cookie->stores, page->index);
struct fscache_storage *op;
struct fscache_object *object;
bool wake_cookie = false;
+ struct page *xpage;
int ret;

_enter("%p,%x,", cookie, (u32) page->flags);
@@ -952,10 +939,7 @@ int __fscache_write_page(struct fscache_cookie *cookie,
(1 << FSCACHE_OP_WAITING) |
(1 << FSCACHE_OP_UNUSE_COOKIE);

- ret = radix_tree_maybe_preload(gfp & ~__GFP_HIGHMEM);
- if (ret < 0)
- goto nomem_free;
-
+retry:
ret = -ENOBUFS;
spin_lock(&cookie->lock);

@@ -967,23 +951,19 @@ int __fscache_write_page(struct fscache_cookie *cookie,
if (test_bit(FSCACHE_IOERROR, &object->cache->flags))
goto nobufs;

- /* add the page to the pending-storage radix tree on the backing
- * object */
+ /* add the page to the pending-storage xarray on the backing object */
spin_lock(&object->lock);
- spin_lock(&cookie->stores_lock);
+ xas_lock(&xas);

_debug("store limit %llx", (unsigned long long) object->store_limit);

- ret = radix_tree_insert(&cookie->stores, page->index, page);
- if (ret < 0) {
- if (ret == -EEXIST)
- goto already_queued;
- _debug("insert failed %d", ret);
+ xpage = xas_create(&xas);
+ if (xpage)
+ goto already_queued;
+ if (xas_error(&xas))
goto nobufs_unlock_obj;
- }
-
- radix_tree_tag_set(&cookie->stores, page->index,
- FSCACHE_COOKIE_PENDING_TAG);
+ xas_store(&xas, page);
+ xas_set_tag(&xas, FSCACHE_COOKIE_PENDING_TAG);
get_page(page);

/* we only want one writer at a time, but we do need to queue new
@@ -991,7 +971,7 @@ int __fscache_write_page(struct fscache_cookie *cookie,
if (test_and_set_bit(FSCACHE_OBJECT_PENDING_WRITE, &object->flags))
goto already_pending;

- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
spin_unlock(&object->lock);

op->op.debug_id = atomic_inc_return(&fscache_op_debug_id);
@@ -1002,7 +982,6 @@ int __fscache_write_page(struct fscache_cookie *cookie,
goto submit_failed;

spin_unlock(&cookie->lock);
- radix_tree_preload_end();
fscache_stat(&fscache_n_store_ops);
fscache_stat(&fscache_n_stores_ok);

@@ -1014,30 +993,31 @@ int __fscache_write_page(struct fscache_cookie *cookie,
already_queued:
fscache_stat(&fscache_n_stores_again);
already_pending:
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
spin_unlock(&object->lock);
spin_unlock(&cookie->lock);
- radix_tree_preload_end();
fscache_put_operation(&op->op);
fscache_stat(&fscache_n_stores_ok);
_leave(" = 0");
return 0;

submit_failed:
- spin_lock(&cookie->stores_lock);
- radix_tree_delete(&cookie->stores, page->index);
- spin_unlock(&cookie->stores_lock);
+ xa_erase(&cookie->stores, page->index);
wake_cookie = __fscache_unuse_cookie(cookie);
put_page(page);
ret = -ENOBUFS;
goto nobufs;

nobufs_unlock_obj:
- spin_unlock(&cookie->stores_lock);
+ xas_unlock(&xas);
spin_unlock(&object->lock);
+ spin_unlock(&cookie->lock);
+ if (xas_nomem(&xas, gfp))
+ goto retry;
+ goto nobufs2;
nobufs:
spin_unlock(&cookie->lock);
- radix_tree_preload_end();
+nobufs2:
fscache_put_operation(&op->op);
if (wake_cookie)
__fscache_wake_unused_cookie(cookie);
@@ -1045,8 +1025,6 @@ int __fscache_write_page(struct fscache_cookie *cookie,
_leave(" = -ENOBUFS");
return -ENOBUFS;

-nomem_free:
- fscache_put_operation(&op->op);
nomem:
fscache_stat(&fscache_n_stores_oom);
_leave(" = -ENOMEM");
diff --git a/fs/fscache/stats.c b/fs/fscache/stats.c
index 7ac6e839b065..9c012b4229cd 100644
--- a/fs/fscache/stats.c
+++ b/fs/fscache/stats.c
@@ -63,7 +63,7 @@ atomic_t fscache_n_stores_oom;
atomic_t fscache_n_store_ops;
atomic_t fscache_n_store_calls;
atomic_t fscache_n_store_pages;
-atomic_t fscache_n_store_radix_deletes;
+atomic_t fscache_n_store_xarray_deletes;
atomic_t fscache_n_store_pages_over_limit;

atomic_t fscache_n_store_vmscan_not_storing;
@@ -232,11 +232,11 @@ static int fscache_stats_show(struct seq_file *m, void *v)
atomic_read(&fscache_n_stores_again),
atomic_read(&fscache_n_stores_nobufs),
atomic_read(&fscache_n_stores_oom));
- seq_printf(m, "Stores : ops=%u run=%u pgs=%u rxd=%u olm=%u\n",
+ seq_printf(m, "Stores : ops=%u run=%u pgs=%u xar=%u olm=%u\n",
atomic_read(&fscache_n_store_ops),
atomic_read(&fscache_n_store_calls),
atomic_read(&fscache_n_store_pages),
- atomic_read(&fscache_n_store_radix_deletes),
+ atomic_read(&fscache_n_store_xarray_deletes),
atomic_read(&fscache_n_store_pages_over_limit));

seq_printf(m, "VmScan : nos=%u gon=%u bsy=%u can=%u wt=%u\n",
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index e7cc90aeac3d..9a3b83b295a3 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -22,7 +22,7 @@
#include <linux/list.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>

#if defined(CONFIG_FSCACHE) || defined(CONFIG_FSCACHE_MODULE)
#define fscache_available() (1)
@@ -175,9 +175,9 @@ struct fscache_cookie {
const struct fscache_cookie_def *def; /* definition */
struct fscache_cookie *parent; /* parent of this entry */
void *netfs_data; /* back pointer to netfs */
- struct radix_tree_root stores; /* pages to be stored on this cookie */
-#define FSCACHE_COOKIE_PENDING_TAG 0 /* pages tag: pending write to cache */
-#define FSCACHE_COOKIE_STORING_TAG 1 /* pages tag: writing to cache */
+ struct xarray stores; /* pages to be stored on this cookie */
+#define FSCACHE_COOKIE_PENDING_TAG XA_TAG_0 /* pages tag: pending write to cache */
+#define FSCACHE_COOKIE_STORING_TAG XA_TAG_1 /* pages tag: writing to cache */

unsigned long flags;
#define FSCACHE_COOKIE_LOOKING_UP 0 /* T if non-index cookie being looked up still */
--
2.15.1


2018-01-17 20:35:51

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 75/99] md: Convert raid5-cache to XArray

From: Matthew Wilcox <[email protected]>

This is the first user of the radix tree I've converted which was
storing numbers rather than pointers. I'm fairly pleased with how
well it came out. There's less boilerplate involved than there was
with the radix tree, so that's a win. It does use the advanced API,
and I think that's a signal that there needs to be a separate API
for storing bare integers in the XArray.
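
The core of the conversion is the counter idiom in the diff below. As a
minimal sketch, using only the xa_mk_value()/xa_to_value() and xas_*
calls that appear in this patch (count_inc() and 'counts' are
illustrative names, not part of the series):

    static void count_inc(struct xarray *counts, unsigned long index)
    {
        XA_STATE(xas, counts, index);
        void *entry;

        xas_lock(&xas);
        entry = xas_load(&xas);
        if (entry)
            entry = xa_mk_value(xa_to_value(entry) + 1);
        else
            entry = xa_mk_value(1);
        xas_store(&xas, entry);
        /* on allocation failure, xas_error(&xas) is set for the caller */
        xas_unlock(&xas);
    }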

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/md/raid5-cache.c | 119 ++++++++++++++++-------------------------------
1 file changed, 40 insertions(+), 79 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 39f31f07ffe9..2c8ad0ed9b48 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -158,9 +158,8 @@ struct r5l_log {
/* to disable write back during in degraded mode */
struct work_struct disable_writeback_work;

- /* to for chunk_aligned_read in writeback mode, details below */
- spinlock_t tree_lock;
- struct radix_tree_root big_stripe_tree;
+ /* for chunk_aligned_read in writeback mode, details below */
+ struct xarray big_stripe;
};

/*
@@ -170,9 +169,8 @@ struct r5l_log {
* chunk contains 64 4kB-page, so this chunk contain 64 stripes). For
* chunk_aligned_read, these stripes are grouped into one "big_stripe".
* For each big_stripe, we count how many stripes of this big_stripe
- * are in the write back cache. These data are tracked in a radix tree
- * (big_stripe_tree). We use radix_tree item pointer as the counter.
- * r5c_tree_index() is used to calculate keys for the radix tree.
+ * are in the write back cache. This counter is tracked in an xarray
+ * (big_stripe). r5c_index() is used to calculate the index.
*
* chunk_aligned_read() calls r5c_big_stripe_cached() to look up
* big_stripe of each chunk in the tree. If this big_stripe is in the
@@ -180,9 +178,9 @@ struct r5l_log {
* rcu_read_lock().
*
* It is necessary to remember whether a stripe is counted in
- * big_stripe_tree. Instead of adding new flag, we reuses existing flags:
+ * big_stripe. Instead of adding a new flag, we reuse existing flags:
* STRIPE_R5C_PARTIAL_STRIPE and STRIPE_R5C_FULL_STRIPE. If either of these
- * two flags are set, the stripe is counted in big_stripe_tree. This
+ * two flags are set, the stripe is counted in big_stripe. This
* requires moving set_bit(STRIPE_R5C_PARTIAL_STRIPE) to
* r5c_try_caching_write(); and moving clear_bit of
* STRIPE_R5C_PARTIAL_STRIPE and STRIPE_R5C_FULL_STRIPE to
@@ -190,23 +188,13 @@ struct r5l_log {
*/

/*
- * radix tree requests lowest 2 bits of data pointer to be 2b'00.
- * So it is necessary to left shift the counter by 2 bits before using it
- * as data pointer of the tree.
- */
-#define R5C_RADIX_COUNT_SHIFT 2
-
-/*
- * calculate key for big_stripe_tree
+ * calculate the index into big_stripe
*
* sect: align_bi->bi_iter.bi_sector or sh->sector
*/
-static inline sector_t r5c_tree_index(struct r5conf *conf,
- sector_t sect)
+static inline sector_t r5c_index(struct r5conf *conf, sector_t sect)
{
- sector_t offset;
-
- offset = sector_div(sect, conf->chunk_sectors);
+ sector_div(sect, conf->chunk_sectors);
return sect;
}

@@ -2646,10 +2634,6 @@ int r5c_try_caching_write(struct r5conf *conf,
int i;
struct r5dev *dev;
int to_cache = 0;
- void **pslot;
- sector_t tree_index;
- int ret;
- uintptr_t refcount;

BUG_ON(!r5c_is_writeback(log));

@@ -2697,39 +2681,29 @@ int r5c_try_caching_write(struct r5conf *conf,
}
}

- /* if the stripe is not counted in big_stripe_tree, add it now */
+ /* if the stripe is not counted in big_stripe, add it now */
if (!test_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state) &&
!test_bit(STRIPE_R5C_FULL_STRIPE, &sh->state)) {
- tree_index = r5c_tree_index(conf, sh->sector);
- spin_lock(&log->tree_lock);
- pslot = radix_tree_lookup_slot(&log->big_stripe_tree,
- tree_index);
- if (pslot) {
- refcount = (uintptr_t)radix_tree_deref_slot_protected(
- pslot, &log->tree_lock) >>
- R5C_RADIX_COUNT_SHIFT;
- radix_tree_replace_slot(
- &log->big_stripe_tree, pslot,
- (void *)((refcount + 1) << R5C_RADIX_COUNT_SHIFT));
- } else {
- /*
- * this radix_tree_insert can fail safely, so no
- * need to call radix_tree_preload()
- */
- ret = radix_tree_insert(
- &log->big_stripe_tree, tree_index,
- (void *)(1 << R5C_RADIX_COUNT_SHIFT));
- if (ret) {
- spin_unlock(&log->tree_lock);
- r5c_make_stripe_write_out(sh);
- return -EAGAIN;
- }
+ XA_STATE(xas, &log->big_stripe, r5c_index(conf, sh->sector));
+ void *entry;
+
+ /* Caller would rather handle failures than supply GFP flags */
+ xas_lock(&xas);
+ entry = xas_create(&xas);
+ if (entry)
+ entry = xa_mk_value(xa_to_value(entry) + 1);
+ else
+ entry = xa_mk_value(1);
+ xas_store(&xas, entry);
+ xas_unlock(&xas);
+ if (xas_error(&xas)) {
+ r5c_make_stripe_write_out(sh);
+ return -EAGAIN;
}
- spin_unlock(&log->tree_lock);

/*
* set STRIPE_R5C_PARTIAL_STRIPE, this shows the stripe is
- * counted in the radix tree
+ * counted in big_stripe
*/
set_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state);
atomic_inc(&conf->r5c_cached_partial_stripes);
@@ -2812,9 +2786,6 @@ void r5c_finish_stripe_write_out(struct r5conf *conf,
struct r5l_log *log = conf->log;
int i;
int do_wakeup = 0;
- sector_t tree_index;
- void **pslot;
- uintptr_t refcount;

if (!log || !test_bit(R5_InJournal, &sh->dev[sh->pd_idx].flags))
return;
@@ -2852,24 +2823,21 @@ void r5c_finish_stripe_write_out(struct r5conf *conf,
atomic_dec(&log->stripe_in_journal_count);
r5c_update_log_state(log);

- /* stop counting this stripe in big_stripe_tree */
+ /* stop counting this stripe in big_stripe */
if (test_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state) ||
test_bit(STRIPE_R5C_FULL_STRIPE, &sh->state)) {
- tree_index = r5c_tree_index(conf, sh->sector);
- spin_lock(&log->tree_lock);
- pslot = radix_tree_lookup_slot(&log->big_stripe_tree,
- tree_index);
- BUG_ON(pslot == NULL);
- refcount = (uintptr_t)radix_tree_deref_slot_protected(
- pslot, &log->tree_lock) >>
- R5C_RADIX_COUNT_SHIFT;
- if (refcount == 1)
- radix_tree_delete(&log->big_stripe_tree, tree_index);
+ XA_STATE(xas, &log->big_stripe, r5c_index(conf, sh->sector));
+ void *entry;
+
+ xas_lock(&xas);
+ entry = xas_load(&xas);
+ BUG_ON(!entry);
+ if (entry == xa_mk_value(1))
+ entry = NULL;
else
- radix_tree_replace_slot(
- &log->big_stripe_tree, pslot,
- (void *)((refcount - 1) << R5C_RADIX_COUNT_SHIFT));
- spin_unlock(&log->tree_lock);
+ entry = xa_mk_value(xa_to_value(entry) - 1);
+ xas_store(&xas, entry);
+ xas_unlock(&xas);
}

if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state)) {
@@ -2949,16 +2917,10 @@ int r5c_cache_data(struct r5l_log *log, struct stripe_head *sh)
bool r5c_big_stripe_cached(struct r5conf *conf, sector_t sect)
{
struct r5l_log *log = conf->log;
- sector_t tree_index;
- void *slot;

if (!log)
return false;
-
- WARN_ON_ONCE(!rcu_read_lock_held());
- tree_index = r5c_tree_index(conf, sect);
- slot = radix_tree_lookup(&log->big_stripe_tree, tree_index);
- return slot != NULL;
+ return xa_load(&log->big_stripe, r5c_index(conf, sect)) != NULL;
}

static int r5l_load_log(struct r5l_log *log)
@@ -3112,8 +3074,7 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
if (!log->meta_pool)
goto out_mempool;

- spin_lock_init(&log->tree_lock);
- INIT_RADIX_TREE(&log->big_stripe_tree, GFP_NOWAIT | __GFP_NOWARN);
+ xa_init(&log->big_stripe);

log->reclaim_thread = md_register_thread(r5l_reclaim_thread,
log->rdev->mddev, "reclaim");
--
2.15.1


2018-01-17 20:37:12

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 72/99] xfs: Convert xfs dquot to XArray

From: Matthew Wilcox <[email protected]>

This is a pretty straightforward conversion.
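
The only subtle part is the insertion path: xa_cmpxchg() returns
whatever was already stored at that index, with allocation failures
encoded as an error entry. A minimal sketch of the duplicate-handling
pattern used below (dq_insert() is an illustrative name, not part of
this patch):

    static int dq_insert(struct xarray *xa, unsigned long id,
                         struct xfs_dquot *new)
    {
        struct xfs_dquot *old;

        old = xa_cmpxchg(xa, id, NULL, new, GFP_NOFS);
        if (xa_is_err(old))
            return xa_err(old);    /* e.g. -ENOMEM */
        return old ? -EEXIST : 0;  /* someone else won the race */
    }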

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/xfs/xfs_dquot.c | 38 +++++++++++++++++++++-----------------
fs/xfs/xfs_qm.c | 36 ++++++++++++++++++------------------
fs/xfs/xfs_qm.h | 18 +++++++++---------
3 files changed, 48 insertions(+), 44 deletions(-)

diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
index e2a466df5dd1..c6832db23ca8 100644
--- a/fs/xfs/xfs_dquot.c
+++ b/fs/xfs/xfs_dquot.c
@@ -44,7 +44,7 @@
* Lock order:
*
* ip->i_lock
- * qi->qi_tree_lock
+ * qi->qi_xa_lock
* dquot->q_qlock (xfs_dqlock() and friends)
* dquot->q_flush (xfs_dqflock() and friends)
* qi->qi_lru_lock
@@ -752,8 +752,8 @@ xfs_qm_dqget(
xfs_dquot_t **O_dqpp) /* OUT : locked incore dquot */
{
struct xfs_quotainfo *qi = mp->m_quotainfo;
- struct radix_tree_root *tree = xfs_dquot_tree(qi, type);
- struct xfs_dquot *dqp;
+ struct xarray *xa = xfs_dquot_xa(qi, type);
+ struct xfs_dquot *dqp, *duplicate;
int error;

ASSERT(XFS_IS_QUOTA_RUNNING(mp));
@@ -772,23 +772,24 @@ xfs_qm_dqget(
}

restart:
- mutex_lock(&qi->qi_tree_lock);
- dqp = radix_tree_lookup(tree, id);
+ mutex_lock(&qi->qi_xa_lock);
+ dqp = xa_load(xa, id);
+found:
if (dqp) {
xfs_dqlock(dqp);
if (dqp->dq_flags & XFS_DQ_FREEING) {
xfs_dqunlock(dqp);
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);
trace_xfs_dqget_freeing(dqp);
delay(1);
goto restart;
}

- /* uninit / unused quota found in radix tree, keep looking */
+ /* uninit / unused quota found, keep looking */
if (flags & XFS_QMOPT_DQNEXT) {
if (XFS_IS_DQUOT_UNINITIALIZED(dqp)) {
xfs_dqunlock(dqp);
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);
error = xfs_dq_get_next_id(mp, type, &id);
if (error)
return error;
@@ -797,14 +798,14 @@ xfs_qm_dqget(
}

dqp->q_nrefs++;
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);

trace_xfs_dqget_hit(dqp);
XFS_STATS_INC(mp, xs_qm_dqcachehits);
*O_dqpp = dqp;
return 0;
}
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);
XFS_STATS_INC(mp, xs_qm_dqcachemisses);

/*
@@ -854,20 +855,23 @@ xfs_qm_dqget(
}
}

- mutex_lock(&qi->qi_tree_lock);
- error = radix_tree_insert(tree, id, dqp);
- if (unlikely(error)) {
- WARN_ON(error != -EEXIST);
+ mutex_lock(&qi->qi_xa_lock);
+ duplicate = xa_cmpxchg(xa, id, NULL, dqp, GFP_NOFS);
+ if (unlikely(duplicate)) {
+ if (xa_is_err(duplicate)) {
+ mutex_unlock(&qi->qi_xa_lock);
+ return xa_err(duplicate);
+ }

/*
* Duplicate found. Just throw away the new dquot and start
* over.
*/
- mutex_unlock(&qi->qi_tree_lock);
trace_xfs_dqget_dup(dqp);
xfs_qm_dqdestroy(dqp);
XFS_STATS_INC(mp, xs_qm_dquot_dups);
- goto restart;
+ dqp = duplicate;
+ goto found;
}

/*
@@ -877,7 +881,7 @@ xfs_qm_dqget(
dqp->q_nrefs = 1;

qi->qi_dquots++;
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);

/* If we are asked to find next active id, keep looking */
if (flags & XFS_QMOPT_DQNEXT) {
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index b897b11afb2c..000b207762d6 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -67,7 +67,7 @@ xfs_qm_dquot_walk(
void *data)
{
struct xfs_quotainfo *qi = mp->m_quotainfo;
- struct radix_tree_root *tree = xfs_dquot_tree(qi, type);
+ struct xarray *xa = xfs_dquot_xa(qi, type);
uint32_t next_index;
int last_error = 0;
int skipped;
@@ -83,11 +83,11 @@ xfs_qm_dquot_walk(
int error = 0;
int i;

- mutex_lock(&qi->qi_tree_lock);
- nr_found = radix_tree_gang_lookup(tree, (void **)batch,
- next_index, XFS_DQ_LOOKUP_BATCH);
+ mutex_lock(&qi->qi_xa_lock);
+ nr_found = xa_extract(xa, (void **)batch, next_index,
+ ULONG_MAX, XFS_DQ_LOOKUP_BATCH, XA_PRESENT);
if (!nr_found) {
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);
break;
}

@@ -105,7 +105,7 @@ xfs_qm_dquot_walk(
last_error = error;
}

- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);

/* bail out if the filesystem is corrupted. */
if (last_error == -EFSCORRUPTED) {
@@ -178,8 +178,8 @@ xfs_qm_dqpurge(
xfs_dqfunlock(dqp);
xfs_dqunlock(dqp);

- radix_tree_delete(xfs_dquot_tree(qi, dqp->q_core.d_flags),
- be32_to_cpu(dqp->q_core.d_id));
+ xa_store(xfs_dquot_xa(qi, dqp->q_core.d_flags),
+ be32_to_cpu(dqp->q_core.d_id), NULL, GFP_NOWAIT);
qi->qi_dquots--;

/*
@@ -623,10 +623,10 @@ xfs_qm_init_quotainfo(
if (error)
goto out_free_lru;

- INIT_RADIX_TREE(&qinf->qi_uquota_tree, GFP_NOFS);
- INIT_RADIX_TREE(&qinf->qi_gquota_tree, GFP_NOFS);
- INIT_RADIX_TREE(&qinf->qi_pquota_tree, GFP_NOFS);
- mutex_init(&qinf->qi_tree_lock);
+ xa_init(&qinf->qi_uquota_xa);
+ xa_init(&qinf->qi_gquota_xa);
+ xa_init(&qinf->qi_pquota_xa);
+ mutex_init(&qinf->qi_xa_lock);

/* mutex used to serialize quotaoffs */
mutex_init(&qinf->qi_quotaofflock);
@@ -704,7 +704,7 @@ xfs_qm_init_quotainfo(

out_free_inos:
mutex_destroy(&qinf->qi_quotaofflock);
- mutex_destroy(&qinf->qi_tree_lock);
+ mutex_destroy(&qinf->qi_xa_lock);
xfs_qm_destroy_quotainos(qinf);
out_free_lru:
list_lru_destroy(&qinf->qi_lru);
@@ -731,7 +731,7 @@ xfs_qm_destroy_quotainfo(
unregister_shrinker(&qi->qi_shrinker);
list_lru_destroy(&qi->qi_lru);
xfs_qm_destroy_quotainos(qi);
- mutex_destroy(&qi->qi_tree_lock);
+ mutex_destroy(&qi->qi_xa_lock);
mutex_destroy(&qi->qi_quotaofflock);
kmem_free(qi);
mp->m_quotainfo = NULL;
@@ -1620,12 +1620,12 @@ xfs_qm_dqfree_one(
struct xfs_mount *mp = dqp->q_mount;
struct xfs_quotainfo *qi = mp->m_quotainfo;

- mutex_lock(&qi->qi_tree_lock);
- radix_tree_delete(xfs_dquot_tree(qi, dqp->q_core.d_flags),
- be32_to_cpu(dqp->q_core.d_id));
+ mutex_lock(&qi->qi_xa_lock);
+ xa_store(xfs_dquot_xa(qi, dqp->q_core.d_flags),
+ be32_to_cpu(dqp->q_core.d_id), NULL, GFP_NOWAIT);

qi->qi_dquots--;
- mutex_unlock(&qi->qi_tree_lock);
+ mutex_unlock(&qi->qi_xa_lock);

xfs_qm_dqdestroy(dqp);
}
diff --git a/fs/xfs/xfs_qm.h b/fs/xfs/xfs_qm.h
index 2975a822e9f0..946f929f7bfb 100644
--- a/fs/xfs/xfs_qm.h
+++ b/fs/xfs/xfs_qm.h
@@ -67,10 +67,10 @@ struct xfs_def_quota {
* The mount structure keeps a pointer to this.
*/
typedef struct xfs_quotainfo {
- struct radix_tree_root qi_uquota_tree;
- struct radix_tree_root qi_gquota_tree;
- struct radix_tree_root qi_pquota_tree;
- struct mutex qi_tree_lock;
+ struct xarray qi_uquota_xa;
+ struct xarray qi_gquota_xa;
+ struct xarray qi_pquota_xa;
+ struct mutex qi_xa_lock;
struct xfs_inode *qi_uquotaip; /* user quota inode */
struct xfs_inode *qi_gquotaip; /* group quota inode */
struct xfs_inode *qi_pquotaip; /* project quota inode */
@@ -91,18 +91,18 @@ typedef struct xfs_quotainfo {
struct shrinker qi_shrinker;
} xfs_quotainfo_t;

-static inline struct radix_tree_root *
-xfs_dquot_tree(
+static inline struct xarray *
+xfs_dquot_xa(
struct xfs_quotainfo *qi,
int type)
{
switch (type) {
case XFS_DQ_USER:
- return &qi->qi_uquota_tree;
+ return &qi->qi_uquota_xa;
case XFS_DQ_GROUP:
- return &qi->qi_gquota_tree;
+ return &qi->qi_gquota_xa;
case XFS_DQ_PROJ:
- return &qi->qi_pquota_tree;
+ return &qi->qi_pquota_xa;
default:
ASSERT(0);
}
--
2.15.1


2018-01-17 20:38:03

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 73/99] xfs: Convert mru cache to XArray

From: Matthew Wilcox <[email protected]>

This eliminates a call to radix_tree_preload().
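
The preload is replaced by attempting the store under the lock and
letting xas_nomem() allocate outside it when that fails. A sketch of
the retry loop, using only the xas_* calls from the diff below:

    XA_STATE(xas, &mru->store, key);
    int error;

    do {
        xas_lock(&xas);
        xas_store(&xas, elem);      /* may record -ENOMEM in xas */
        error = xas_error(&xas);
        xas_unlock(&xas);
        /* xas_nomem() allocates outside the lock and asks us to retry */
    } while (xas_nomem(&xas, GFP_NOFS));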

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/xfs/xfs_mru_cache.c | 72 +++++++++++++++++++++++---------------------------
1 file changed, 33 insertions(+), 39 deletions(-)

diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
index f8a674d7f092..2179bede5396 100644
--- a/fs/xfs/xfs_mru_cache.c
+++ b/fs/xfs/xfs_mru_cache.c
@@ -101,10 +101,9 @@
* an infinite loop in the code.
*/
struct xfs_mru_cache {
- struct radix_tree_root store; /* Core storage data structure. */
+ struct xarray store; /* Core storage data structure. */
struct list_head *lists; /* Array of lists, one per grp. */
struct list_head reap_list; /* Elements overdue for reaping. */
- spinlock_t lock; /* Lock to protect this struct. */
unsigned int grp_count; /* Number of discrete groups. */
unsigned int grp_time; /* Time period spanned by grps. */
unsigned int lru_grp; /* Group containing time zero. */
@@ -232,22 +231,21 @@ _xfs_mru_cache_list_insert(
* data store, removing it from the reap list, calling the client's free
* function and deleting the element from the element zone.
*
- * We get called holding the mru->lock, which we drop and then reacquire.
- * Sparse need special help with this to tell it we know what we are doing.
+ * We get called holding the mru->store lock, which we drop and then reacquire.
+ * Sparse needs special help with this to tell it we know what we are doing.
*/
STATIC void
_xfs_mru_cache_clear_reap_list(
struct xfs_mru_cache *mru)
- __releases(mru->lock) __acquires(mru->lock)
+ __releases(mru->store) __acquires(mru->store)
{
struct xfs_mru_cache_elem *elem, *next;
struct list_head tmp;

INIT_LIST_HEAD(&tmp);
list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) {
-
/* Remove the element from the data store. */
- radix_tree_delete(&mru->store, elem->key);
+ __xa_erase(&mru->store, elem->key);

/*
* remove to temp list so it can be freed without
@@ -255,14 +253,14 @@ _xfs_mru_cache_clear_reap_list(
*/
list_move(&elem->list_node, &tmp);
}
- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);

list_for_each_entry_safe(elem, next, &tmp, list_node) {
list_del_init(&elem->list_node);
mru->free_func(elem);
}

- spin_lock(&mru->lock);
+ xa_lock(&mru->store);
}

/*
@@ -284,7 +282,7 @@ _xfs_mru_cache_reap(
if (!mru || !mru->lists)
return;

- spin_lock(&mru->lock);
+ xa_lock(&mru->store);
next = _xfs_mru_cache_migrate(mru, jiffies);
_xfs_mru_cache_clear_reap_list(mru);

@@ -298,7 +296,7 @@ _xfs_mru_cache_reap(
queue_delayed_work(xfs_mru_reap_wq, &mru->work, next);
}

- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);
}

int
@@ -358,13 +356,8 @@ xfs_mru_cache_create(
for (grp = 0; grp < mru->grp_count; grp++)
INIT_LIST_HEAD(mru->lists + grp);

- /*
- * We use GFP_KERNEL radix tree preload and do inserts under a
- * spinlock so GFP_ATOMIC is appropriate for the radix tree itself.
- */
- INIT_RADIX_TREE(&mru->store, GFP_ATOMIC);
+ xa_init(&mru->store);
INIT_LIST_HEAD(&mru->reap_list);
- spin_lock_init(&mru->lock);
INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap);

mru->grp_time = grp_time;
@@ -394,17 +387,17 @@ xfs_mru_cache_flush(
if (!mru || !mru->lists)
return;

- spin_lock(&mru->lock);
+ xa_lock(&mru->store);
if (mru->queued) {
- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);
cancel_delayed_work_sync(&mru->work);
- spin_lock(&mru->lock);
+ xa_lock(&mru->store);
}

_xfs_mru_cache_migrate(mru, jiffies + mru->grp_count * mru->grp_time);
_xfs_mru_cache_clear_reap_list(mru);

- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);
}

void
@@ -431,24 +424,24 @@ xfs_mru_cache_insert(
unsigned long key,
struct xfs_mru_cache_elem *elem)
{
+ XA_STATE(xas, &mru->store, key);
int error;

ASSERT(mru && mru->lists);
if (!mru || !mru->lists)
return -EINVAL;

- if (radix_tree_preload(GFP_NOFS))
- return -ENOMEM;
-
INIT_LIST_HEAD(&elem->list_node);
elem->key = key;

- spin_lock(&mru->lock);
- error = radix_tree_insert(&mru->store, key, elem);
- radix_tree_preload_end();
- if (!error)
- _xfs_mru_cache_list_insert(mru, elem);
- spin_unlock(&mru->lock);
+ do {
+ xas_lock(&xas);
+ xas_store(&xas, elem);
+ error = xas_error(&xas);
+ if (!error)
+ _xfs_mru_cache_list_insert(mru, elem);
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, GFP_NOFS));

return error;
}
@@ -470,11 +463,11 @@ xfs_mru_cache_remove(
if (!mru || !mru->lists)
return NULL;

- spin_lock(&mru->lock);
- elem = radix_tree_delete(&mru->store, key);
+ xa_lock(&mru->store);
+ elem = __xa_erase(&mru->store, key);
if (elem)
list_del(&elem->list_node);
- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);

return elem;
}
@@ -520,20 +513,21 @@ xfs_mru_cache_lookup(
struct xfs_mru_cache *mru,
unsigned long key)
{
+ XA_STATE(xas, &mru->store, key);
struct xfs_mru_cache_elem *elem;

ASSERT(mru && mru->lists);
if (!mru || !mru->lists)
return NULL;

- spin_lock(&mru->lock);
- elem = radix_tree_lookup(&mru->store, key);
+ xas_lock(&xas);
+ elem = xas_load(&xas);
if (elem) {
list_del(&elem->list_node);
_xfs_mru_cache_list_insert(mru, elem);
- __release(mru_lock); /* help sparse not be stupid */
+ __release(&xas); /* help sparse not be stupid */
} else
- spin_unlock(&mru->lock);
+ xas_unlock(&xas);

return elem;
}
@@ -546,7 +540,7 @@ xfs_mru_cache_lookup(
void
xfs_mru_cache_done(
struct xfs_mru_cache *mru)
- __releases(mru->lock)
+ __releases(mru->store)
{
- spin_unlock(&mru->lock);
+ xa_unlock(&mru->store);
}
--
2.15.1


2018-01-17 20:38:30

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 74/99] usb: Convert xhci-mem to XArray

From: Matthew Wilcox <[email protected]>

The XArray API is a slightly better fit for xhci_insert_segment_mapping()
than the radix tree API was.
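
In particular, "insert unless a mapping already exists" collapses from
preload + lookup + insert into a single call. A sketch of the shape
(trb_index() is added by this patch; map_segment() is an illustrative
name for what becomes xhci_insert_segment_mapping()):

    static int map_segment(struct xarray *map, struct xhci_segment *seg,
                           struct xhci_ring *ring, gfp_t gfp)
    {
        /* 0 if stored or already present; else e.g. -ENOMEM */
        return xa_err(xa_cmpxchg(map, trb_index(seg->dma), NULL,
                                 ring, gfp));
    }

xa_err() decodes an error entry to a negative errno and maps any
ordinary entry (including an already-present ring) to 0, which is why
a duplicate insert is silently skipped.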

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/usb/host/xhci-mem.c | 68 +++++++++++++++++++--------------------------
drivers/usb/host/xhci.h | 6 ++--
2 files changed, 32 insertions(+), 42 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 3a29b32a3bd0..a2e15a9abc30 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -149,70 +149,60 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring,
}

/*
- * We need a radix tree for mapping physical addresses of TRBs to which stream
- * ID they belong to. We need to do this because the host controller won't tell
+ * We need to map physical addresses of TRBs to the stream ID they belong to.
+ * We need to do this because the host controller won't tell
* us which stream ring the TRB came from. We could store the stream ID in an
* event data TRB, but that doesn't help us for the cancellation case, since the
* endpoint may stop before it reaches that event data TRB.
*
- * The radix tree maps the upper portion of the TRB DMA address to a ring
+ * The xarray maps the upper portion of the TRB DMA address to a ring
* segment that has the same upper portion of DMA addresses. For example, say I
* have segments of size 1KB, that are always 1KB aligned. A segment may
* start at 0x10c91000 and end at 0x10c913f0. If I use the upper 10 bits, the
- * key to the stream ID is 0x43244. I can use the DMA address of the TRB to
- * pass the radix tree a key to get the right stream ID:
+ * index of the stream ID is 0x43244. I can use the DMA address of the TRB as
+ * the xarray index to get the right stream ID:
*
* 0x10c90fff >> 10 = 0x43243
* 0x10c912c0 >> 10 = 0x43244
* 0x10c91400 >> 10 = 0x43245
*
* Obviously, only those TRBs with DMA addresses that are within the segment
- * will make the radix tree return the stream ID for that ring.
+ * will make the xarray return the stream ID for that ring.
*
- * Caveats for the radix tree:
+ * Caveats for the xarray:
*
- * The radix tree uses an unsigned long as a key pair. On 32-bit systems, an
+ * The xarray uses an unsigned long for the index. On 32-bit systems, an
* unsigned long will be 32-bits; on a 64-bit system an unsigned long will be
* 64-bits. Since we only request 32-bit DMA addresses, we can use that as the
- * key on 32-bit or 64-bit systems (it would also be fine if we asked for 64-bit
- * PCI DMA addresses on a 64-bit system). There might be a problem on 32-bit
- * extended systems (where the DMA address can be bigger than 32-bits),
+ * index on 32-bit or 64-bit systems (it would also be fine if we asked for
+ * 64-bit PCI DMA addresses on a 64-bit system). There might be a problem on
+ * 32-bit extended systems (where the DMA address can be bigger than 32-bits),
* if we allow the PCI dma mask to be bigger than 32-bits. So don't do that.
*/
-static int xhci_insert_segment_mapping(struct radix_tree_root *trb_address_map,
+
+static unsigned long trb_index(dma_addr_t dma)
+{
+ return (unsigned long)(dma >> TRB_SEGMENT_SHIFT);
+}
+
+static int xhci_insert_segment_mapping(struct xarray *trb_address_map,
struct xhci_ring *ring,
struct xhci_segment *seg,
- gfp_t mem_flags)
+ gfp_t gfp)
{
- unsigned long key;
- int ret;
-
- key = (unsigned long)(seg->dma >> TRB_SEGMENT_SHIFT);
/* Skip any segments that were already added. */
- if (radix_tree_lookup(trb_address_map, key))
- return 0;
-
- ret = radix_tree_maybe_preload(mem_flags);
- if (ret)
- return ret;
- ret = radix_tree_insert(trb_address_map,
- key, ring);
- radix_tree_preload_end();
- return ret;
+ return xa_err(xa_cmpxchg(trb_address_map, trb_index(seg->dma), NULL,
+ ring, gfp));
}

-static void xhci_remove_segment_mapping(struct radix_tree_root *trb_address_map,
+static void xhci_remove_segment_mapping(struct xarray *trb_address_map,
struct xhci_segment *seg)
{
- unsigned long key;
-
- key = (unsigned long)(seg->dma >> TRB_SEGMENT_SHIFT);
- if (radix_tree_lookup(trb_address_map, key))
- radix_tree_delete(trb_address_map, key);
+ xa_erase(trb_address_map, trb_index(seg->dma));
}

static int xhci_update_stream_segment_mapping(
- struct radix_tree_root *trb_address_map,
+ struct xarray *trb_address_map,
struct xhci_ring *ring,
struct xhci_segment *first_seg,
struct xhci_segment *last_seg,
@@ -574,8 +564,8 @@ struct xhci_ring *xhci_dma_to_transfer_ring(
u64 address)
{
if (ep->ep_state & EP_HAS_STREAMS)
- return radix_tree_lookup(&ep->stream_info->trb_address_map,
- address >> TRB_SEGMENT_SHIFT);
+ return xa_load(&ep->stream_info->trb_address_map,
+ trb_index(address));
return ep->ring;
}

@@ -654,10 +644,10 @@ struct xhci_stream_info *xhci_alloc_stream_info(struct xhci_hcd *xhci,
if (!stream_info->free_streams_command)
goto cleanup_ctx;

- INIT_RADIX_TREE(&stream_info->trb_address_map, GFP_ATOMIC);
+ xa_init(&stream_info->trb_address_map);

/* Allocate rings for all the streams that the driver will use,
- * and add their segment DMA addresses to the radix tree.
+ * and add their segment DMA addresses to the map.
* Stream 0 is reserved.
*/

@@ -2376,7 +2366,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
* Initialize the ring segment pool. The ring must be a contiguous
* structure comprised of TRBs. The TRBs must be 16 byte aligned,
* however, the command ring segment needs 64-byte aligned segments
- * and our use of dma addresses in the trb_address_map radix tree needs
+ * and our use of dma addresses in the trb_address_map xarray needs
* TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
*/
xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 054ce74524af..e8208a3eee3c 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -15,7 +15,7 @@
#include <linux/usb.h>
#include <linux/timer.h>
#include <linux/kernel.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/usb/hcd.h>
#include <linux/io-64-nonatomic-lo-hi.h>

@@ -837,7 +837,7 @@ struct xhci_stream_info {
unsigned int num_stream_ctxs;
dma_addr_t ctx_array_dma;
/* For mapping physical TRB addresses to segments in stream rings */
- struct radix_tree_root trb_address_map;
+ struct xarray trb_address_map;
struct xhci_command *free_streams_command;
};

@@ -1584,7 +1584,7 @@ struct xhci_ring {
unsigned int bounce_buf_len;
enum xhci_ring_type type;
bool last_td_was_short;
- struct radix_tree_root *trb_address_map;
+ struct xarray *trb_address_map;
};

struct xhci_erst_entry {
--
2.15.1


2018-01-17 20:38:34

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 70/99] xfs: Convert m_perag_tree to XArray

From: Matthew Wilcox <[email protected]>

Getting rid of the m_perag_lock lets us also get rid of the call to
radix_tree_preload(). This is a relatively naive conversion; we could
improve performance over the radix tree implementation by passing around
xa_state pointers instead of indices, possibly at the expense of extending
rcu_read_lock() periods.
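
The tag operations take the array's internal spinlock, which is what
makes the external m_perag_lock redundant. A before/after sketch
(queue_reclaim_work() is a hypothetical stand-in for the real work
queueing):

    /* Before: the caller supplies the locking. */
    spin_lock(&mp->m_perag_lock);
    radix_tree_tag_set(&mp->m_perag_tree, agno, XFS_ICI_RECLAIM_TAG);
    spin_unlock(&mp->m_perag_lock);

    /* After: xa_set_tag() takes xa_lock internally. */
    xa_set_tag(&mp->m_perag_xa, agno, XFS_ICI_RECLAIM_TAG);

    /* Tag queries need neither the lock nor rcu_read_lock(). */
    if (xa_tagged(&mp->m_perag_xa, XFS_ICI_RECLAIM_TAG))
        queue_reclaim_work(mp);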

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/xfs/libxfs/xfs_sb.c | 9 ++++-----
fs/xfs/xfs_icache.c | 35 +++++++++--------------------------
fs/xfs/xfs_icache.h | 6 +++---
fs/xfs/xfs_mount.c | 19 ++++---------------
fs/xfs/xfs_mount.h | 3 +--
5 files changed, 21 insertions(+), 51 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index 9b5aae2bcc0b..3b0b65eb8224 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -59,7 +59,7 @@ xfs_perag_get(
int ref = 0;

rcu_read_lock();
- pag = radix_tree_lookup(&mp->m_perag_tree, agno);
+ pag = xa_load(&mp->m_perag_xa, agno);
if (pag) {
ASSERT(atomic_read(&pag->pag_ref) >= 0);
ref = atomic_inc_return(&pag->pag_ref);
@@ -78,14 +78,13 @@ xfs_perag_get_tag(
xfs_agnumber_t first,
int tag)
{
+ XA_STATE(xas, &mp->m_perag_xa, first);
struct xfs_perag *pag;
- int found;
int ref;

rcu_read_lock();
- found = radix_tree_gang_lookup_tag(&mp->m_perag_tree,
- (void **)&pag, first, 1, tag);
- if (found <= 0) {
+ pag = xas_find_tag(&xas, ULONG_MAX, tag);
+ if (!pag) {
rcu_read_unlock();
return NULL;
}
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 3861d61fb265..65a8b91b2e70 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -156,13 +156,10 @@ static void
xfs_reclaim_work_queue(
struct xfs_mount *mp)
{
-
- rcu_read_lock();
- if (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) {
+ if (xa_tagged(&mp->m_perag_xa, XFS_ICI_RECLAIM_TAG)) {
queue_delayed_work(mp->m_reclaim_workqueue, &mp->m_reclaim_work,
msecs_to_jiffies(xfs_syncd_centisecs / 6 * 10));
}
- rcu_read_unlock();
}

/*
@@ -194,10 +191,7 @@ xfs_perag_set_reclaim_tag(
return;

/* propagate the reclaim tag up into the perag radix tree */
- spin_lock(&mp->m_perag_lock);
- radix_tree_tag_set(&mp->m_perag_tree, pag->pag_agno,
- XFS_ICI_RECLAIM_TAG);
- spin_unlock(&mp->m_perag_lock);
+ xa_set_tag(&mp->m_perag_xa, pag->pag_agno, XFS_ICI_RECLAIM_TAG);

/* schedule periodic background inode reclaim */
xfs_reclaim_work_queue(mp);
@@ -216,10 +210,7 @@ xfs_perag_clear_reclaim_tag(
return;

/* clear the reclaim tag from the perag radix tree */
- spin_lock(&mp->m_perag_lock);
- radix_tree_tag_clear(&mp->m_perag_tree, pag->pag_agno,
- XFS_ICI_RECLAIM_TAG);
- spin_unlock(&mp->m_perag_lock);
+ xa_clear_tag(&mp->m_perag_xa, pag->pag_agno, XFS_ICI_RECLAIM_TAG);
trace_xfs_perag_clear_reclaim(mp, pag->pag_agno, -1, _RET_IP_);
}

@@ -847,12 +838,10 @@ void
xfs_queue_eofblocks(
struct xfs_mount *mp)
{
- rcu_read_lock();
- if (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_EOFBLOCKS_TAG))
+ if (xa_tagged(&mp->m_perag_xa, XFS_ICI_EOFBLOCKS_TAG))
queue_delayed_work(mp->m_eofblocks_workqueue,
&mp->m_eofblocks_work,
msecs_to_jiffies(xfs_eofb_secs * 1000));
- rcu_read_unlock();
}

void
@@ -874,12 +863,10 @@ void
xfs_queue_cowblocks(
struct xfs_mount *mp)
{
- rcu_read_lock();
- if (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_COWBLOCKS_TAG))
+ if (xa_tagged(&mp->m_perag_xa, XFS_ICI_COWBLOCKS_TAG))
queue_delayed_work(mp->m_eofblocks_workqueue,
&mp->m_cowblocks_work,
msecs_to_jiffies(xfs_cowb_secs * 1000));
- rcu_read_unlock();
}

void
@@ -1557,7 +1544,7 @@ __xfs_inode_set_blocks_tag(
void (*execute)(struct xfs_mount *mp),
void (*set_tp)(struct xfs_mount *mp, xfs_agnumber_t agno,
int error, unsigned long caller_ip),
- int tag)
+ xa_tag_t tag)
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_perag *pag;
@@ -1581,11 +1568,9 @@ __xfs_inode_set_blocks_tag(
XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino), tag);
if (!tagged) {
/* propagate the eofblocks tag up into the perag radix tree */
- spin_lock(&ip->i_mount->m_perag_lock);
- radix_tree_tag_set(&ip->i_mount->m_perag_tree,
+ xa_set_tag(&ip->i_mount->m_perag_xa,
XFS_INO_TO_AGNO(ip->i_mount, ip->i_ino),
tag);
- spin_unlock(&ip->i_mount->m_perag_lock);

/* kick off background trimming */
execute(ip->i_mount);
@@ -1612,7 +1597,7 @@ __xfs_inode_clear_blocks_tag(
xfs_inode_t *ip,
void (*clear_tp)(struct xfs_mount *mp, xfs_agnumber_t agno,
int error, unsigned long caller_ip),
- int tag)
+ xa_tag_t tag)
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_perag *pag;
@@ -1628,11 +1613,9 @@ __xfs_inode_clear_blocks_tag(
XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino), tag);
if (!radix_tree_tagged(&pag->pag_ici_root, tag)) {
/* clear the eofblocks tag from the perag radix tree */
- spin_lock(&ip->i_mount->m_perag_lock);
- radix_tree_tag_clear(&ip->i_mount->m_perag_tree,
+ xa_clear_tag(&ip->i_mount->m_perag_xa,
XFS_INO_TO_AGNO(ip->i_mount, ip->i_ino),
tag);
- spin_unlock(&ip->i_mount->m_perag_lock);
clear_tp(ip->i_mount, pag->pag_agno, -1, _RET_IP_);
}

diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h
index d4a77588eca1..dfbf13b530bc 100644
--- a/fs/xfs/xfs_icache.h
+++ b/fs/xfs/xfs_icache.h
@@ -37,9 +37,9 @@ struct xfs_eofblocks {
*/
#define XFS_ICI_NO_TAG (-1) /* special flag for an untagged lookup
in xfs_inode_ag_iterator */
-#define XFS_ICI_RECLAIM_TAG 0 /* inode is to be reclaimed */
-#define XFS_ICI_EOFBLOCKS_TAG 1 /* inode has blocks beyond EOF */
-#define XFS_ICI_COWBLOCKS_TAG 2 /* inode can have cow blocks to gc */
+#define XFS_ICI_RECLAIM_TAG XA_TAG_0 /* inode is to be reclaimed */
+#define XFS_ICI_EOFBLOCKS_TAG XA_TAG_1 /* inode has blocks beyond EOF */
+#define XFS_ICI_COWBLOCKS_TAG XA_TAG_2 /* inode can have cow blocks to gc */

/*
* Flags for xfs_iget()
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index c879b517cc94..0541aeb8449c 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -156,9 +156,7 @@ xfs_free_perag(
struct xfs_perag *pag;

for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
- spin_lock(&mp->m_perag_lock);
- pag = radix_tree_delete(&mp->m_perag_tree, agno);
- spin_unlock(&mp->m_perag_lock);
+ pag = xa_erase(&mp->m_perag_xa, agno);
ASSERT(pag);
ASSERT(atomic_read(&pag->pag_ref) == 0);
xfs_buf_hash_destroy(pag);
@@ -219,19 +217,11 @@ xfs_initialize_perag(
goto out_free_pag;
init_waitqueue_head(&pag->pagb_wait);

- if (radix_tree_preload(GFP_NOFS))
- goto out_hash_destroy;
-
- spin_lock(&mp->m_perag_lock);
- if (radix_tree_insert(&mp->m_perag_tree, index, pag)) {
+ if (xa_store(&mp->m_perag_xa, index, pag, GFP_NOFS)) {
BUG();
- spin_unlock(&mp->m_perag_lock);
- radix_tree_preload_end();
error = -EEXIST;
goto out_hash_destroy;
}
- spin_unlock(&mp->m_perag_lock);
- radix_tree_preload_end();
/* first new pag is fully initialized */
if (first_initialised == NULLAGNUMBER)
first_initialised = index;
@@ -252,7 +242,7 @@ xfs_initialize_perag(
out_unwind_new_pags:
/* unwind any prior newly initialized pags */
for (index = first_initialised; index < agcount; index++) {
- pag = radix_tree_delete(&mp->m_perag_tree, index);
+ pag = xa_erase(&mp->m_perag_xa, index);
if (!pag)
break;
xfs_buf_hash_destroy(pag);
@@ -816,8 +806,7 @@ xfs_mountfs(
/*
* Allocate and initialize the per-ag data.
*/
- spin_lock_init(&mp->m_perag_lock);
- INIT_RADIX_TREE(&mp->m_perag_tree, GFP_ATOMIC);
+ xa_init(&mp->m_perag_xa);
error = xfs_initialize_perag(mp, sbp->sb_agcount, &mp->m_maxagi);
if (error) {
xfs_warn(mp, "Failed per-ag init: %d", error);
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index e0792d036be2..6e5ad7b26f46 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -134,8 +134,7 @@ typedef struct xfs_mount {
xfs_extlen_t m_ag_prealloc_blocks; /* reserved ag blocks */
uint m_alloc_set_aside; /* space we can't use */
uint m_ag_max_usable; /* max space per AG */
- struct radix_tree_root m_perag_tree; /* per-ag accounting info */
- spinlock_t m_perag_lock; /* lock for m_perag_tree */
+ struct xarray m_perag_xa; /* per-ag accounting info */
struct mutex m_growlock; /* growfs mutex */
int m_fixedfsid[2]; /* unchanged for life of FS */
uint m_dmevmask; /* DMI events for this FS */
--
2.15.1


2018-01-17 20:39:32

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 68/99] vmalloc: Convert to XArray

From: Matthew Wilcox <[email protected]>

The radix tree of vmap blocks is simpler to express as an XArray.
Saves a couple of hundred bytes of text and eliminates a user of the
radix tree preload API.
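
xa_store() allocates internally and reports failure through its return
value, so the preload/insert/BUG_ON() sequence becomes one call plus an
error check. A sketch of the convention, with names from the patch
below:

    void *old;

    old = xa_store(&vmap_block_tree, vb_idx, vb, gfp_mask);
    if (xa_is_err(old)) {       /* failure is encoded in the entry */
        kfree(vb);
        free_vmap_area(va);
        return ERR_PTR(xa_err(old));    /* decode to -ENOMEM etc. */
    }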

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/vmalloc.c | 39 +++++++++++++--------------------------
1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 673942094328..b6c138633592 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -23,7 +23,7 @@
#include <linux/list.h>
#include <linux/notifier.h>
#include <linux/rbtree.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/rcupdate.h>
#include <linux/pfn.h>
#include <linux/kmemleak.h>
@@ -821,12 +821,11 @@ struct vmap_block {
static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);

/*
- * Radix tree of vmap blocks, indexed by address, to quickly find a vmap block
+ * XArray of vmap blocks, indexed by address, to quickly find a vmap block
* in the free path. Could get rid of this if we change the API to return a
* "cookie" from alloc, to be passed to free. But no big deal yet.
*/
-static DEFINE_SPINLOCK(vmap_block_tree_lock);
-static RADIX_TREE(vmap_block_tree, GFP_ATOMIC);
+static DEFINE_XARRAY(vmap_block_tree);

/*
* We should probably have a fallback mechanism to allocate virtual memory
@@ -865,8 +864,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
struct vmap_block *vb;
struct vmap_area *va;
unsigned long vb_idx;
- int node, err;
- void *vaddr;
+ int node;
+ void *ret, *vaddr;

node = numa_node_id();

@@ -883,13 +882,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
return ERR_CAST(va);
}

- err = radix_tree_preload(gfp_mask);
- if (unlikely(err)) {
- kfree(vb);
- free_vmap_area(va);
- return ERR_PTR(err);
- }
-
vaddr = vmap_block_vaddr(va->va_start, 0);
spin_lock_init(&vb->lock);
vb->va = va;
@@ -902,11 +894,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
INIT_LIST_HEAD(&vb->free_list);

vb_idx = addr_to_vb_idx(va->va_start);
- spin_lock(&vmap_block_tree_lock);
- err = radix_tree_insert(&vmap_block_tree, vb_idx, vb);
- spin_unlock(&vmap_block_tree_lock);
- BUG_ON(err);
- radix_tree_preload_end();
+ ret = xa_store(&vmap_block_tree, vb_idx, vb, gfp_mask);
+ if (xa_is_err(ret)) {
+ kfree(vb);
+ free_vmap_area(va);
+ return ERR_PTR(xa_err(ret));
+ }

vbq = &get_cpu_var(vmap_block_queue);
spin_lock(&vbq->lock);
@@ -923,9 +916,7 @@ static void free_vmap_block(struct vmap_block *vb)
unsigned long vb_idx;

vb_idx = addr_to_vb_idx(vb->va->va_start);
- spin_lock(&vmap_block_tree_lock);
- tmp = radix_tree_delete(&vmap_block_tree, vb_idx);
- spin_unlock(&vmap_block_tree_lock);
+ tmp = xa_erase(&vmap_block_tree, vb_idx);
BUG_ON(tmp != vb);

free_vmap_area_noflush(vb->va);
@@ -1031,7 +1022,6 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
static void vb_free(const void *addr, unsigned long size)
{
unsigned long offset;
- unsigned long vb_idx;
unsigned int order;
struct vmap_block *vb;

@@ -1045,10 +1035,7 @@ static void vb_free(const void *addr, unsigned long size)
offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
offset >>= PAGE_SHIFT;

- vb_idx = addr_to_vb_idx((unsigned long)addr);
- rcu_read_lock();
- vb = radix_tree_lookup(&vmap_block_tree, vb_idx);
- rcu_read_unlock();
+ vb = xa_load(&vmap_block_tree, addr_to_vb_idx((unsigned long)addr));
BUG_ON(!vb);

vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
--
2.15.1


2018-01-17 20:39:32

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 71/99] xfs: Convert pag_ici_root to XArray

From: Matthew Wilcox <[email protected]>

Rename pag_ici_root to pag_ici_xa and use XArray APIs instead of radix
tree APIs. The result is shorter code, typechecking on tag numbers,
better error checking in xfs_reclaim_inode(), and the elimination of a
call to radix_tree_preload().
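
The better error checking comes from the return values: xa_insert()
reports -EEXIST directly, and __xa_erase() returns the entry it
removed, so a stale or missing entry is caught on the spot. A sketch
of the reclaim-side check (agino stands in for the computed index):

    xa_lock(&pag->pag_ici_xa);
    if (__xa_erase(&pag->pag_ici_xa, agino) != ip)
        ASSERT(0);    /* inode was never in the cache: lifetime bug */
    xfs_perag_clear_reclaim_tag(pag);
    xa_unlock(&pag->pag_ici_xa);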

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/xfs/libxfs/xfs_sb.c | 2 +-
fs/xfs/libxfs/xfs_sb.h | 2 +-
fs/xfs/xfs_icache.c | 111 +++++++++++++++++++------------------------------
fs/xfs/xfs_icache.h | 5 +--
fs/xfs/xfs_inode.c | 24 ++++-------
fs/xfs/xfs_mount.c | 3 +-
fs/xfs/xfs_mount.h | 3 +-
7 files changed, 56 insertions(+), 94 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index 3b0b65eb8224..8fb7c216c761 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -76,7 +76,7 @@ struct xfs_perag *
xfs_perag_get_tag(
struct xfs_mount *mp,
xfs_agnumber_t first,
- int tag)
+ xa_tag_t tag)
{
XA_STATE(xas, &mp->m_perag_xa, first);
struct xfs_perag *pag;
diff --git a/fs/xfs/libxfs/xfs_sb.h b/fs/xfs/libxfs/xfs_sb.h
index 961e6475a309..d2de90b8f39c 100644
--- a/fs/xfs/libxfs/xfs_sb.h
+++ b/fs/xfs/libxfs/xfs_sb.h
@@ -23,7 +23,7 @@
*/
extern struct xfs_perag *xfs_perag_get(struct xfs_mount *, xfs_agnumber_t);
extern struct xfs_perag *xfs_perag_get_tag(struct xfs_mount *, xfs_agnumber_t,
- int tag);
+ xa_tag_t tag);
extern void xfs_perag_put(struct xfs_perag *pag);
extern int xfs_initialize_perag_data(struct xfs_mount *, xfs_agnumber_t);

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 65a8b91b2e70..10c76209227b 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -186,7 +186,7 @@ xfs_perag_set_reclaim_tag(
{
struct xfs_mount *mp = pag->pag_mount;

- lockdep_assert_held(&pag->pag_ici_lock);
+ lockdep_assert_held(&pag->pag_ici_xa.xa_lock);
if (pag->pag_ici_reclaimable++)
return;

@@ -205,7 +205,7 @@ xfs_perag_clear_reclaim_tag(
{
struct xfs_mount *mp = pag->pag_mount;

- lockdep_assert_held(&pag->pag_ici_lock);
+ lockdep_assert_held(&pag->pag_ici_xa.xa_lock);
if (--pag->pag_ici_reclaimable)
return;

@@ -228,16 +228,16 @@ xfs_inode_set_reclaim_tag(
struct xfs_perag *pag;

pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
- spin_lock(&pag->pag_ici_lock);
+ xa_lock(&pag->pag_ici_xa);
spin_lock(&ip->i_flags_lock);

- radix_tree_tag_set(&pag->pag_ici_root, XFS_INO_TO_AGINO(mp, ip->i_ino),
+ __xa_set_tag(&pag->pag_ici_xa, XFS_INO_TO_AGINO(mp, ip->i_ino),
XFS_ICI_RECLAIM_TAG);
xfs_perag_set_reclaim_tag(pag);
__xfs_iflags_set(ip, XFS_IRECLAIMABLE);

spin_unlock(&ip->i_flags_lock);
- spin_unlock(&pag->pag_ici_lock);
+ xa_unlock(&pag->pag_ici_xa);
xfs_perag_put(pag);
}

@@ -246,7 +246,7 @@ xfs_inode_clear_reclaim_tag(
struct xfs_perag *pag,
xfs_ino_t ino)
{
- radix_tree_tag_clear(&pag->pag_ici_root,
+ __xa_clear_tag(&pag->pag_ici_xa,
XFS_INO_TO_AGINO(pag->pag_mount, ino),
XFS_ICI_RECLAIM_TAG);
xfs_perag_clear_reclaim_tag(pag);
@@ -367,8 +367,8 @@ xfs_iget_cache_hit(
/*
* We need to set XFS_IRECLAIM to prevent xfs_reclaim_inode
* from stomping over us while we recycle the inode. We can't
- * clear the radix tree reclaimable tag yet as it requires
- * pag_ici_lock to be held exclusive.
+ * clear the xarray reclaimable tag yet as it requires
+ * pag_ici_xa.xa_lock to be held exclusive.
*/
ip->i_flags |= XFS_IRECLAIM;

@@ -393,7 +393,7 @@ xfs_iget_cache_hit(
goto out_error;
}

- spin_lock(&pag->pag_ici_lock);
+ xa_lock(&pag->pag_ici_xa);
spin_lock(&ip->i_flags_lock);

/*
@@ -410,7 +410,7 @@ xfs_iget_cache_hit(
init_rwsem(&inode->i_rwsem);

spin_unlock(&ip->i_flags_lock);
- spin_unlock(&pag->pag_ici_lock);
+ xa_unlock(&pag->pag_ici_xa);
} else {
/* If the VFS inode is being torn down, pause and try again. */
if (!igrab(inode)) {
@@ -471,17 +471,6 @@ xfs_iget_cache_miss(
goto out_destroy;
}

- /*
- * Preload the radix tree so we can insert safely under the
- * write spinlock. Note that we cannot sleep inside the preload
- * region. Since we can be called from transaction context, don't
- * recurse into the file system.
- */
- if (radix_tree_preload(GFP_NOFS)) {
- error = -EAGAIN;
- goto out_destroy;
- }
-
/*
* Because the inode hasn't been added to the radix-tree yet it can't
* be found by another thread, so we can do the non-sleeping lock here.
@@ -509,23 +498,18 @@ xfs_iget_cache_miss(
xfs_iflags_set(ip, iflags);

/* insert the new inode */
- spin_lock(&pag->pag_ici_lock);
- error = radix_tree_insert(&pag->pag_ici_root, agino, ip);
- if (unlikely(error)) {
- WARN_ON(error != -EEXIST);
- XFS_STATS_INC(mp, xs_ig_dup);
- error = -EAGAIN;
- goto out_preload_end;
- }
- spin_unlock(&pag->pag_ici_lock);
- radix_tree_preload_end();
+ error = xa_insert(&pag->pag_ici_xa, agino, ip, GFP_NOFS);
+ if (error)
+ goto out_unlock;

*ipp = ip;
return 0;

-out_preload_end:
- spin_unlock(&pag->pag_ici_lock);
- radix_tree_preload_end();
+out_unlock:
+ if (error == -EEXIST) {
+ error = -EAGAIN;
+ XFS_STATS_INC(mp, xs_ig_dup);
+ }
if (lock_flags)
xfs_iunlock(ip, lock_flags);
out_destroy:
@@ -592,7 +576,7 @@ xfs_iget(
again:
error = 0;
rcu_read_lock();
- ip = radix_tree_lookup(&pag->pag_ici_root, agino);
+ ip = xa_load(&pag->pag_ici_xa, agino);

if (ip) {
error = xfs_iget_cache_hit(pag, ip, ino, flags, lock_flags);
@@ -731,7 +715,7 @@ xfs_inode_ag_walk(
void *args),
int flags,
void *args,
- int tag,
+ xa_tag_t tag,
int iter_flags)
{
uint32_t first_index;
@@ -752,15 +736,8 @@ xfs_inode_ag_walk(

rcu_read_lock();

- if (tag == -1)
- nr_found = radix_tree_gang_lookup(&pag->pag_ici_root,
- (void **)batch, first_index,
- XFS_LOOKUP_BATCH);
- else
- nr_found = radix_tree_gang_lookup_tag(
- &pag->pag_ici_root,
- (void **) batch, first_index,
- XFS_LOOKUP_BATCH, tag);
+ nr_found = xa_extract(&pag->pag_ici_xa, (void **)batch,
+ first_index, ULONG_MAX, XFS_LOOKUP_BATCH, tag);

if (!nr_found) {
rcu_read_unlock();
@@ -896,8 +873,8 @@ xfs_inode_ag_iterator_flags(
ag = 0;
while ((pag = xfs_perag_get(mp, ag))) {
ag = pag->pag_agno + 1;
- error = xfs_inode_ag_walk(mp, pag, execute, flags, args, -1,
- iter_flags);
+ error = xfs_inode_ag_walk(mp, pag, execute, flags, args,
+ XFS_ICI_ALL, iter_flags);
xfs_perag_put(pag);
if (error) {
last_error = error;
@@ -926,7 +903,7 @@ xfs_inode_ag_iterator_tag(
void *args),
int flags,
void *args,
- int tag)
+ xa_tag_t tag)
{
struct xfs_perag *pag;
int error = 0;
@@ -1040,7 +1017,7 @@ xfs_reclaim_inode(
int sync_mode)
{
struct xfs_buf *bp = NULL;
- xfs_ino_t ino = ip->i_ino; /* for radix_tree_delete */
+ xfs_ino_t ino = ip->i_ino;
int error;

restart:
@@ -1128,16 +1105,15 @@ xfs_reclaim_inode(
/*
* Remove the inode from the per-AG radix tree.
*
- * Because radix_tree_delete won't complain even if the item was never
- * added to the tree assert that it's been there before to catch
- * problems with the inode life time early on.
+ * Check that it was there before to catch problems with the
+ * inode lifetime early on.
*/
- spin_lock(&pag->pag_ici_lock);
- if (!radix_tree_delete(&pag->pag_ici_root,
- XFS_INO_TO_AGINO(ip->i_mount, ino)))
+ xa_lock(&pag->pag_ici_xa);
+ if (__xa_erase(&pag->pag_ici_xa,
+ XFS_INO_TO_AGINO(ip->i_mount, ino)) != ip)
ASSERT(0);
xfs_perag_clear_reclaim_tag(pag);
- spin_unlock(&pag->pag_ici_lock);
+ xa_unlock(&pag->pag_ici_xa);

/*
* Here we do an (almost) spurious inode lock in order to coordinate
@@ -1213,10 +1189,9 @@ xfs_reclaim_inodes_ag(
int i;

rcu_read_lock();
- nr_found = radix_tree_gang_lookup_tag(
- &pag->pag_ici_root,
+ nr_found = xa_extract(&pag->pag_ici_xa,
(void **)batch, first_index,
- XFS_LOOKUP_BATCH,
+ ULONG_MAX, XFS_LOOKUP_BATCH,
XFS_ICI_RECLAIM_TAG);
if (!nr_found) {
done = 1;
@@ -1450,7 +1425,7 @@ __xfs_icache_free_eofblocks(
struct xfs_eofblocks *eofb,
int (*execute)(struct xfs_inode *ip, int flags,
void *args),
- int tag)
+ xa_tag_t tag)
{
int flags = SYNC_TRYLOCK;

@@ -1561,10 +1536,10 @@ __xfs_inode_set_blocks_tag(
spin_unlock(&ip->i_flags_lock);

pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
- spin_lock(&pag->pag_ici_lock);
+ xa_lock(&pag->pag_ici_xa);

- tagged = radix_tree_tagged(&pag->pag_ici_root, tag);
- radix_tree_tag_set(&pag->pag_ici_root,
+ tagged = xa_tagged(&pag->pag_ici_xa, tag);
+ __xa_set_tag(&pag->pag_ici_xa,
XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino), tag);
if (!tagged) {
/* propagate the eofblocks tag up into the perag radix tree */
@@ -1578,7 +1553,7 @@ __xfs_inode_set_blocks_tag(
set_tp(ip->i_mount, pag->pag_agno, -1, _RET_IP_);
}

- spin_unlock(&pag->pag_ici_lock);
+ xa_unlock(&pag->pag_ici_xa);
xfs_perag_put(pag);
}

@@ -1607,11 +1582,11 @@ __xfs_inode_clear_blocks_tag(
spin_unlock(&ip->i_flags_lock);

pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
- spin_lock(&pag->pag_ici_lock);
+ xa_lock(&pag->pag_ici_xa);

- radix_tree_tag_clear(&pag->pag_ici_root,
+ __xa_clear_tag(&pag->pag_ici_xa,
XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino), tag);
- if (!radix_tree_tagged(&pag->pag_ici_root, tag)) {
+ if (!xa_tagged(&pag->pag_ici_xa, tag)) {
/* clear the eofblocks tag from the perag radix tree */
xa_clear_tag(&ip->i_mount->m_perag_xa,
XFS_INO_TO_AGNO(ip->i_mount, ip->i_ino),
@@ -1619,7 +1594,7 @@ __xfs_inode_clear_blocks_tag(
clear_tp(ip->i_mount, pag->pag_agno, -1, _RET_IP_);
}

- spin_unlock(&pag->pag_ici_lock);
+ xa_unlock(&pag->pag_ici_xa);
xfs_perag_put(pag);
}

diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h
index dfbf13b530bc..80e2b1aed973 100644
--- a/fs/xfs/xfs_icache.h
+++ b/fs/xfs/xfs_icache.h
@@ -35,8 +35,7 @@ struct xfs_eofblocks {
/*
* tags for inode radix tree
*/
-#define XFS_ICI_NO_TAG (-1) /* special flag for an untagged lookup
- in xfs_inode_ag_iterator */
+#define XFS_ICI_ALL XA_PRESENT /* all inodes */
#define XFS_ICI_RECLAIM_TAG XA_TAG_0 /* inode is to be reclaimed */
#define XFS_ICI_EOFBLOCKS_TAG XA_TAG_1 /* inode has blocks beyond EOF */
#define XFS_ICI_COWBLOCKS_TAG XA_TAG_2 /* inode can have cow blocks to gc */
@@ -91,7 +90,7 @@ int xfs_inode_ag_iterator_flags(struct xfs_mount *mp,
int flags, void *args, int iter_flags);
int xfs_inode_ag_iterator_tag(struct xfs_mount *mp,
int (*execute)(struct xfs_inode *ip, int flags, void *args),
- int flags, void *args, int tag);
+ int flags, void *args, xa_tag_t tag);

static inline int
xfs_fs_eofblocks_from_user(
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 6f95bdb408ce..3421edc49ca0 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -2300,7 +2300,7 @@ xfs_ifree_cluster(
for (i = 0; i < inodes_per_cluster; i++) {
retry:
rcu_read_lock();
- ip = radix_tree_lookup(&pag->pag_ici_root,
+ ip = xa_load(&pag->pag_ici_xa,
XFS_INO_TO_AGINO(mp, (inum + i)));

/* Inode not in memory, nothing to do */
@@ -3198,7 +3198,7 @@ xfs_iflush_cluster(
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_perag *pag;
- unsigned long first_index, mask;
+ unsigned long first_index, last_index, mask;
unsigned long inodes_per_cluster;
int cilist_size;
struct xfs_inode **cilist;
@@ -3216,12 +3216,12 @@ xfs_iflush_cluster(
if (!cilist)
goto out_put;

- mask = ~(((mp->m_inode_cluster_size >> mp->m_sb.sb_inodelog)) - 1);
- first_index = XFS_INO_TO_AGINO(mp, ip->i_ino) & mask;
+ mask = (((mp->m_inode_cluster_size >> mp->m_sb.sb_inodelog)) - 1);
+ first_index = XFS_INO_TO_AGINO(mp, ip->i_ino) & ~mask;
+ last_index = first_index | mask;
rcu_read_lock();
- /* really need a gang lookup range call here */
- nr_found = radix_tree_gang_lookup(&pag->pag_ici_root, (void**)cilist,
- first_index, inodes_per_cluster);
+ nr_found = xa_extract(&pag->pag_ici_xa, (void**)cilist, first_index,
+ last_index, inodes_per_cluster, XA_PRESENT);
if (nr_found == 0)
goto out_free;

@@ -3242,16 +3242,6 @@ xfs_iflush_cluster(
spin_unlock(&cip->i_flags_lock);
continue;
}
-
- /*
- * Once we fall off the end of the cluster, no point checking
- * any more inodes in the list because they will also all be
- * outside the cluster.
- */
- if ((XFS_INO_TO_AGINO(mp, cip->i_ino) & mask) != first_index) {
- spin_unlock(&cip->i_flags_lock);
- break;
- }
spin_unlock(&cip->i_flags_lock);

/*
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 0541aeb8449c..fc517e424fae 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -210,9 +210,8 @@ xfs_initialize_perag(
goto out_unwind_new_pags;
pag->pag_agno = index;
pag->pag_mount = mp;
- spin_lock_init(&pag->pag_ici_lock);
mutex_init(&pag->pag_ici_reclaim_lock);
- INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC);
+ xa_init(&pag->pag_ici_xa);
if (xfs_buf_hash_init(pag))
goto out_free_pag;
init_waitqueue_head(&pag->pagb_wait);
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 6e5ad7b26f46..ab0f706d2fd7 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -374,8 +374,7 @@ typedef struct xfs_perag {

atomic_t pagf_fstrms; /* # of filestreams active in this AG */

- spinlock_t pag_ici_lock; /* incore inode cache lock */
- struct radix_tree_root pag_ici_root; /* incore inode cache root */
+ struct xarray pag_ici_xa; /* incore inode cache */
int pag_ici_reclaimable; /* reclaimable inodes */
struct mutex pag_ici_reclaim_lock; /* serialisation point */
unsigned long pag_ici_reclaim_cursor; /* reclaim restart point */
--
2.15.1


2018-01-17 20:40:16

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 26/99] page cache: Convert page cache lookups to XArray

From: Matthew Wilcox <[email protected]>

Introduce page_cache_pin() to factor out the common logic between the
various lookup routines:

find_get_entry
find_get_entries
find_get_pages_range
find_get_pages_contig
find_get_pages_range_tag
find_get_entries_tag
filemap_map_pages

By using the xa_state to control the iteration, we can remove most of
the gotos and just use the normal break/continue loop control flow.
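
The resulting loop shape, shared by all the converted functions, looks
roughly like this (a sketch; page_cache_pin() is introduced by this
patch):

    XA_STATE(xas, &mapping->pages, start);
    struct page *page;

    rcu_read_lock();
    xas_for_each(&xas, page, ULONG_MAX) {
        if (xas_retry(&xas, page))
            continue;    /* raced with a store; state was reset */
        if (xa_is_value(page))
            continue;    /* shadow/swap/DAX entry */
        if (!page_cache_pin(&xas, page))
            continue;    /* pin failed; page_cache_pin() reset the state */
        /* ... use the pinned page ... */
    }
    rcu_read_unlock();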

Also convert the regression1 read-side to XArray since that simulates
the functions being modified here.

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/pagemap.h | 6 +-
mm/filemap.c | 380 +++++++++------------------------
tools/testing/radix-tree/regression1.c | 68 +++---
3 files changed, 129 insertions(+), 325 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34d4fa3ad1c5..1a59f4a5424a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -365,17 +365,17 @@ static inline unsigned find_get_pages(struct address_space *mapping,
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
unsigned int nr_pages, struct page **pages);
unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
- pgoff_t end, int tag, unsigned int nr_pages,
+ pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
struct page **pages);
static inline unsigned find_get_pages_tag(struct address_space *mapping,
- pgoff_t *index, int tag, unsigned int nr_pages,
+ pgoff_t *index, xa_tag_t tag, unsigned int nr_pages,
struct page **pages)
{
return find_get_pages_range_tag(mapping, index, (pgoff_t)-1, tag,
nr_pages, pages);
}
unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
- int tag, unsigned int nr_entries,
+ xa_tag_t tag, unsigned int nr_entries,
struct page **entries, pgoff_t *indices);

struct page *grab_cache_page_write_begin(struct address_space *mapping,
diff --git a/mm/filemap.c b/mm/filemap.c
index ed30d5310e50..317a89df1945 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1401,6 +1401,32 @@ bool page_cache_range_empty(struct address_space *mapping, pgoff_t index,
}
EXPORT_SYMBOL_GPL(page_cache_range_empty);

+/*
+ * page_cache_pin() - Try to pin a page in the page cache.
+ * @xas: The XArray operation state.
+ * @pagep: The page which has been previously found at this location.
+ *
+ * On success, the page has an elevated refcount, but is not locked.
+ * This implements the lockless pagecache protocol as described in
+ * include/linux/pagemap.h; see page_cache_get_speculative().
+ *
+ * Return: True if the page is still in the cache.
+ */
+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+ struct page *head = compound_head(page);
+ bool got = page_cache_get_speculative(head);
+
+ if (likely(got && (xas_reload(xas) == page) &&
+ (compound_head(page) == head)))
+ return true;
+
+ if (got)
+ put_page(head);
+ xas_retry(xas, XA_RETRY_ENTRY);
+ return false;
+}
+
/**
* find_get_entry - find and get a page cache entry
* @mapping: the address_space to search
@@ -1416,51 +1442,21 @@ EXPORT_SYMBOL_GPL(page_cache_range_empty);
*/
struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
{
- void **pagep;
- struct page *head, *page;
+ XA_STATE(xas, &mapping->pages, offset);
+ struct page *page;

rcu_read_lock();
-repeat:
- page = NULL;
- pagep = radix_tree_lookup_slot(&mapping->pages, offset);
- if (pagep) {
- page = radix_tree_deref_slot(pagep);
- if (unlikely(!page))
- goto out;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page))
- goto repeat;
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Return
- * it without attempting to raise page count.
- */
- goto out;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
+ do {
+ page = xas_load(&xas);
+ if (xas_retry(&xas, page))
+ continue;
+ if (!page || xa_is_value(page))
+ break;
+ if (!page_cache_pin(&xas, page))
+ continue;
+ } while (0);

- /*
- * Has the page moved?
- * This is part of the lockless pagecache protocol. See
- * include/linux/pagemap.h for details.
- */
- if (unlikely(page != *pagep)) {
- put_page(head);
- goto repeat;
- }
- }
-out:
rcu_read_unlock();
-
return page;
}
EXPORT_SYMBOL(find_get_entry);
@@ -1487,7 +1483,7 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)

repeat:
page = find_get_entry(mapping, offset);
- if (page && !radix_tree_exception(page)) {
+ if (page && !xa_is_value(page)) {
lock_page(page);
/* Has the page been truncated? */
if (unlikely(page_mapping(page) != mapping)) {
@@ -1620,50 +1616,21 @@ unsigned find_get_entries(struct address_space *mapping,
pgoff_t start, unsigned int nr_entries,
struct page **entries, pgoff_t *indices)
{
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ struct page *page;
unsigned int ret = 0;
- struct radix_tree_iter iter;

if (!nr_entries)
return 0;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!xa_is_value(page) && !page_cache_pin(&xas, page))
continue;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page, a swap
- * entry from shmem/tmpfs or a DAX entry. Return it
- * without attempting to raise page count.
- */
- goto export;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }

- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
-export:
- indices[ret] = iter.index;
+ indices[ret] = xas.xa_index;
entries[ret] = page;
if (++ret == nr_entries)
break;
@@ -1697,56 +1664,26 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
pgoff_t end, unsigned int nr_pages,
struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, *start);
+ struct page *page;
unsigned ret = 0;

if (unlikely(!nr_pages))
return 0;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
- struct page *head, *page;
-
- if (iter.index > end)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each(&xas, page, end) {
+ if (xas_retry(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Skip
- * over it.
- */
+ /* Skip over shadow or swap entries */
+ if (xa_is_value(page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }

pages[ret] = page;
if (++ret == nr_pages) {
- *start = pages[ret - 1]->index + 1;
+ *start = page->index + 1;
goto out;
}
}
@@ -1754,7 +1691,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
/*
* We come here when there is no page beyond @end. We take care to not
* overflow the index @start as it confuses some of the callers. This
- * breaks the iteration when there is page at index -1 but that is
+ * breaks the iteration when there is a page at index -1 but that is
* already broken anyway.
*/
if (end == (pgoff_t)-1)
@@ -1782,57 +1719,28 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
unsigned int nr_pages, struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, index);
+ struct page *page;
unsigned int ret = 0;

if (unlikely(!nr_pages))
return 0;

rcu_read_lock();
- radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- /* The hole, there no reason to continue */
- if (unlikely(!page))
- break;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page,
- * or a swap entry from shmem/tmpfs. Stop
- * looking for contiguous pages.
- */
+ for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (xa_is_value(page))
break;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
+ if (!page_cache_pin(&xas, page))
+ continue;

/*
* must check mapping and index after taking the ref.
* otherwise we can get both false positives and false
* negatives, which is just confusing to the caller.
*/
- if (page->mapping == NULL || page_to_pgoff(page) != iter.index) {
+ if (!page->mapping || page_to_pgoff(page) != xas.xa_index) {
put_page(page);
break;
}
@@ -1859,74 +1767,42 @@ EXPORT_SYMBOL(find_get_pages_contig);
* @tag. We update @index to index the next page for the traversal.
*/
unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
- pgoff_t end, int tag, unsigned int nr_pages,
+ pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
struct page **pages)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, *index);
+ struct page *page;
unsigned ret = 0;

if (unlikely(!nr_pages))
return 0;

rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
- struct page *head, *page;
-
- if (iter.index > end)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each_tag(&xas, page, end, tag) {
+ if (xas_retry(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- /*
- * A shadow entry of a recently evicted page.
- *
- * Those entries should never be tagged, but
- * this tree walk is lockless and the tags are
- * looked up in bulk, one radix tree node at a
- * time, so there is a sizable window for page
- * reclaim to evict a page we saw tagged.
- *
- * Skip over it.
- */
+ /*
+ * Shadow entries should never be tagged, but this iteration
+ * is lockless so there is a window for page reclaim to evict
+ * a page we saw tagged. Skip over it.
+ */
+ if (xa_is_value(page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }

pages[ret] = page;
if (++ret == nr_pages) {
- *index = pages[ret - 1]->index + 1;
+ *index = page->index + 1;
goto out;
}
}

/*
- * We come here when we got at @end. We take care to not overflow the
+ * We come here when we got to @end. We take care to not overflow the
* index @index as it confuses some of the callers. This breaks the
- * iteration when there is page at index -1 but that is already broken
- * anyway.
+ * iteration when there is a page at index -1 but that is already
+ * broken anyway.
*/
if (end == (pgoff_t)-1)
*index = (pgoff_t)-1;
@@ -1952,54 +1828,24 @@ EXPORT_SYMBOL(find_get_pages_range_tag);
* @tag.
*/
unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
- int tag, unsigned int nr_entries,
+ xa_tag_t tag, unsigned int nr_entries,
struct page **entries, pgoff_t *indices)
{
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ struct page *page;
unsigned int ret = 0;
- struct radix_tree_iter iter;

if (!nr_entries)
return 0;

rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
- struct page *head, *page;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
+ xas_for_each_tag(&xas, page, ULONG_MAX, tag) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!xa_is_value(page) && !page_cache_pin(&xas, page))
continue;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
- /*
- * A shadow entry of a recently evicted page, a swap
- * entry from shmem/tmpfs or a DAX entry. Return it
- * without attempting to raise page count.
- */
- goto export;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }

- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
-export:
- indices[ret] = iter.index;
+ indices[ret] = xas.xa_index;
entries[ret] = page;
if (++ret == nr_entries)
break;
@@ -2608,45 +2454,21 @@ EXPORT_SYMBOL(filemap_fault);
void filemap_map_pages(struct vm_fault *vmf,
pgoff_t start_pgoff, pgoff_t end_pgoff)
{
- struct radix_tree_iter iter;
- void **slot;
struct file *file = vmf->vma->vm_file;
struct address_space *mapping = file->f_mapping;
pgoff_t last_pgoff = start_pgoff;
unsigned long max_idx;
- struct page *head, *page;
+ XA_STATE(xas, &mapping->pages, start_pgoff);
+ struct page *page;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
- if (iter.index > end_pgoff)
- break;
-repeat:
- page = radix_tree_deref_slot(slot);
- if (unlikely(!page))
- goto next;
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
+ xas_for_each(&xas, page, end_pgoff) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (xa_is_value(page))
goto next;
- }
-
- head = compound_head(page);
- if (!page_cache_get_speculative(head))
- goto repeat;
-
- /* The page was split under us? */
- if (compound_head(page) != head) {
- put_page(head);
- goto repeat;
- }
-
- /* Has the page moved? */
- if (unlikely(page != *slot)) {
- put_page(head);
- goto repeat;
- }
+ if (!page_cache_pin(&xas, page))
+ continue;

if (!PageUptodate(page) ||
PageReadahead(page) ||
@@ -2665,10 +2487,10 @@ void filemap_map_pages(struct vm_fault *vmf,
if (file->f_ra.mmap_miss > 0)
file->f_ra.mmap_miss--;

- vmf->address += (iter.index - last_pgoff) << PAGE_SHIFT;
+ vmf->address += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
if (vmf->pte)
- vmf->pte += iter.index - last_pgoff;
- last_pgoff = iter.index;
+ vmf->pte += xas.xa_index - last_pgoff;
+ last_pgoff = xas.xa_index;
if (alloc_set_pte(vmf, NULL, page))
goto unlock;
unlock_page(page);
@@ -2681,8 +2503,6 @@ void filemap_map_pages(struct vm_fault *vmf,
/* Huge page is mapped? No need to proceed. */
if (pmd_trans_huge(*vmf->pmd))
break;
- if (iter.index == end_pgoff)
- break;
}
rcu_read_unlock();
}
diff --git a/tools/testing/radix-tree/regression1.c b/tools/testing/radix-tree/regression1.c
index 0aece092f40e..008393906be5 100644
--- a/tools/testing/radix-tree/regression1.c
+++ b/tools/testing/radix-tree/regression1.c
@@ -58,7 +58,7 @@ static struct page *page_alloc(void)
struct page *p;
p = malloc(sizeof(struct page));
p->count = 1;
- p->index = 1;
+ p->index = (unsigned long)p;
pthread_mutex_init(&p->lock, NULL);

return p;
@@ -77,53 +77,37 @@ static void page_free(struct page *p)
call_rcu(&p->rcu, page_rcu_free);
}

+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+ pthread_mutex_lock(&page->lock);
+ if (!page->count) {
+ pthread_mutex_unlock(&page->lock);
+ goto fail;
+ }
+ /* don't actually update page refcount */
+ pthread_mutex_unlock(&page->lock);
+
+ /* Has the page moved? */
+ if (xas_reload(xas) == page)
+ return true;
+fail:
+ xas_retry(xas, XA_RETRY_ENTRY);
+ return false;
+}
+
static unsigned find_get_pages(unsigned long start,
unsigned int nr_pages, struct page **pages)
{
- unsigned int i;
- unsigned int ret;
- unsigned int nr_found;
+ XA_STATE(xas, &mt_tree, start);
+ struct page *page;
+ unsigned int ret = 0;

rcu_read_lock();
-restart:
- nr_found = radix_tree_gang_lookup_slot(&mt_tree,
- (void ***)pages, NULL, start, nr_pages);
- ret = 0;
- for (i = 0; i < nr_found; i++) {
- struct page *page;
-repeat:
- page = radix_tree_deref_slot((void **)pages[i]);
- if (unlikely(!page))
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xas_retry(&xas, page))
+ continue;
+ if (!page_cache_pin(&xas, page))
continue;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- /*
- * Transient condition which can only trigger
- * when entry at index 0 moves out of or back
- * to root: none yet gotten, safe to restart.
- */
- assert((start | i) == 0);
- goto restart;
- }
- /*
- * No exceptional entries are inserted in this test.
- */
- assert(0);
- }
-
- pthread_mutex_lock(&page->lock);
- if (!page->count) {
- pthread_mutex_unlock(&page->lock);
- goto repeat;
- }
- /* don't actually update page refcount */
- pthread_mutex_unlock(&page->lock);
-
- /* Has the page moved? */
- if (unlikely(page != *((void **)pages[i]))) {
- goto repeat;
- }

pages[ret] = page;
ret++;
--
2.15.1


2018-01-17 20:40:35

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 69/99] brd: Convert to XArray

From: Matthew Wilcox <[email protected]>

Convert brd_pages from a radix tree to an XArray. Simpler and smaller
code; in particular, another user of radix_tree_preload is eliminated.
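
The key is that xa_cmpxchg() allocates any nodes it needs internally and
stores the new page only if the slot is still empty, so the
preload/lock/insert/unlock sequence collapses to one call. A minimal
sketch of the pattern (condensed from the diff below):

        curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
        if (curr) {             /* lost a race, or allocation failed */
                __free_page(page);
                page = xa_err(curr) ? NULL : curr;
        }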

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/block/brd.c | 93 ++++++++++++++++-------------------------------------
1 file changed, 28 insertions(+), 65 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 8028a3a7e7fd..59a1af7aaa79 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -17,7 +17,7 @@
#include <linux/bio.h>
#include <linux/highmem.h>
#include <linux/mutex.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/backing-dev.h>
@@ -29,9 +29,9 @@
#define PAGE_SECTORS (1 << PAGE_SECTORS_SHIFT)

/*
- * Each block ramdisk device has a radix_tree brd_pages of pages that stores
- * the pages containing the block device's contents. A brd page's ->index is
- * its offset in PAGE_SIZE units. This is similar to, but in no way connected
+ * Each block ramdisk device has an xarray brd_pages that stores the pages
+ * containing the block device's contents. A brd page's ->index is its
+ * offset in PAGE_SIZE units. This is similar to, but in no way connected
* with, the kernel's pagecache or buffer cache (which sit above our block
* device).
*/
@@ -41,13 +41,7 @@ struct brd_device {
struct request_queue *brd_queue;
struct gendisk *brd_disk;
struct list_head brd_list;
-
- /*
- * Backing store of pages and lock to protect it. This is the contents
- * of the block device.
- */
- spinlock_t brd_lock;
- struct radix_tree_root brd_pages;
+ struct xarray brd_pages;
};

/*
@@ -62,17 +56,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
* The page lifetime is protected by the fact that we have opened the
* device node -- brd pages will never be deleted under us, so we
* don't need any further locking or refcounting.
- *
- * This is strictly true for the radix-tree nodes as well (ie. we
- * don't actually need the rcu_read_lock()), however that is not a
- * documented feature of the radix-tree API so it is better to be
- * safe here (we don't have total exclusion from radix tree updates
- * here, only deletes).
*/
- rcu_read_lock();
idx = sector >> PAGE_SECTORS_SHIFT; /* sector to page index */
- page = radix_tree_lookup(&brd->brd_pages, idx);
- rcu_read_unlock();
+ page = xa_load(&brd->brd_pages, idx);

BUG_ON(page && page->index != idx);

@@ -87,7 +73,7 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
{
pgoff_t idx;
- struct page *page;
+ struct page *curr, *page;
gfp_t gfp_flags;

page = brd_lookup_page(brd, sector);
@@ -108,62 +94,40 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
if (!page)
return NULL;

- if (radix_tree_preload(GFP_NOIO)) {
- __free_page(page);
- return NULL;
- }
-
- spin_lock(&brd->brd_lock);
idx = sector >> PAGE_SECTORS_SHIFT;
page->index = idx;
- if (radix_tree_insert(&brd->brd_pages, idx, page)) {
+ curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
+ if (curr) {
__free_page(page);
- page = radix_tree_lookup(&brd->brd_pages, idx);
- BUG_ON(!page);
- BUG_ON(page->index != idx);
+ if (xa_err(curr)) {
+ page = NULL;
+ } else {
+ page = curr;
+ BUG_ON(!page);
+ BUG_ON(page->index != idx);
+ }
}
- spin_unlock(&brd->brd_lock);
-
- radix_tree_preload_end();

return page;
}

/*
- * Free all backing store pages and radix tree. This must only be called when
+ * Free all backing store pages and xarray. This must only be called when
* there are no other users of the device.
*/
-#define FREE_BATCH 16
static void brd_free_pages(struct brd_device *brd)
{
- unsigned long pos = 0;
- struct page *pages[FREE_BATCH];
- int nr_pages;
-
- do {
- int i;
-
- nr_pages = radix_tree_gang_lookup(&brd->brd_pages,
- (void **)pages, pos, FREE_BATCH);
-
- for (i = 0; i < nr_pages; i++) {
- void *ret;
-
- BUG_ON(pages[i]->index < pos);
- pos = pages[i]->index;
- ret = radix_tree_delete(&brd->brd_pages, pos);
- BUG_ON(!ret || ret != pages[i]);
- __free_page(pages[i]);
- }
-
- pos++;
+ XA_STATE(xas, &brd->brd_pages, 0);
+ struct page *page;

- /*
- * This assumes radix_tree_gang_lookup always returns as
- * many pages as possible. If the radix-tree code changes,
- * so will this have to.
- */
- } while (nr_pages == FREE_BATCH);
+ /* lockdep can't know there are no other users */
+ xas_lock(&xas);
+ xas_for_each(&xas, page, ULONG_MAX) {
+ BUG_ON(page->index != xas.xa_index);
+ __free_page(page);
+ xas_store(&xas, NULL);
+ }
+ xas_unlock(&xas);
}

/*
@@ -373,8 +337,7 @@ static struct brd_device *brd_alloc(int i)
if (!brd)
goto out;
brd->brd_number = i;
- spin_lock_init(&brd->brd_lock);
- INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
+ xa_init(&brd->brd_pages);

brd->brd_queue = blk_alloc_queue(GFP_KERNEL);
if (!brd->brd_queue)
--
2.15.1


2018-01-17 20:41:22

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 67/99] mm: Convert cgroup writeback to XArray

From: Matthew Wilcox <[email protected]>

This is a fairly naive conversion, leaving in place the GFP_ATOMIC
allocation. By switching the locking around, we could use GFP_KERNEL
and probably simplify the error handling.
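
A sketch of what that GFP_KERNEL variant might look like, using the
xa_reserve()/xa_release() API from this series to preallocate the slot
before taking cgwb_lock (this is an assumption about one possible shape,
not code from this patch; the error label is hypothetical):

        ret = xa_reserve(&bdi->cgwb_xa, memcg_css->id, GFP_KERNEL);
        if (ret)
                goto err_put_wb;        /* hypothetical unwind path */

        spin_lock_irqsave(&cgwb_lock, flags);
        if (test_bit(WB_registered, &bdi->wb.state) &&
            blkcg_cgwb_list->next && memcg_cgwb_list->next) {
                /* the slot is preallocated, so this store cannot fail */
                xa_store(&bdi->cgwb_xa, memcg_css->id, wb, GFP_ATOMIC);
                list_add_tail_rcu(&wb->bdi_node, &bdi->wb_list);
        } else {
                xa_release(&bdi->cgwb_xa, memcg_css->id);
        }
        spin_unlock_irqrestore(&cgwb_lock, flags);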

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/backing-dev-defs.h | 2 +-
include/linux/backing-dev.h | 2 +-
mm/backing-dev.c | 22 ++++++++++------------
3 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index bfe86b54f6c1..074a54aad33c 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -187,7 +187,7 @@ struct backing_dev_info {
struct bdi_writeback wb; /* the root writeback info for this bdi */
struct list_head wb_list; /* list of all wbs */
#ifdef CONFIG_CGROUP_WRITEBACK
- struct radix_tree_root cgwb_tree; /* radix tree of active cgroup wbs */
+ struct xarray cgwb_xa; /* xarray of active cgroup wbs */
struct rb_root cgwb_congested_tree; /* their congested states */
#else
struct bdi_writeback_congested *wb_congested;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 3df0d20e23f3..27e7b31bd802 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -271,7 +271,7 @@ static inline struct bdi_writeback *wb_find_current(struct backing_dev_info *bdi
if (!memcg_css->parent)
return &bdi->wb;

- wb = radix_tree_lookup(&bdi->cgwb_tree, memcg_css->id);
+ wb = xa_load(&bdi->cgwb_xa, memcg_css->id);

/*
* %current's blkcg equals the effective blkcg of its memcg. No
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index b5f940ce0143..aa0f85df0928 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -417,8 +417,8 @@ static void wb_exit(struct bdi_writeback *wb)
#include <linux/memcontrol.h>

/*
- * cgwb_lock protects bdi->cgwb_tree, bdi->cgwb_congested_tree,
- * blkcg->cgwb_list, and memcg->cgwb_list. bdi->cgwb_tree is also RCU
+ * cgwb_lock protects bdi->cgwb_xa, bdi->cgwb_congested_tree,
+ * blkcg->cgwb_list, and memcg->cgwb_list. bdi->cgwb_xa is also RCU
* protected.
*/
static DEFINE_SPINLOCK(cgwb_lock);
@@ -539,7 +539,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
{
lockdep_assert_held(&cgwb_lock);

- WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
+ WARN_ON(xa_erase(&wb->bdi->cgwb_xa, wb->memcg_css->id) != wb);
list_del(&wb->memcg_node);
list_del(&wb->blkcg_node);
percpu_ref_kill(&wb->refcnt);
@@ -571,7 +571,7 @@ static int cgwb_create(struct backing_dev_info *bdi,

/* look up again under lock and discard on blkcg mismatch */
spin_lock_irqsave(&cgwb_lock, flags);
- wb = radix_tree_lookup(&bdi->cgwb_tree, memcg_css->id);
+ wb = xa_load(&bdi->cgwb_xa, memcg_css->id);
if (wb && wb->blkcg_css != blkcg_css) {
cgwb_kill(wb);
wb = NULL;
@@ -614,8 +614,7 @@ static int cgwb_create(struct backing_dev_info *bdi,
spin_lock_irqsave(&cgwb_lock, flags);
if (test_bit(WB_registered, &bdi->wb.state) &&
blkcg_cgwb_list->next && memcg_cgwb_list->next) {
- /* we might have raced another instance of this function */
- ret = radix_tree_insert(&bdi->cgwb_tree, memcg_css->id, wb);
+ ret = xa_insert(&bdi->cgwb_xa, memcg_css->id, wb, GFP_ATOMIC);
if (!ret) {
list_add_tail_rcu(&wb->bdi_node, &bdi->wb_list);
list_add(&wb->memcg_node, memcg_cgwb_list);
@@ -682,7 +681,7 @@ struct bdi_writeback *wb_get_create(struct backing_dev_info *bdi,

do {
rcu_read_lock();
- wb = radix_tree_lookup(&bdi->cgwb_tree, memcg_css->id);
+ wb = xa_load(&bdi->cgwb_xa, memcg_css->id);
if (wb) {
struct cgroup_subsys_state *blkcg_css;

@@ -704,7 +703,7 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)
{
int ret;

- INIT_RADIX_TREE(&bdi->cgwb_tree, GFP_ATOMIC);
+ xa_init(&bdi->cgwb_xa);
bdi->cgwb_congested_tree = RB_ROOT;

ret = wb_init(&bdi->wb, bdi, 1, GFP_KERNEL);
@@ -717,15 +716,14 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)

static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &bdi->cgwb_xa, 0);
struct bdi_writeback *wb;

WARN_ON(test_bit(WB_registered, &bdi->wb.state));

spin_lock_irq(&cgwb_lock);
- radix_tree_for_each_slot(slot, &bdi->cgwb_tree, &iter, 0)
- cgwb_kill(*slot);
+ xas_for_each(&xas, wb, ULONG_MAX)
+ cgwb_kill(wb);

while (!list_empty(&bdi->wb_list)) {
wb = list_first_entry(&bdi->wb_list, struct bdi_writeback,
--
2.15.1


2018-01-17 20:42:04

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 66/99] page cache: Finish XArray conversion

From: Matthew Wilcox <[email protected]>

With no more radix tree API users left, we can drop the GFP flags
and use xa_init_flags() instead of INIT_RADIX_TREE().
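
The change in initialisation, roughly (a sketch; the XArray declares
locking behaviour up front and takes allocation flags at each store
instead):

        /* Before: allocation flags fixed at initialisation time. */
        INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);

        /* After: only the locking class is declared here; GFP flags are
         * passed to each xa_store()/xa_insert() call instead. */
        xa_init_flags(&mapping->pages, XA_FLAGS_LOCK_IRQ);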

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/inode.c | 2 +-
include/linux/fs.h | 2 +-
mm/swap_state.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index c7b00573c10d..f5680b805336 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -348,7 +348,7 @@ EXPORT_SYMBOL(inc_nlink);
void address_space_init_once(struct address_space *mapping)
{
memset(mapping, 0, sizeof(*mapping));
- INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);
+ xa_init_flags(&mapping->pages, XA_FLAGS_LOCK_IRQ);
init_rwsem(&mapping->i_mmap_rwsem);
INIT_LIST_HEAD(&mapping->private_list);
spin_lock_init(&mapping->private_lock);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c58bc3c619bf..b459bf4ddb62 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -410,7 +410,7 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
*/
struct address_space {
struct inode *host;
- struct radix_tree_root pages;
+ struct xarray pages;
gfp_t gfp_mask;
atomic_t i_mmap_writable;
struct rb_root_cached i_mmap;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 219e3b4f09e6..25f027d0bb00 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -573,7 +573,7 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
return -ENOMEM;
for (i = 0; i < nr; i++) {
space = spaces + i;
- INIT_RADIX_TREE(&space->pages, GFP_ATOMIC|__GFP_NOWARN);
+ xa_init_flags(&space->pages, XA_FLAGS_LOCK_IRQ);
atomic_set(&space->i_mmap_writable, 0);
space->a_ops = &swap_aops;
/* swap cache doesn't use writeback related tags */
--
2.15.1


2018-01-17 20:42:32

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 64/99] dax: Convert grab_mapping_entry to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 98 +++++++++++++++++-----------------------------------------------
1 file changed, 26 insertions(+), 72 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 494e8fb7a98f..3eb0cf176d69 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -44,6 +44,7 @@

/* The 'colour' (ie low bits) within a PMD of a page offset. */
#define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1)
+#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)

static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];

@@ -89,10 +90,10 @@ static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
DAX_ENTRY_LOCK);
}

-static unsigned int dax_radix_order(void *entry)
+static unsigned int dax_entry_order(void *entry)
{
if (xa_to_value(entry) & DAX_PMD)
- return PMD_SHIFT - PAGE_SHIFT;
+ return PMD_ORDER;
return 0;
}

@@ -299,10 +300,11 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
{
XA_STATE(xas, &mapping->pages, index);
bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
- void *entry, **slot;
+ void *entry;

+ xas_set_order(&xas, index, size_flag ? PMD_ORDER : 0);
restart:
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
entry = get_unlocked_mapping_entry(&xas);

if (WARN_ON_ONCE(entry && !xa_is_value(entry))) {
@@ -326,84 +328,36 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
}
}

- /* No entry for given index? Make sure radix tree is big enough. */
- if (!entry || pmd_downgrade) {
- int err;
-
- if (pmd_downgrade) {
- /*
- * Make sure 'entry' remains valid while we drop
- * xa_lock.
- */
- entry = lock_slot(&xas);
- }
-
- xa_unlock_irq(&mapping->pages);
+ if (pmd_downgrade) {
+ entry = lock_slot(&xas);
/*
* Besides huge zero pages the only other thing that gets
* downgraded are empty entries which don't need to be
* unmapped.
*/
- if (pmd_downgrade && dax_is_zero_entry(entry))
+ if (dax_is_zero_entry(entry)) {
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
unmap_mapping_range(mapping,
(index << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
-
- err = radix_tree_preload(
- mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
- if (err) {
- if (pmd_downgrade)
- put_locked_mapping_entry(mapping, index);
- return ERR_PTR(err);
+ xas_lock_irq(&xas);
}
- xa_lock_irq(&mapping->pages);
-
- if (!entry) {
- /*
- * We needed to drop the pages lock while calling
- * radix_tree_preload() and we didn't have an entry to
- * lock. See if another thread inserted an entry at
- * our index during this time.
- */
- entry = __radix_tree_lookup(&mapping->pages, index,
- NULL, &slot);
- if (entry) {
- radix_tree_preload_end();
- xa_unlock_irq(&mapping->pages);
- goto restart;
- }
- }
-
- if (pmd_downgrade) {
- radix_tree_delete(&mapping->pages, index);
- mapping->nrexceptional--;
- dax_wake_entry(&xas, entry, true);
- }
-
+ xas_store(&xas, NULL);
+ mapping->nrexceptional--;
+ dax_wake_entry(&xas, entry, true);
+ }
+ if (!entry || pmd_downgrade) {
entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);
-
- err = __radix_tree_insert(&mapping->pages, index,
- dax_radix_order(entry), entry);
- radix_tree_preload_end();
- if (err) {
- xa_unlock_irq(&mapping->pages);
- /*
- * Our insertion of a DAX entry failed, most likely
- * because we were inserting a PMD entry and it
- * collided with a PTE sized entry at a different
- * index in the PMD range. We haven't inserted
- * anything into the radix tree and have no waiters to
- * wake.
- */
- return ERR_PTR(err);
- }
- /* Good, we have inserted empty locked entry into the tree. */
- mapping->nrexceptional++;
- xa_unlock_irq(&mapping->pages);
- return entry;
+ xas_store(&xas, entry);
+ if (!xas_error(&xas))
+ mapping->nrexceptional++;
+ } else {
+ entry = lock_slot(&xas);
}
- entry = lock_slot(&xas);
out_unlock:
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
+ if (xas_nomem(&xas, GFP_NOIO))
+ goto restart;
return entry;
}

@@ -682,7 +636,7 @@ static int dax_writeback_one(struct block_device *bdev,
* worry about partial PMD writebacks.
*/
sector = dax_radix_sector(entry);
- size = PAGE_SIZE << dax_radix_order(entry);
+ size = PAGE_SIZE << dax_entry_order(entry);

id = dax_read_lock();
ret = bdev_dax_pgoff(bdev, sector, size, &pgoff);
--
2.15.1


2018-01-17 20:43:19

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 62/99] dax: Convert dax_insert_pfn_mkwrite to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index b66b8c896ed8..e6b25ef112f2 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1497,21 +1497,21 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
void *entry;
int vmf_ret, error;

- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
entry = get_unlocked_mapping_entry(&xas);
/* Did we race with someone splitting entry or so? */
if (!entry ||
(pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
(pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
put_unlocked_mapping_entry(&xas, entry);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
return VM_FAULT_NOPAGE;
}
- radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);
entry = lock_slot(&xas);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
switch (pe_size) {
case PE_SIZE_PTE:
error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
--
2.15.1


2018-01-17 20:43:24

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 65/99] dax: Fix sparse warning

From: Matthew Wilcox <[email protected]>

sparse doesn't know that follow_pte_pmd() conditionally acquires the ptl,
because it's in a separate compilation unit. Move follow_pte_pmd() to
mm.h where sparse can see it.
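
For context, when building with sparse (__CHECKER__), __cond_lock() is
defined roughly as follows, which is why the annotation only works when
sparse can see the wrapper's body:

        # define __cond_lock(x, c)      ((c) ? ({ __acquire(x); 1; }) : 0)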

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/mm.h | 15 ++++++++++++++-
mm/memory.c | 16 +---------------
2 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fe1ee4313add..9c384c486edf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1314,7 +1314,7 @@ int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
struct vm_area_struct *vma);
void unmap_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen, int even_cows);
-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
unsigned long *start, unsigned long *end,
pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
int follow_pfn(struct vm_area_struct *vma, unsigned long address,
@@ -1324,6 +1324,19 @@ int follow_phys(struct vm_area_struct *vma, unsigned long address,
int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
void *buf, int len, int write);

+static inline int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+ unsigned long *start, unsigned long *end,
+ pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
+{
+ int res;
+
+ /* (void) is needed to make gcc happy */
+ (void) __cond_lock(*ptlp,
+ !(res = __follow_pte_pmd(mm, address, start, end,
+ ptepp, pmdpp, ptlp)));
+ return res;
+}
+
static inline void unmap_shared_mapping_range(struct address_space *mapping,
loff_t const holebegin, loff_t const holelen)
{
diff --git a/mm/memory.c b/mm/memory.c
index ca5674cbaff2..66184601ac03 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4201,7 +4201,7 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
}
#endif /* __PAGETABLE_PMD_FOLDED */

-static int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
+int __follow_pte_pmd(struct mm_struct *mm, unsigned long address,
unsigned long *start, unsigned long *end,
pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
{
@@ -4278,20 +4278,6 @@ static inline int follow_pte(struct mm_struct *mm, unsigned long address,
return res;
}

-int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
- unsigned long *start, unsigned long *end,
- pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
-{
- int res;
-
- /* (void) is needed to make gcc happy */
- (void) __cond_lock(*ptlp,
- !(res = __follow_pte_pmd(mm, address, start, end,
- ptepp, pmdpp, ptlp)));
- return res;
-}
-EXPORT_SYMBOL(follow_pte_pmd);
-
/**
* follow_pfn - look up PFN at a user virtual address
* @vma: memory mapping
--
2.15.1


2018-01-17 20:43:40

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 63/99] dax: Convert dax_insert_mapping_entry to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 18 ++++++------------
1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index e6b25ef112f2..494e8fb7a98f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -498,9 +498,9 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
void *entry, sector_t sector,
unsigned long flags, bool dirty)
{
- struct radix_tree_root *pages = &mapping->pages;
void *new_entry;
pgoff_t index = vmf->pgoff;
+ XA_STATE(xas, &mapping->pages, index);

if (dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -516,7 +516,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
PAGE_SIZE, 0);
}

- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
new_entry = dax_radix_locked_entry(sector, flags);

if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
@@ -528,21 +528,15 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
* existing entry is a PMD, we will just leave the PMD in the
* tree and dirty it if necessary.
*/
- struct radix_tree_node *node;
- void **slot;
- void *ret;
-
- ret = __radix_tree_lookup(pages, index, &node, &slot);
- WARN_ON_ONCE(ret != entry);
- __radix_tree_replace(pages, node, slot,
- new_entry, NULL);
+ void *prev = xas_store(&xas, new_entry);
+ WARN_ON_ONCE(prev != entry);
entry = new_entry;
}

if (dirty)
- radix_tree_tag_set(pages, index, PAGECACHE_TAG_DIRTY);
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);

- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return entry;
}

--
2.15.1


2018-01-17 20:44:12

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 25/99] page cache: Convert page deletion to XArray

From: Matthew Wilcox <[email protected]>

The code is slightly shorter and simpler.
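
The loop below stores one shadow entry per subpage of a huge page,
advancing the same xa_state each time; written as a for loop, the
equivalent sketch is:

        for (i = 0; i < nr; i++) {
                xas_store(&xas, shadow);
                xas_init_tags(&xas);
                if (i + 1 < nr)
                        xas_next(&xas);
        }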

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index e6371b551de1..ed30d5310e50 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,30 +112,28 @@
* ->tasklist_lock (memory_failure, collect_procs_ao)
*/

-static void page_cache_tree_delete(struct address_space *mapping,
+static void page_cache_delete(struct address_space *mapping,
struct page *page, void *shadow)
{
- int i, nr;
+ XA_STATE(xas, &mapping->pages, page->index);
+ unsigned int i, nr;

- /* hugetlb pages are represented by one entry in the radix tree */
+ mapping_set_update(&xas, mapping);
+
+ /* hugetlb pages are represented by a single entry in the xarray */
nr = PageHuge(page) ? 1 : hpage_nr_pages(page);

VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageTail(page), page);
VM_BUG_ON_PAGE(nr != 1 && shadow, page);

- for (i = 0; i < nr; i++) {
- struct radix_tree_node *node;
- void **slot;
-
- __radix_tree_lookup(&mapping->pages, page->index + i,
- &node, &slot);
-
- VM_BUG_ON_PAGE(!node && nr != 1, page);
-
- radix_tree_clear_tags(&mapping->pages, node, slot);
- __radix_tree_replace(&mapping->pages, node, slot, shadow,
- workingset_lookup_update(mapping));
+ i = nr;
+repeat:
+ xas_store(&xas, shadow);
+ xas_init_tags(&xas);
+ if (--i) {
+ xas_next(&xas);
+ goto repeat;
}

page->mapping = NULL;
@@ -235,7 +233,7 @@ void __delete_from_page_cache(struct page *page, void *shadow)
trace_mm_filemap_delete_from_page_cache(page);

unaccount_page_cache_page(mapping, page);
- page_cache_tree_delete(mapping, page, shadow);
+ page_cache_delete(mapping, page, shadow);
}

static void page_cache_free_page(struct address_space *mapping,
--
2.15.1


2018-01-17 20:44:24

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 61/99] dax: Convert dax_writeback_one to XArray

From: Matthew Wilcox <[email protected]>

Likewise an easy conversion.

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 9a30224da4d6..b66b8c896ed8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -632,8 +632,7 @@ static int dax_writeback_one(struct block_device *bdev,
struct dax_device *dax_dev, struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_root *pages = &mapping->pages;
- XA_STATE(xas, pages, index);
+ XA_STATE(xas, &mapping->pages, index);
void *entry2, *kaddr;
long ret = 0, id;
sector_t sector;
@@ -648,7 +647,7 @@ static int dax_writeback_one(struct block_device *bdev,
if (WARN_ON(!xa_is_value(entry)))
return -EIO;

- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
entry2 = get_unlocked_mapping_entry(&xas);
/* Entry got punched out / reallocated? */
if (!entry2 || WARN_ON_ONCE(!xa_is_value(entry2)))
@@ -667,7 +666,7 @@ static int dax_writeback_one(struct block_device *bdev,
}

/* Another fsync thread may have already written back this entry */
- if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
+ if (!xas_get_tag(&xas, PAGECACHE_TAG_TOWRITE))
goto put_unlocked;
/* Lock the entry to serialize with page faults */
entry = lock_slot(&xas);
@@ -678,8 +677,8 @@ static int dax_writeback_one(struct block_device *bdev,
* at the entry only under xa_lock and once they do that they will
* see the entry locked and wait for it to unlock.
*/
- radix_tree_tag_clear(pages, index, PAGECACHE_TAG_TOWRITE);
- xa_unlock_irq(&mapping->pages);
+ xas_clear_tag(&xas, PAGECACHE_TAG_TOWRITE);
+ xas_unlock_irq(&xas);

/*
* Even if dax_writeback_mapping_range() was given a wbc->range_start
@@ -717,9 +716,7 @@ static int dax_writeback_one(struct block_device *bdev,
* the pfn mappings are writeprotected and fault waits for mapping
* entry lock.
*/
- xa_lock_irq(&mapping->pages);
- radix_tree_tag_clear(pages, index, PAGECACHE_TAG_DIRTY);
- xa_unlock_irq(&mapping->pages);
+ xa_clear_tag(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
dax_unlock:
dax_read_unlock(id);
@@ -728,7 +725,7 @@ static int dax_writeback_one(struct block_device *bdev,

put_unlocked:
put_unlocked_mapping_entry(&xas, entry2);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return ret;
}

--
2.15.1


2018-01-17 20:44:28

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 60/99] dax: Convert __dax_invalidate_mapping_entry to XArray

From: Matthew Wilcox <[email protected]>

Simple now that we already have an xa_state!

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index d3fe61b95216..9a30224da4d6 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -413,24 +413,24 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
XA_STATE(xas, &mapping->pages, index);
int ret = 0;
void *entry;
- struct radix_tree_root *pages = &mapping->pages;

xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(&xas);
if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
goto out;
if (!trunc &&
- (radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
- radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE)))
+ (xas_get_tag(&xas, PAGECACHE_TAG_DIRTY) ||
+ xas_get_tag(&xas, PAGECACHE_TAG_TOWRITE)))
goto out;
- radix_tree_delete(pages, index);
+ xas_store(&xas, NULL);
mapping->nrexceptional--;
ret = 1;
out:
put_unlocked_mapping_entry(&xas, entry);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return ret;
}
+
/*
* Delete DAX entry at @index from @mapping. Wait for it
* to be unlocked before deleting it.
--
2.15.1


2018-01-17 20:45:43

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 57/99] dax: Convert dax_unlock_mapping_entry to XArray

From: Matthew Wilcox <[email protected]>

Replace slot_locked() with dax_locked() and inline unlock_slot() into
its only caller.
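
DAX entries are XArray value entries, so locked-ness lives in the entry
itself and can be tested without re-dereferencing a slot. A minimal
sketch of the two halves (matching the diff below):

        static bool dax_locked(void *entry)
        {
                return xa_to_value(entry) & DAX_ENTRY_LOCK;
        }

        /* unlocking becomes a plain store of the entry with the bit clear */
        entry = xa_mk_value(xa_to_value(entry) & ~DAX_ENTRY_LOCK);
        xas_store(&xas, entry);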

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 48 ++++++++++++++++--------------------------------
1 file changed, 16 insertions(+), 32 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5097a606da1a..f3463d93a6ce 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -73,6 +73,11 @@ fs_initcall(init_dax_wait_table);
#define DAX_ZERO_PAGE (1UL << 2)
#define DAX_EMPTY (1UL << 3)

+static bool dax_locked(void *entry)
+{
+ return xa_to_value(entry) & DAX_ENTRY_LOCK;
+}
+
static unsigned long dax_radix_sector(void *entry)
{
return xa_to_value(entry) >> DAX_SHIFT;
@@ -180,16 +185,6 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
}

-/*
- * Check whether the given slot is locked. Must be called with xa_lock held.
- */
-static inline int slot_locked(struct address_space *mapping, void **slot)
-{
- unsigned long entry = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
- return entry & DAX_ENTRY_LOCK;
-}
-
/*
* Mark the given slot as locked. Must be called with xa_lock held.
*/
@@ -202,18 +197,6 @@ static inline void *lock_slot(struct address_space *mapping, void **slot)
return entry;
}

-/*
- * Mark the given slot as unlocked. Must be called with xa_lock held.
- */
-static inline void *unlock_slot(struct address_space *mapping, void **slot)
-{
- unsigned long v = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
- void *entry = xa_mk_value(v & ~DAX_ENTRY_LOCK);
- radix_tree_replace_slot(&mapping->pages, slot, entry);
- return entry;
-}
-
/*
* Lookup entry in radix tree, wait for it to become unlocked if it is
* a DAX entry and return it. The caller must call
@@ -237,8 +220,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
entry = __radix_tree_lookup(&mapping->pages, index, NULL,
&slot);
if (!entry ||
- WARN_ON_ONCE(!xa_is_value(entry)) ||
- !slot_locked(mapping, slot)) {
+ WARN_ON_ONCE(!xa_is_value(entry)) || !dax_locked(entry)) {
if (slotp)
*slotp = slot;
return entry;
@@ -257,17 +239,19 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
static void dax_unlock_mapping_entry(struct address_space *mapping,
pgoff_t index)
{
- void *entry, **slot;
+ XA_STATE(xas, &mapping->pages, index);
+ void *entry;

- xa_lock_irq(&mapping->pages);
- entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
- if (WARN_ON_ONCE(!entry || !xa_is_value(entry) ||
- !slot_locked(mapping, slot))) {
- xa_unlock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
+ entry = xas_load(&xas);
+ if (WARN_ON_ONCE(!entry || !xa_is_value(entry) || !dax_locked(entry))) {
+ xas_unlock_irq(&xas);
return;
}
- unlock_slot(mapping, slot);
- xa_unlock_irq(&mapping->pages);
+ entry = xa_mk_value(xa_to_value(entry) & ~DAX_ENTRY_LOCK);
+ xas_store(&xas, entry);
+ /* Safe to not call xas_pause here -- we don't touch the array after */
+ xas_unlock_irq(&xas);
dax_wake_mapping_entry_waiter(mapping, index, entry, false);
}

--
2.15.1


2018-01-17 20:46:06

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 58/99] dax: Convert lock_slot to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index f3463d93a6ce..8eab0b56f7f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -188,12 +188,11 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
/*
* Mark the given slot as locked. Must be called with xa_lock held.
*/
-static inline void *lock_slot(struct address_space *mapping, void **slot)
+static inline void *lock_slot(struct xa_state *xas)
{
- unsigned long v = xa_to_value(
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ unsigned long v = xa_to_value(xas_load(xas));
void *entry = xa_mk_value(v | DAX_ENTRY_LOCK);
- radix_tree_replace_slot(&mapping->pages, slot, entry);
+ xas_store(xas, entry);
return entry;
}

@@ -244,7 +243,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,

xas_lock_irq(&xas);
entry = xas_load(&xas);
- if (WARN_ON_ONCE(!entry || !xa_is_value(entry) || !dax_locked(entry))) {
+ if (WARN_ON_ONCE(!xa_is_value(entry) || !dax_locked(entry))) {
xas_unlock_irq(&xas);
return;
}
@@ -303,6 +302,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
unsigned long size_flag)
{
+ XA_STATE(xas, &mapping->pages, index);
bool pmd_downgrade = false; /* splitting 2MiB entry into 4k entries? */
void *entry, **slot;

@@ -341,7 +341,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
* Make sure 'entry' remains valid while we drop
* xa_lock.
*/
- entry = lock_slot(mapping, slot);
+ entry = lock_slot(&xas);
}

xa_unlock_irq(&mapping->pages);
@@ -408,7 +408,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
xa_unlock_irq(&mapping->pages);
return entry;
}
- entry = lock_slot(mapping, slot);
+ entry = lock_slot(&xas);
out_unlock:
xa_unlock_irq(&mapping->pages);
return entry;
@@ -639,6 +639,7 @@ static int dax_writeback_one(struct block_device *bdev,
pgoff_t index, void *entry)
{
struct radix_tree_root *pages = &mapping->pages;
+ XA_STATE(xas, pages, index);
void *entry2, **slot, *kaddr;
long ret = 0, id;
sector_t sector;
@@ -675,7 +676,7 @@ static int dax_writeback_one(struct block_device *bdev,
if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
goto put_unlocked;
/* Lock the entry to serialize with page faults */
- entry = lock_slot(mapping, slot);
+ entry = lock_slot(&xas);
/*
* We can clear the tag now but we have to be careful so that concurrent
* dax_writeback_one() calls for the same index cannot finish before we
@@ -1500,8 +1501,9 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
pfn_t pfn)
{
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
- void *entry, **slot;
pgoff_t index = vmf->pgoff;
+ XA_STATE(xas, &mapping->pages, index);
+ void *entry, **slot;
int vmf_ret, error;

xa_lock_irq(&mapping->pages);
@@ -1517,7 +1519,7 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
return VM_FAULT_NOPAGE;
}
radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
- entry = lock_slot(mapping, slot);
+ entry = lock_slot(&xas);
xa_unlock_irq(&mapping->pages);
switch (pe_size) {
case PE_SIZE_PTE:
--
2.15.1


2018-01-17 20:46:41

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 59/99] dax: More XArray conversion

From: Matthew Wilcox <[email protected]>

This time, we want to convert get_unlocked_mapping_entry() to use the
XArray. That has a ripple effect, causing us to change the waitqueues
to hash on the address of the xarray rather than the address of the
mapping (functionally equivalent, since each mapping embeds exactly one
xarray), and to create a number of on-the-stack xa_states which serve
only as containers for passing the xarray and the index down to deeper
function calls.

Also rename dax_wake_mapping_entry_waiter() to dax_wake_entry().

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/dax.c | 72 +++++++++++++++++++++++++++++-----------------------------------
1 file changed, 33 insertions(+), 39 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 8eab0b56f7f9..d3fe61b95216 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -120,7 +120,7 @@ static int dax_is_empty_entry(void *entry)
* DAX radix tree locking
*/
struct exceptional_entry_key {
- struct address_space *mapping;
+ struct xarray *xa;
pgoff_t entry_start;
};

@@ -129,9 +129,10 @@ struct wait_exceptional_entry_queue {
struct exceptional_entry_key key;
};

-static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
- pgoff_t index, void *entry, struct exceptional_entry_key *key)
+static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
+ void *entry, struct exceptional_entry_key *key)
{
+ unsigned long index = xas->xa_index;
unsigned long hash;

/*
@@ -142,10 +143,10 @@ static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
if (dax_is_pmd_entry(entry))
index &= ~PG_PMD_COLOUR;

- key->mapping = mapping;
+ key->xa = xas->xa;
key->entry_start = index;

- hash = hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
+ hash = hash_long((unsigned long)xas->xa ^ index, DAX_WAIT_TABLE_BITS);
return wait_table + hash;
}

@@ -156,7 +157,7 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
struct wait_exceptional_entry_queue *ewait =
container_of(wait, struct wait_exceptional_entry_queue, wait);

- if (key->mapping != ewait->key.mapping ||
+ if (key->xa != ewait->key.xa ||
key->entry_start != ewait->key.entry_start)
return 0;
return autoremove_wake_function(wait, mode, sync, NULL);
@@ -167,13 +168,12 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
* The important information it's conveying is whether the entry at
* this index used to be a PMD entry.
*/
-static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
- pgoff_t index, void *entry, bool wake_all)
+static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all)
{
struct exceptional_entry_key key;
wait_queue_head_t *wq;

- wq = dax_entry_waitqueue(mapping, index, entry, &key);
+ wq = dax_entry_waitqueue(xas, entry, &key);

/*
* Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -205,10 +205,9 @@ static inline void *lock_slot(struct xa_state *xas)
*
* Must be called with xa_lock held.
*/
-static void *get_unlocked_mapping_entry(struct address_space *mapping,
- pgoff_t index, void ***slotp)
+static void *get_unlocked_mapping_entry(struct xa_state *xas)
{
- void *entry, **slot;
+ void *entry;
struct wait_exceptional_entry_queue ewait;
wait_queue_head_t *wq;

@@ -216,22 +215,19 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
ewait.wait.func = wake_exceptional_entry_func;

for (;;) {
- entry = __radix_tree_lookup(&mapping->pages, index, NULL,
- &slot);
- if (!entry ||
- WARN_ON_ONCE(!xa_is_value(entry)) || !dax_locked(entry)) {
- if (slotp)
- *slotp = slot;
+ entry = xas_load(xas);
+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)) ||
+ !dax_locked(entry))
return entry;
- }

- wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+ wq = dax_entry_waitqueue(xas, entry, &ewait.key);
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
- xa_unlock_irq(&mapping->pages);
+ xas_pause(xas);
+ xas_unlock_irq(xas);
schedule();
finish_wait(wq, &ewait.wait);
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(xas);
}
}

@@ -251,7 +247,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
xas_store(&xas, entry);
/* Safe to not call xas_pause here -- we don't touch the array after */
xas_unlock_irq(&xas);
- dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+ dax_wake_entry(&xas, entry, false);
}

static void put_locked_mapping_entry(struct address_space *mapping,
@@ -264,14 +260,13 @@ static void put_locked_mapping_entry(struct address_space *mapping,
* Called when we are done with radix tree entry we looked up via
* get_unlocked_mapping_entry() and which we didn't lock in the end.
*/
-static void put_unlocked_mapping_entry(struct address_space *mapping,
- pgoff_t index, void *entry)
+static void put_unlocked_mapping_entry(struct xa_state *xas, void *entry)
{
if (!entry)
return;

/* We have to wake up next waiter for the radix tree entry lock */
- dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+ dax_wake_entry(xas, entry, false);
}

/*
@@ -308,7 +303,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,

restart:
xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, &slot);
+ entry = get_unlocked_mapping_entry(&xas);

if (WARN_ON_ONCE(entry && !xa_is_value(entry))) {
entry = ERR_PTR(-EIO);
@@ -318,8 +313,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
if (entry) {
if (size_flag & DAX_PMD) {
if (dax_is_pte_entry(entry)) {
- put_unlocked_mapping_entry(mapping, index,
- entry);
+ put_unlocked_mapping_entry(&xas, entry);
entry = ERR_PTR(-EEXIST);
goto out_unlock;
}
@@ -382,8 +376,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
if (pmd_downgrade) {
radix_tree_delete(&mapping->pages, index);
mapping->nrexceptional--;
- dax_wake_mapping_entry_waiter(mapping, index, entry,
- true);
+ dax_wake_entry(&xas, entry, true);
}

entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);
@@ -417,12 +410,13 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
static int __dax_invalidate_mapping_entry(struct address_space *mapping,
pgoff_t index, bool trunc)
{
+ XA_STATE(xas, &mapping->pages, index);
int ret = 0;
void *entry;
struct radix_tree_root *pages = &mapping->pages;

xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, NULL);
+ entry = get_unlocked_mapping_entry(&xas);
if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
goto out;
if (!trunc &&
@@ -433,7 +427,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
mapping->nrexceptional--;
ret = 1;
out:
- put_unlocked_mapping_entry(mapping, index, entry);
+ put_unlocked_mapping_entry(&xas, entry);
xa_unlock_irq(&mapping->pages);
return ret;
}
@@ -640,7 +634,7 @@ static int dax_writeback_one(struct block_device *bdev,
{
struct radix_tree_root *pages = &mapping->pages;
XA_STATE(xas, pages, index);
- void *entry2, **slot, *kaddr;
+ void *entry2, *kaddr;
long ret = 0, id;
sector_t sector;
pgoff_t pgoff;
@@ -655,7 +649,7 @@ static int dax_writeback_one(struct block_device *bdev,
return -EIO;

xa_lock_irq(&mapping->pages);
- entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
+ entry2 = get_unlocked_mapping_entry(&xas);
/* Entry got punched out / reallocated? */
if (!entry2 || WARN_ON_ONCE(!xa_is_value(entry2)))
goto put_unlocked;
@@ -733,7 +727,7 @@ static int dax_writeback_one(struct block_device *bdev,
return ret;

put_unlocked:
- put_unlocked_mapping_entry(mapping, index, entry2);
+ put_unlocked_mapping_entry(&xas, entry2);
xa_unlock_irq(&mapping->pages);
return ret;
}
@@ -1503,16 +1497,16 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
pgoff_t index = vmf->pgoff;
XA_STATE(xas, &mapping->pages, index);
- void *entry, **slot;
+ void *entry;
int vmf_ret, error;

xa_lock_irq(&mapping->pages);
- entry = get_unlocked_mapping_entry(mapping, index, &slot);
+ entry = get_unlocked_mapping_entry(&xas);
/* Did we race with someone splitting entry or so? */
if (!entry ||
(pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
(pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
- put_unlocked_mapping_entry(mapping, index, entry);
+ put_unlocked_mapping_entry(&xas, entry);
xa_unlock_irq(&mapping->pages);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
--
2.15.1


2018-01-17 20:46:54

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 56/99] lustre: Convert to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
drivers/staging/lustre/lustre/llite/glimpse.c | 12 +++++-------
drivers/staging/lustre/lustre/mdc/mdc_request.c | 16 ++++++++--------
2 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index 5f2843da911c..25232fdf5797 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -57,7 +57,7 @@ static const struct cl_lock_descr whole_file = {
};

/*
- * Check whether file has possible unwriten pages.
+ * Check whether file has possible unwritten pages.
*
* \retval 1 file is mmap-ed or has dirty pages
* 0 otherwise
@@ -66,16 +66,14 @@ blkcnt_t dirty_cnt(struct inode *inode)
{
blkcnt_t cnt = 0;
struct vvp_object *vob = cl_inode2vvp(inode);
- void *results[1];

- if (inode->i_mapping)
- cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->pages,
- results, 0, 1,
- PAGECACHE_TAG_DIRTY);
+ if (inode->i_mapping && xa_tagged(&inode->i_mapping->pages,
+ PAGECACHE_TAG_DIRTY))
+ cnt = 1;
if (cnt == 0 && atomic_read(&vob->vob_mmap_cnt) > 0)
cnt = 1;

- return (cnt > 0) ? 1 : 0;
+ return cnt;
}

int cl_glimpse_lock(const struct lu_env *env, struct cl_io *io,
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 2ec79a6b17da..ea23247e9e02 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -934,17 +934,18 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
* hash _smaller_ than one we are looking for.
*/
unsigned long offset = hash_x_index(*hash, hash64);
+ XA_STATE(xas, &mapping->pages, offset);
struct page *page;
- int found;

- xa_lock_irq(&mapping->pages);
- found = radix_tree_gang_lookup(&mapping->pages,
- (void **)&page, offset, 1);
- if (found > 0 && !xa_is_value(page)) {
+ xas_lock_irq(&xas);
+ page = xas_find(&xas, ULONG_MAX);
+ if (xa_is_value(page))
+ page = NULL;
+ if (page) {
struct lu_dirpage *dp;

get_page(page);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
/*
* In contrast to find_lock_page() we are sure that directory
* page cannot be truncated (while DLM lock is held) and,
@@ -992,8 +993,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
page = ERR_PTR(-EIO);
}
} else {
- xa_unlock_irq(&mapping->pages);
- page = NULL;
+ xas_unlock_irq(&xas);
}
return page;
}
--
2.15.1


2018-01-17 20:47:47

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 52/99] fs: Convert buffer to XArray

From: Matthew Wilcox <[email protected]>

Mostly comment fixes, but one use of __xa_set_tag.
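
For context, a minimal sketch of the convention relied on here (not part
of the patch; tag_index_dirty() is a made-up name): the double-underscore
variant is for callers that already hold xa_lock, just as
radix_tree_tag_set() required the tree lock:

    /* Illustrative only; mirrors the __set_page_dirty() call site below. */
    static void tag_index_dirty(struct address_space *mapping, pgoff_t index)
    {
        unsigned long flags;

        xa_lock_irqsave(&mapping->pages, flags);
        /* lock held, so the locked variant is correct here */
        __xa_set_tag(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
        xa_unlock_irqrestore(&mapping->pages, flags);
    }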

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/buffer.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1a6ae530156b..e1d18307d5c8 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -592,7 +592,7 @@ void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
EXPORT_SYMBOL(mark_buffer_dirty_inode);

/*
- * Mark the page dirty, and set it dirty in the radix tree, and mark the inode
+ * Mark the page dirty, and set it dirty in the page cache, and mark the inode
* dirty.
*
* If warn is true, then emit a warning if the page is not uptodate and has
@@ -609,8 +609,8 @@ void __set_page_dirty(struct page *page, struct address_space *mapping,
if (page->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages,
- page_index(page), PAGECACHE_TAG_DIRTY);
+ __xa_set_tag(&mapping->pages, page_index(page),
+ PAGECACHE_TAG_DIRTY);
}
xa_unlock_irqrestore(&mapping->pages, flags);
}
@@ -1072,7 +1072,7 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* The relationship between dirty buffers and dirty pages:
*
* Whenever a page has any dirty buffers, the page's dirty bit is set, and
- * the page is tagged dirty in its radix tree.
+ * the page is tagged dirty in the page cache.
*
* At all times, the dirtiness of the buffers represents the dirtiness of
* subsections of the page. If the page has buffers, the page dirty bit is
@@ -1095,9 +1095,9 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* mark_buffer_dirty - mark a buffer_head as needing writeout
* @bh: the buffer_head to mark dirty
*
- * mark_buffer_dirty() will set the dirty bit against the buffer, then set its
- * backing page dirty, then tag the page as dirty in its address_space's radix
- * tree and then attach the address_space's inode to its superblock's dirty
+ * mark_buffer_dirty() will set the dirty bit against the buffer, then set
+ * its backing page dirty, then tag the page as dirty in the page cache
+ * and then attach the address_space's inode to its superblock's dirty
* inode list.
*
* mark_buffer_dirty() is atomic. It takes bh->b_page->mapping->private_lock,
--
2.15.1


2018-01-17 20:47:53

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 53/99] fs: Convert writeback to XArray

From: Matthew Wilcox <[email protected]>

A couple of short loops.
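
The shape of each loop, in isolation (a sketch, not part of the patch;
count_tagged() is a made-up name):

    static unsigned long count_tagged(struct address_space *mapping,
            xa_tag_t tag)
    {
        XA_STATE(xas, &mapping->pages, 0);
        struct page *page;
        unsigned long count = 0;

        xas_lock_irq(&xas);
        /* visits only entries with the given tag set */
        xas_for_each_tag(&xas, page, ULONG_MAX, tag)
            count++;
        xas_unlock_irq(&xas);
        return count;
    }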

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/fs-writeback.c | 25 +++++++++----------------
1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index e2c1ca667d9a..897a89489fe9 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -339,9 +339,9 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
struct bdi_writeback *new_wb = isw->new_wb;
- struct radix_tree_iter iter;
+ XA_STATE(xas, &mapping->pages, 0);
+ struct page *page;
bool switched = false;
- void **slot;

/*
* By the time control reaches here, RCU grace period has passed
@@ -375,25 +375,18 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
* to possibly dirty pages while PAGECACHE_TAG_WRITEBACK points to
* pages actually under writeback.
*/
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
- PAGECACHE_TAG_DIRTY) {
- struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (likely(page) && PageDirty(page)) {
+ xas_for_each_tag(&xas, page, ULONG_MAX, PAGECACHE_TAG_DIRTY) {
+ if (PageDirty(page)) {
dec_wb_stat(old_wb, WB_RECLAIMABLE);
inc_wb_stat(new_wb, WB_RECLAIMABLE);
}
}

- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
- PAGECACHE_TAG_WRITEBACK) {
- struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (likely(page)) {
- WARN_ON_ONCE(!PageWriteback(page));
- dec_wb_stat(old_wb, WB_WRITEBACK);
- inc_wb_stat(new_wb, WB_WRITEBACK);
- }
+ xas_set(&xas, 0);
+ xas_for_each_tag(&xas, page, ULONG_MAX, PAGECACHE_TAG_WRITEBACK) {
+ WARN_ON_ONCE(!PageWriteback(page));
+ dec_wb_stat(old_wb, WB_WRITEBACK);
+ inc_wb_stat(new_wb, WB_WRITEBACK);
}

wb_get(new_wb);
--
2.15.1


2018-01-17 20:48:04

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 54/99] nilfs2: Convert to XArray

From: Matthew Wilcox <[email protected]>

I'm not 100% convinced that the rewrite of nilfs_copy_back_pages is
correct, but it will at least have different bugs from the current
version.
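
The allocation idiom used in the nilfs_copy_back_pages() rewrite, shown in
isolation (a sketch under the same assumptions as the patch): xas_create()
may fail for lack of memory, in which case xas_nomem() allocates outside
the lock and asks us to retry:

    do {
        xas_lock_irq(&xas);
        dpage = xas_create(&xas);
        if (!xas_error(&xas))
            break;              /* success: lock still held */
        xas_unlock_irq(&xas);
    } while (xas_nomem(&xas, GFP_NOFS));    /* false: give up, error set */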

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/nilfs2/btnode.c | 37 +++++++++++-----------------
fs/nilfs2/page.c | 72 +++++++++++++++++++++++++++++++-----------------------
2 files changed, 56 insertions(+), 53 deletions(-)

diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index 9e2a00207436..b5997e8c5441 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -177,42 +177,36 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
ctxt->newbh = NULL;

if (inode->i_blkbits == PAGE_SHIFT) {
- lock_page(obh->b_page);
- /*
- * We cannot call radix_tree_preload for the kernels older
- * than 2.6.23, because it is not exported for modules.
- */
+ void *entry;
+ struct page *opage = obh->b_page;
+ lock_page(opage);
retry:
- err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM);
- if (err)
- goto failed_unlock;
/* BUG_ON(oldkey != obh->b_page->index); */
- if (unlikely(oldkey != obh->b_page->index))
- NILFS_PAGE_BUG(obh->b_page,
+ if (unlikely(oldkey != opage->index))
+ NILFS_PAGE_BUG(opage,
"invalid oldkey %lld (newkey=%lld)",
(unsigned long long)oldkey,
(unsigned long long)newkey);

- xa_lock_irq(&btnc->pages);
- err = radix_tree_insert(&btnc->pages, newkey, obh->b_page);
- xa_unlock_irq(&btnc->pages);
+ entry = xa_cmpxchg(&btnc->pages, newkey, NULL, opage, GFP_NOFS);
/*
* Note: page->index will not change to newkey until
* nilfs_btnode_commit_change_key() will be called.
* To protect the page in intermediate state, the page lock
* is held.
*/
- radix_tree_preload_end();
- if (!err)
+ if (!entry)
return 0;
- else if (err != -EEXIST)
+ if (xa_is_err(entry)) {
+ err = xa_err(entry);
goto failed_unlock;
+ }

err = invalidate_inode_pages2_range(btnc, newkey, newkey);
if (!err)
goto retry;
/* fallback to copy mode */
- unlock_page(obh->b_page);
+ unlock_page(opage);
}

nbh = nilfs_btnode_create_block(btnc, newkey);
@@ -252,9 +246,8 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
mark_buffer_dirty(obh);

xa_lock_irq(&btnc->pages);
- radix_tree_delete(&btnc->pages, oldkey);
- radix_tree_tag_set(&btnc->pages, newkey,
- PAGECACHE_TAG_DIRTY);
+ __xa_erase(&btnc->pages, oldkey);
+ __xa_set_tag(&btnc->pages, newkey, PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&btnc->pages);

opage->index = obh->b_blocknr = newkey;
@@ -283,9 +276,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
return;

if (nbh == NULL) { /* blocksize == pagesize */
- xa_lock_irq(&btnc->pages);
- radix_tree_delete(&btnc->pages, newkey);
- xa_unlock_irq(&btnc->pages);
+ xa_erase(&btnc->pages, newkey);
unlock_page(ctxt->bh->b_page);
} else
brelse(nbh);
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 1c6703efde9e..31d20f624971 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -304,10 +304,10 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
void nilfs_copy_back_pages(struct address_space *dmap,
struct address_space *smap)
{
+ XA_STATE(xas, &dmap->pages, 0);
struct pagevec pvec;
unsigned int i, n;
pgoff_t index = 0;
- int err;

pagevec_init(&pvec);
repeat:
@@ -317,43 +317,56 @@ void nilfs_copy_back_pages(struct address_space *dmap,

for (i = 0; i < pagevec_count(&pvec); i++) {
struct page *page = pvec.pages[i], *dpage;
- pgoff_t offset = page->index;
+ xas_set(&xas, page->index);

lock_page(page);
- dpage = find_lock_page(dmap, offset);
+ do {
+ xas_lock_irq(&xas);
+ dpage = xas_create(&xas);
+ if (!xas_error(&xas))
+ break;
+ xas_unlock_irq(&xas);
+ if (!xas_nomem(&xas, GFP_NOFS)) {
+ unlock_page(page);
+ /*
+ * Callers have a touching faith that this
+ * function cannot fail. Just leak the page.
+ * Other pages may be salvageable if the
+ * xarray doesn't need to allocate memory
+ * to store them.
+ */
+ WARN_ON(1);
+ page->mapping = NULL;
+ put_page(page);
+ goto shadow_remove;
+ }
+ } while (1);
+
if (dpage) {
- /* override existing page on the destination cache */
+ get_page(dpage);
+ xas_unlock_irq(&xas);
+ lock_page(dpage);
+ /* override existing page in the destination cache */
WARN_ON(PageDirty(dpage));
nilfs_copy_page(dpage, page, 0);
unlock_page(dpage);
put_page(dpage);
} else {
- struct page *page2;
-
- /* move the page to the destination cache */
- xa_lock_irq(&smap->pages);
- page2 = radix_tree_delete(&smap->pages, offset);
- WARN_ON(page2 != page);
-
- smap->nrpages--;
- xa_unlock_irq(&smap->pages);
-
- xa_lock_irq(&dmap->pages);
- err = radix_tree_insert(&dmap->pages, offset, page);
- if (unlikely(err < 0)) {
- WARN_ON(err == -EEXIST);
- page->mapping = NULL;
- put_page(page); /* for cache */
- } else {
- page->mapping = dmap;
- dmap->nrpages++;
- if (PageDirty(page))
- radix_tree_tag_set(&dmap->pages,
- offset,
- PAGECACHE_TAG_DIRTY);
- }
+ xas_store(&xas, page);
+ page->mapping = dmap;
+ dmap->nrpages++;
+ if (PageDirty(page))
+ xas_set_tag(&xas, PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&dmap->pages);
}
+
+shadow_remove:
+ /* remove the page from the shadow cache */
+ xa_lock_irq(&smap->pages);
+ WARN_ON(__xa_erase(&smap->pages, xas.xa_index) != page);
+ smap->nrpages--;
+ xa_unlock_irq(&smap->pages);
+
unlock_page(page);
}
pagevec_release(&pvec);
@@ -476,8 +489,7 @@ int __nilfs_clear_page_dirty(struct page *page)
if (mapping) {
xa_lock_irq(&mapping->pages);
if (test_bit(PG_dirty, &page->flags)) {
- radix_tree_tag_clear(&mapping->pages,
- page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irq(&mapping->pages);
return clear_page_dirty_for_io(page);
--
2.15.1


2018-01-17 20:49:03

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 55/99] f2fs: Convert to XArray

From: Matthew Wilcox <[email protected]>

This is a straightforward conversion.
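
As the deleted 'flags' variables suggest, xa_clear_tag() takes xa_lock
(saving and restoring interrupt state) internally, so the open-coded
lock/unlock pairs go away.  A sketch of the before and after:

    /* before */
    xa_lock_irqsave(&mapping->pages, flags);
    radix_tree_tag_clear(&mapping->pages, page_index(page),
            PAGECACHE_TAG_DIRTY);
    xa_unlock_irqrestore(&mapping->pages, flags);

    /* after */
    xa_clear_tag(&mapping->pages, page_index(page), PAGECACHE_TAG_DIRTY);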

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/f2fs/data.c | 3 +--
fs/f2fs/dir.c | 5 +----
fs/f2fs/inline.c | 6 +-----
fs/f2fs/node.c | 10 ++--------
4 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index c8f6d9806896..1f3f192f152f 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2175,8 +2175,7 @@ void f2fs_set_page_dirty_nobuffers(struct page *page)
xa_lock_irqsave(&mapping->pages, flags);
WARN_ON_ONCE(!PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages,
- page_index(page), PAGECACHE_TAG_DIRTY);
+ __xa_set_tag(&mapping->pages, page_index(page), PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);

diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index b5515ea6bb2f..296070016ec9 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -708,7 +708,6 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
unsigned int bit_pos;
int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
struct address_space *mapping = page_mapping(page);
- unsigned long flags;
int i;

f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
@@ -739,10 +738,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,

if (bit_pos == NR_DENTRY_IN_BLOCK &&
!truncate_hole(dir, page->index, page->index + 1)) {
- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);

clear_page_dirty_for_io(page);
ClearPagePrivate(page);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 7858b8e15f33..d3c3f84beca9 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -204,7 +204,6 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
void *src_addr, *dst_addr;
struct dnode_of_data dn;
struct address_space *mapping = page_mapping(page);
- unsigned long flags;
int err;

set_new_dnode(&dn, inode, NULL, NULL, 0);
@@ -226,10 +225,7 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
kunmap_atomic(src_addr);
set_page_dirty(dn.inode_page);

- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages, page_index(page),
- PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);
+ xa_clear_tag(&mapping->pages, page_index(page), PAGECACHE_TAG_DIRTY);

set_inode_flag(inode, FI_APPEND_WRITE);
set_inode_flag(inode, FI_DATA_EXIST);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 6b64a3009d55..0a6d5c2f996e 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -88,14 +88,10 @@ bool available_free_memory(struct f2fs_sb_info *sbi, int type)
static void clear_node_page_dirty(struct page *page)
{
struct address_space *mapping = page->mapping;
- unsigned int long flags;

if (PageDirty(page)) {
- xa_lock_irqsave(&mapping->pages, flags);
- radix_tree_tag_clear(&mapping->pages,
- page_index(page),
+ xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- xa_unlock_irqrestore(&mapping->pages, flags);

clear_page_dirty_for_io(page);
dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
@@ -1142,9 +1138,7 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
return;
f2fs_bug_on(sbi, check_nid_range(sbi, nid));

- rcu_read_lock();
- apage = radix_tree_lookup(&NODE_MAPPING(sbi)->pages, nid);
- rcu_read_unlock();
+ apage = xa_load(&NODE_MAPPING(sbi)->pages, nid);
if (apage)
return;

--
2.15.1


2018-01-17 20:49:42

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 50/99] shmem: Comment fixups

From: Matthew Wilcox <[email protected]>

Remove the last mentions of radix tree from various comments.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4dbcfb436bd1..5110848885d4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -743,7 +743,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
}

/*
- * Remove range of pages and swap entries from radix tree, and free them.
+ * Remove range of pages and swap entries from page cache, and free them.
* If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
*/
static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
@@ -1118,10 +1118,10 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
* We needed to drop mutex to make that restrictive page
* allocation, but the inode might have been freed while we
* dropped it: although a racing shmem_evict_inode() cannot
- * complete without emptying the radix_tree, our page lock
+ * complete without emptying the page cache, our page lock
* on this swapcache page is not enough to prevent that -
* free_swap_and_cache() of our swap entry will only
- * trylock_page(), removing swap from radix_tree whatever.
+ * trylock_page(), removing swap from page cache whatever.
*
* We must not proceed to shmem_add_to_page_cache() if the
* inode has been freed, but of course we cannot rely on
@@ -1187,7 +1187,7 @@ int shmem_unuse(swp_entry_t swap, struct page *page)
false);
if (error)
goto out;
- /* No radix_tree_preload: swap entry keeps a place for page in tree */
+ /* No memory allocation: swap entry occupies the slot for the page */
error = -EAGAIN;

mutex_lock(&shmem_swaplist_mutex);
@@ -1863,7 +1863,7 @@ alloc_nohuge: page = shmem_alloc_and_acct_page(gfp, inode,
spin_unlock_irq(&info->lock);
goto repeat;
}
- if (error == -EEXIST) /* from above or from radix_tree_insert */
+ if (error == -EEXIST)
goto repeat;
return error;
}
@@ -2475,7 +2475,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
}

/*
- * llseek SEEK_DATA or SEEK_HOLE through the radix_tree.
+ * llseek SEEK_DATA or SEEK_HOLE through the page cache.
*/
static pgoff_t shmem_seek_hole_data(struct address_space *mapping,
pgoff_t index, pgoff_t end, int whence)
@@ -2563,7 +2563,7 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
}

/*
- * We need a tag: a new tag would expand every radix_tree_node by 8 bytes,
+ * We need a tag: a new tag would expand every xa_node by 8 bytes,
* so reuse a tag which we firmly believe is never set or cleared on shmem.
*/
#define SHMEM_TAG_PINNED PAGECACHE_TAG_TOWRITE
--
2.15.1


2018-01-17 20:50:38

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 46/99] shmem: Convert shmem_add_to_page_cache to XArray

From: Matthew Wilcox <[email protected]>

This removes the last caller of radix_tree_maybe_preload_order().
Simpler code, unless we run out of memory for new xa_nodes partway through
inserting entries into the xarray. Hopefully we can support multi-index
entries in the page cache soon and all the awful code goes away.
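
The store loop follows the standard XArray allocation pattern; a sketch of
its skeleton (the expected-value check and the undo path are elided here):

    do {
        xas_lock_irq(&xas);
        xas_create_range(&xas, index + nr - 1);
        if (!xas_error(&xas)) {
            /* store nr consecutive entries */
            for (i = 0; i < nr; i++) {
                xas_store(&xas, page + i);
                xas_next(&xas);
            }
        }
        xas_unlock_irq(&xas);
    } while (xas_nomem(&xas, gfp));     /* allocate nodes outside the lock */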

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 87 ++++++++++++++++++++++++++++----------------------------------
1 file changed, 39 insertions(+), 48 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e4a2eb1336be..0f49edae05e4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -558,9 +558,10 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
*/
static int shmem_add_to_page_cache(struct page *page,
struct address_space *mapping,
- pgoff_t index, void *expected)
+ pgoff_t index, void *expected, gfp_t gfp)
{
- int error, nr = hpage_nr_pages(page);
+ XA_STATE(xas, &mapping->pages, index);
+ unsigned long i, nr = 1UL << compound_order(page);

VM_BUG_ON_PAGE(PageTail(page), page);
VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -569,49 +570,47 @@ static int shmem_add_to_page_cache(struct page *page,
VM_BUG_ON(expected && PageTransHuge(page));

page_ref_add(page, nr);
- page->mapping = mapping;
page->index = index;
+ page->mapping = mapping;

- xa_lock_irq(&mapping->pages);
- if (PageTransHuge(page)) {
- void __rcu **results;
- pgoff_t idx;
- int i;
-
- error = 0;
- if (radix_tree_gang_lookup_slot(&mapping->pages,
- &results, &idx, index, 1) &&
- idx < index + HPAGE_PMD_NR) {
- error = -EEXIST;
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, index + nr - 1);
+ if (xas_error(&xas))
+ goto unlock;
+ for (i = 0; i < nr; i++) {
+ void *entry = xas_load(&xas);
+ if (entry != expected)
+ xas_set_err(&xas, -ENOENT);
+ if (xas_error(&xas))
+ goto undo;
+ xas_store(&xas, page + i);
+ xas_next(&xas);
}
-
- if (!error) {
- for (i = 0; i < HPAGE_PMD_NR; i++) {
- error = radix_tree_insert(&mapping->pages,
- index + i, page + i);
- VM_BUG_ON(error);
- }
+ if (PageTransHuge(page)) {
count_vm_event(THP_FILE_ALLOC);
+ __inc_node_page_state(page, NR_SHMEM_THPS);
}
- } else if (!expected) {
- error = radix_tree_insert(&mapping->pages, index, page);
- } else {
- error = shmem_xa_replace(mapping, index, expected, page);
- }
-
- if (!error) {
mapping->nrpages += nr;
- if (PageTransHuge(page))
- __inc_node_page_state(page, NR_SHMEM_THPS);
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
- xa_unlock_irq(&mapping->pages);
- } else {
+ goto unlock;
+undo:
+ while (i-- > 0) {
+ xas_store(&xas, NULL);
+ xas_prev(&xas);
+ }
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ if (xas_error(&xas)) {
page->mapping = NULL;
- xa_unlock_irq(&mapping->pages);
page_ref_sub(page, nr);
+ return xas_error(&xas);
}
- return error;
+
+ return 0;
}

/*
@@ -1159,7 +1158,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
*/
if (!error)
error = shmem_add_to_page_cache(*pagep, mapping, index,
- radswap);
+ radswap, gfp);
if (error != -ENOMEM) {
/*
* Truncation and eviction use free_swap_and_cache(), which
@@ -1677,7 +1676,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
false);
if (!error) {
error = shmem_add_to_page_cache(page, mapping, index,
- swp_to_radix_entry(swap));
+ swp_to_radix_entry(swap), gfp);
/*
* We already confirmed swap under page lock, and make
* no memory allocation here, so usually no possibility
@@ -1783,13 +1782,8 @@ alloc_nohuge: page = shmem_alloc_and_acct_page(gfp, inode,
PageTransHuge(page));
if (error)
goto unacct;
- error = radix_tree_maybe_preload_order(gfp & GFP_RECLAIM_MASK,
- compound_order(page));
- if (!error) {
- error = shmem_add_to_page_cache(page, mapping, hindex,
- NULL);
- radix_tree_preload_end();
- }
+ error = shmem_add_to_page_cache(page, mapping, hindex,
+ NULL, gfp & GFP_RECLAIM_MASK);
if (error) {
mem_cgroup_cancel_charge(page, memcg,
PageTransHuge(page));
@@ -2256,11 +2250,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
if (ret)
goto out_release;

- ret = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK);
- if (!ret) {
- ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL);
- radix_tree_preload_end();
- }
+ ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+ gfp & GFP_RECLAIM_MASK);
if (ret)
goto out_release_uncharge;

--
2.15.1


2018-01-17 20:50:59

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 47/99] shmem: Convert shmem_alloc_hugepage to XArray

From: Matthew Wilcox <[email protected]>

xa_find() is a slightly easier API to use than
radix_tree_gang_lookup_slot() because it contains its own RCU locking.
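
A sketch of the call: xa_find() returns the first entry in the range that
matches the filter (XA_PRESENT matches any entry), advancing the index to
the entry it finds, and takes rcu_read_lock() itself:

    pgoff_t index = hindex;

    /* anything already present in the huge page's range? */
    if (xa_find(&mapping->pages, &index, hindex + HPAGE_PMD_NR - 1,
            XA_PRESENT))
        return NULL;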

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0f49edae05e4..e8233cb7ab5c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1413,23 +1413,17 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
struct vm_area_struct pvma;
- struct inode *inode = &info->vfs_inode;
- struct address_space *mapping = inode->i_mapping;
- pgoff_t idx, hindex;
- void __rcu **results;
+ struct address_space *mapping = info->vfs_inode.i_mapping;
+ pgoff_t hindex;
struct page *page;

if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
return NULL;

hindex = round_down(index, HPAGE_PMD_NR);
- rcu_read_lock();
- if (radix_tree_gang_lookup_slot(&mapping->pages, &results, &idx,
- hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
- rcu_read_unlock();
+ if (xa_find(&mapping->pages, &hindex, hindex + HPAGE_PMD_NR - 1,
+ XA_PRESENT))
return NULL;
- }
- rcu_read_unlock();

shmem_pseudo_vma_init(&pvma, info, hindex);
page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
--
2.15.1


2018-01-17 20:51:16

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 51/99] btrfs: Convert page cache to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/compression.c | 4 +---
fs/btrfs/extent_io.c | 6 ++----
2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index e687d06cd97c..4174b166e235 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -449,9 +449,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
if (pg_index > end_index)
break;

- rcu_read_lock();
- page = radix_tree_lookup(&mapping->pages, pg_index);
- rcu_read_unlock();
+ page = xa_load(&mapping->pages, pg_index);
if (page && !xa_is_value(page)) {
misses++;
if (misses > 4)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4301cbf4e31f..fd5e9d887328 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -5197,11 +5197,9 @@ void clear_extent_buffer_dirty(struct extent_buffer *eb)

clear_page_dirty_for_io(page);
xa_lock_irq(&page->mapping->pages);
- if (!PageDirty(page)) {
- radix_tree_tag_clear(&page->mapping->pages,
- page_index(page),
+ if (!PageDirty(page))
+ __xa_clear_tag(&page->mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- }
xa_unlock_irq(&page->mapping->pages);
ClearPageError(page);
unlock_page(page);
--
2.15.1


2018-01-17 20:51:26

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 48/99] shmem: Convert shmem_free_swap to XArray

From: Matthew Wilcox <[email protected]>

This is a perfect use for xa_cmpxchg(). Note the use of 0 for GFP
flags; we won't be allocating memory.
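
In sketch form: xa_cmpxchg() returns the entry previously at the index, so
success is checked by comparing against the old value:

    void *old = xa_cmpxchg(&mapping->pages, index, radswap, NULL, 0);

    if (old != radswap)         /* somebody else changed it first */
        return -ENOENT;

Passing 0 for the GFP flags is safe because storing NULL only ever shrinks
the tree; no nodes need to be allocated.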

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e8233cb7ab5c..5a2226e06f8c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -635,16 +635,13 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
}

/*
- * Remove swap entry from radix tree, free the swap and its page cache.
+ * Remove swap entry from page cache, free the swap and its page cache.
*/
static int shmem_free_swap(struct address_space *mapping,
pgoff_t index, void *radswap)
{
- void *old;
+ void *old = xa_cmpxchg(&mapping->pages, index, radswap, NULL, 0);

- xa_lock_irq(&mapping->pages);
- old = radix_tree_delete_item(&mapping->pages, index, radswap);
- xa_unlock_irq(&mapping->pages);
if (old != radswap)
return -ENOENT;
free_swap_and_cache(radix_to_swp_entry(radswap));
--
2.15.1


2018-01-17 20:51:53

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 49/99] shmem: Convert shmem_partial_swap_usage to XArray

From: Matthew Wilcox <[email protected]>

Simpler code because the xarray takes care of things like the limit and
dereferencing the slot.
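
The resulting walk, reduced to its skeleton (a sketch): the limit is
passed straight to xas_for_each(), the entry arrives already dereferenced,
and xas_pause() marks a safe point before rescheduling under RCU:

    rcu_read_lock();
    xas_for_each(&xas, page, end - 1) {
        if (xa_is_value(page))
            swapped++;
        if (need_resched()) {
            xas_pause(&xas);    /* walk resumes after this index */
            cond_resched_rcu();
        }
    }
    rcu_read_unlock();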

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 18 +++---------------
1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 5a2226e06f8c..4dbcfb436bd1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -658,29 +658,17 @@ static int shmem_free_swap(struct address_space *mapping,
unsigned long shmem_partial_swap_usage(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
struct page *page;
unsigned long swapped = 0;

rcu_read_lock();
-
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= end)
- break;
-
- page = radix_tree_deref_slot(slot);
-
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
+ xas_for_each(&xas, page, end - 1) {
if (xa_is_value(page))
swapped++;

if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
}
--
2.15.1


2018-01-17 20:52:25

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 44/99] shmem: Convert shmem_tag_pins to XArray

From: Matthew Wilcox <[email protected]>

Simplify the locking by holding the spinlock across the entire tree walk,
on the assumption that many acquires and releases of the lock will be
worse than holding it for a (potentially) long time.

We could replicate the same locking behaviour with the xarray, but would
have to be careful that the xa_node wasn't RCU-freed under us before we
took the lock.
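
The resulting structure, in sketch form (XA_CHECK_SCHED is the batch size
used elsewhere in this series):

    xas_lock_irq(&xas);
    xas_for_each(&xas, page, ULONG_MAX) {
        /* ... examine and tag the entry ... */
        if (++tagged % XA_CHECK_SCHED)
            continue;
        /* drop the lock periodically to let others in */
        xas_pause(&xas);
        xas_unlock_irq(&xas);
        cond_resched();
        xas_lock_irq(&xas);
    }
    xas_unlock_irq(&xas);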

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 39 ++++++++++++++++-----------------------
1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ce285ae635ea..2f41c7ceea18 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2601,35 +2601,28 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)

static void shmem_tag_pins(struct address_space *mapping)
{
- struct radix_tree_iter iter;
- void **slot;
- pgoff_t start;
+ XA_STATE(xas, &mapping->pages, 0);
struct page *page;
+ unsigned int tagged = 0;

lru_add_drain();
- start = 0;
- rcu_read_lock();

- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- page = radix_tree_deref_slot(slot);
- if (!page || radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
- } else if (page_count(page) - page_mapcount(page) > 1) {
- xa_lock_irq(&mapping->pages);
- radix_tree_tag_set(&mapping->pages, iter.index,
- SHMEM_TAG_PINNED);
- xa_unlock_irq(&mapping->pages);
- }
+ xas_lock_irq(&xas);
+ xas_for_each(&xas, page, ULONG_MAX) {
+ if (xa_is_value(page))
+ continue;
+ if (page_count(page) - page_mapcount(page) > 1)
+ xas_set_tag(&xas, SHMEM_TAG_PINNED);

- if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
- cond_resched_rcu();
- }
+ if (++tagged % XA_CHECK_SCHED)
+ continue;
+
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
+ cond_resched();
+ xas_lock_irq(&xas);
}
- rcu_read_unlock();
+ xas_unlock_irq(&xas);
}

/*
--
2.15.1


2018-01-17 20:53:29

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 41/99] shmem: Convert replace to XArray

From: Matthew Wilcox <[email protected]>

shmem_radix_tree_replace() is renamed to shmem_xa_replace() and
converted to use the XArray API.
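
The conversion is nearly mechanical; a sketch of the new body (the caller
still holds xa_lock, as before):

    XA_STATE(xas, &mapping->pages, index);
    void *item;

    item = xas_load(&xas);
    if (item != expected)
        return -ENOENT;
    xas_store(&xas, replacement);   /* reuses the walk done by xas_load() */
    return 0;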

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index c5731bb954a1..fad6c9e7402e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -321,24 +321,20 @@ void shmem_uncharge(struct inode *inode, long pages)
}

/*
- * Replace item expected in radix tree by a new item, while holding tree lock.
+ * Replace item expected in xarray by a new item, while holding xa_lock.
*/
-static int shmem_radix_tree_replace(struct address_space *mapping,
+static int shmem_xa_replace(struct address_space *mapping,
pgoff_t index, void *expected, void *replacement)
{
- struct radix_tree_node *node;
- void **pslot;
+ XA_STATE(xas, &mapping->pages, index);
void *item;

VM_BUG_ON(!expected);
VM_BUG_ON(!replacement);
- item = __radix_tree_lookup(&mapping->pages, index, &node, &pslot);
- if (!item)
- return -ENOENT;
+ item = xas_load(&xas);
if (item != expected)
return -ENOENT;
- __radix_tree_replace(&mapping->pages, node, pslot,
- replacement, NULL);
+ xas_store(&xas, replacement);
return 0;
}

@@ -605,8 +601,7 @@ static int shmem_add_to_page_cache(struct page *page,
} else if (!expected) {
error = radix_tree_insert(&mapping->pages, index, page);
} else {
- error = shmem_radix_tree_replace(mapping, index, expected,
- page);
+ error = shmem_xa_replace(mapping, index, expected, page);
}

if (!error) {
@@ -635,7 +630,7 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
VM_BUG_ON_PAGE(PageCompound(page), page);

xa_lock_irq(&mapping->pages);
- error = shmem_radix_tree_replace(mapping, page->index, page, radswap);
+ error = shmem_xa_replace(mapping, page->index, page, radswap);
page->mapping = NULL;
mapping->nrpages--;
__dec_node_page_state(page, NR_FILE_PAGES);
@@ -1550,8 +1545,7 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
* a nice clean interface for us to replace oldpage by newpage there.
*/
xa_lock_irq(&swap_mapping->pages);
- error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
- newpage);
+ error = shmem_xa_replace(swap_mapping, swap_index, oldpage, newpage);
if (!error) {
__inc_node_page_state(newpage, NR_FILE_PAGES);
__dec_node_page_state(oldpage, NR_FILE_PAGES);
--
2.15.1


2018-01-17 20:53:30

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 45/99] shmem: Convert shmem_wait_for_pins to XArray

From: Matthew Wilcox <[email protected]>

As with shmem_tag_pins(), hold the lock around the entire loop instead
of acquiring & dropping it for each entry we're going to untag.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 59 ++++++++++++++++++++++++-----------------------------------
1 file changed, 24 insertions(+), 35 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2f41c7ceea18..e4a2eb1336be 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2636,9 +2636,7 @@ static void shmem_tag_pins(struct address_space *mapping)
*/
static int shmem_wait_for_pins(struct address_space *mapping)
{
- struct radix_tree_iter iter;
- void **slot;
- pgoff_t start;
+ XA_STATE(xas, &mapping->pages, 0);
struct page *page;
int error, scan;

@@ -2646,7 +2644,9 @@ static int shmem_wait_for_pins(struct address_space *mapping)

error = 0;
for (scan = 0; scan <= LAST_SCAN; scan++) {
- if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
+ unsigned int tagged = 0;
+
+ if (!xas_tagged(&xas, SHMEM_TAG_PINNED))
break;

if (!scan)
@@ -2654,45 +2654,34 @@ static int shmem_wait_for_pins(struct address_space *mapping)
else if (schedule_timeout_killable((HZ << scan) / 200))
scan = LAST_SCAN;

- start = 0;
- rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
- start, SHMEM_TAG_PINNED) {
-
- page = radix_tree_deref_slot(slot);
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
- continue;
- }
-
- page = NULL;
- }
-
- if (page &&
- page_count(page) - page_mapcount(page) != 1) {
- if (scan < LAST_SCAN)
- goto continue_resched;
-
+ xas_set(&xas, 0);
+ xas_lock_irq(&xas);
+ xas_for_each_tag(&xas, page, ULONG_MAX, SHMEM_TAG_PINNED) {
+ bool clear = true;
+ if (xa_is_value(page))
+ continue;
+ if (page_count(page) - page_mapcount(page) != 1) {
/*
* On the last scan, we clean up all those tags
* we inserted; but make a note that we still
* found pages pinned.
*/
- error = -EBUSY;
+ if (scan == LAST_SCAN)
+ error = -EBUSY;
+ else
+ clear = false;
}
+ if (clear)
+ xas_clear_tag(&xas, SHMEM_TAG_PINNED);
+ if (++tagged % XA_CHECK_SCHED)
+ continue;

- xa_lock_irq(&mapping->pages);
- radix_tree_tag_clear(&mapping->pages,
- iter.index, SHMEM_TAG_PINNED);
- xa_unlock_irq(&mapping->pages);
-continue_resched:
- if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
- cond_resched_rcu();
- }
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
+ cond_resched();
+ xas_lock_irq(&xas);
}
- rcu_read_unlock();
+ xas_unlock_irq(&xas);
}

return error;
--
2.15.1


2018-01-17 20:54:24

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 40/99] pagevec: Use xa_tag_t

From: Matthew Wilcox <[email protected]>

Removes sparse warnings.

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/extent_io.c | 4 ++--
fs/ext4/inode.c | 2 +-
fs/f2fs/data.c | 2 +-
fs/gfs2/aops.c | 2 +-
include/linux/pagevec.h | 8 +++++---
mm/swap.c | 4 ++--
6 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 22948f4febe7..4301cbf4e31f 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3795,7 +3795,7 @@ int btree_write_cache_pages(struct address_space *mapping,
pgoff_t index;
pgoff_t end; /* Inclusive */
int scanned = 0;
- int tag;
+ xa_tag_t tag;

pagevec_init(&pvec);
if (wbc->range_cyclic) {
@@ -3922,7 +3922,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
pgoff_t done_index;
int range_whole = 0;
int scanned = 0;
- int tag;
+ xa_tag_t tag;

/*
* We have to hold onto the inode so that ordered extents can do their
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 534a9130f625..4b7c10853928 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2614,7 +2614,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
long left = mpd->wbc->nr_to_write;
pgoff_t index = mpd->first_page;
pgoff_t end = mpd->last_page;
- int tag;
+ xa_tag_t tag;
int i, err = 0;
int blkbits = mpd->inode->i_blkbits;
ext4_lblk_t lblk;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 8f51ac47b77f..c8f6d9806896 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1640,7 +1640,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
pgoff_t last_idx = ULONG_MAX;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;

pagevec_init(&pvec);

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 1daf15a1f00c..c78ecd008191 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -369,7 +369,7 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
pgoff_t done_index;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;

pagevec_init(&pvec);
if (wbc->range_cyclic) {
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 5fb6580f7f23..5168901bf06d 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -9,6 +9,8 @@
#ifndef _LINUX_PAGEVEC_H
#define _LINUX_PAGEVEC_H

+#include <linux/xarray.h>
+
/* 14 pointers + two long's align the pagevec structure to a power of two */
#define PAGEVEC_SIZE 14

@@ -40,12 +42,12 @@ static inline unsigned pagevec_lookup(struct pagevec *pvec,

unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag);
+ xa_tag_t tag);
unsigned pagevec_lookup_range_nr_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag, unsigned max_pages);
+ xa_tag_t tag, unsigned max_pages);
static inline unsigned pagevec_lookup_tag(struct pagevec *pvec,
- struct address_space *mapping, pgoff_t *index, int tag)
+ struct address_space *mapping, pgoff_t *index, xa_tag_t tag)
{
return pagevec_lookup_range_tag(pvec, mapping, index, (pgoff_t)-1, tag);
}
diff --git a/mm/swap.c b/mm/swap.c
index 8d7773cb2c3f..31d79479dacf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -991,7 +991,7 @@ EXPORT_SYMBOL(pagevec_lookup_range);

unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag)
+ xa_tag_t tag)
{
pvec->nr = find_get_pages_range_tag(mapping, index, end, tag,
PAGEVEC_SIZE, pvec->pages);
@@ -1001,7 +1001,7 @@ EXPORT_SYMBOL(pagevec_lookup_range_tag);

unsigned pagevec_lookup_range_nr_tag(struct pagevec *pvec,
struct address_space *mapping, pgoff_t *index, pgoff_t end,
- int tag, unsigned max_pages)
+ xa_tag_t tag, unsigned max_pages)
{
pvec->nr = find_get_pages_range_tag(mapping, index, end, tag,
min_t(unsigned int, max_pages, PAGEVEC_SIZE), pvec->pages);
--
2.15.1


2018-01-17 20:54:58

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 43/99] shmem: Convert find_swap_entry to XArray

From: Matthew Wilcox <[email protected]>

This is a 1:1 conversion.
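
One detail worth noting (a sketch): xas_retry() handles the retry entries
that radix_tree_deref_retry() used to, and the position is read back from
xas.xa_index rather than a separate iterator:

    xas_for_each(&xas, entry, ULONG_MAX) {
        if (xas_retry(&xas, entry))
            continue;           /* internal entry; retry this slot */
        if (entry == item)
            break;              /* found: index is in xas.xa_index */
    }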

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 654f367aca90..ce285ae635ea 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1076,28 +1076,27 @@ static void shmem_evict_inode(struct inode *inode)
clear_inode(inode);
}

-static unsigned long find_swap_entry(struct radix_tree_root *root, void *item)
+static unsigned long find_swap_entry(struct xarray *xa, void *item)
{
- struct radix_tree_iter iter;
- void **slot;
- unsigned long found = -1;
+ XA_STATE(xas, xa, 0);
unsigned int checked = 0;
+ void *entry;

rcu_read_lock();
- radix_tree_for_each_slot(slot, root, &iter, 0) {
- if (*slot == item) {
- found = iter.index;
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (xas_retry(&xas, entry))
+ continue;
+ if (entry == item)
break;
- }
checked++;
- if ((checked % 4096) != 0)
+ if ((checked % XA_CHECK_SCHED) != 0)
continue;
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
-
rcu_read_unlock();
- return found;
+
+ return xas_invalid(&xas) ? -1 : xas.xa_index;
}

/*
--
2.15.1


2018-01-17 20:55:54

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 42/99] shmem: Convert shmem_confirm_swap to XArray

From: Matthew Wilcox <[email protected]>

xa_load has its own RCU locking, so we can eliminate it here.
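
Sketch of the simplification:

    /* before */
    rcu_read_lock();
    item = radix_tree_lookup(&mapping->pages, index);
    rcu_read_unlock();
    return item == swp_to_radix_entry(swap);

    /* after: xa_load() takes rcu_read_lock() itself */
    return xa_load(&mapping->pages, index) == swp_to_radix_entry(swap);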

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/shmem.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index fad6c9e7402e..654f367aca90 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -348,12 +348,7 @@ static int shmem_xa_replace(struct address_space *mapping,
static bool shmem_confirm_swap(struct address_space *mapping,
pgoff_t index, swp_entry_t swap)
{
- void *item;
-
- rcu_read_lock();
- item = radix_tree_lookup(&mapping->pages, index);
- rcu_read_unlock();
- return item == swp_to_radix_entry(swap);
+ return xa_load(&mapping->pages, index) == swp_to_radix_entry(swap);
}

/*
--
2.15.1


2018-01-17 20:56:09

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 38/99] mm: Convert collapse_shmem to XArray

From: Matthew Wilcox <[email protected]>

I found another victim of the radix tree being hard to use. Because
there was no call to radix_tree_preload(), khugepaged was allocating
radix tree nodes using GFP_ATOMIC.

I also converted a local_irq_save()/restore() pair to
disable()/enable().
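
With the XArray, the node allocations move out from under the lock and can
use GFP_KERNEL; a sketch of the preallocation loop the patch adds:

    do {
        xas_lock_irq(&xas);
        xas_create_range(&xas, end - 1);
        if (!xas_error(&xas))
            break;              /* all slots exist; lock still held */
        xas_unlock_irq(&xas);
    } while (xas_nomem(&xas, GFP_KERNEL));  /* sleepable allocation */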

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/khugepaged.c | 158 +++++++++++++++++++++++---------------------------------
1 file changed, 65 insertions(+), 93 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 55ade70c33bb..9f49d0cd61c2 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1282,17 +1282,17 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
*
* Basic scheme is simple, details are more complex:
* - allocate and freeze a new huge page;
- * - scan over radix tree replacing old pages the new one
+ * - scan page cache replacing old pages with the new one
* + swap in pages if necessary;
* + fill in gaps;
- * + keep old pages around in case if rollback is required;
- * - if replacing succeed:
+ * + keep old pages around in case rollback is required;
+ * - if replacing succeeds:
* + copy data over;
* + free old pages;
* + unfreeze huge page;
* - if replacing failed;
* + put all pages back and unfreeze them;
- * + restore gaps in the radix-tree;
+ * + restore gaps in the page cache;
* + free huge page;
*/
static void collapse_shmem(struct mm_struct *mm,
@@ -1300,12 +1300,11 @@ static void collapse_shmem(struct mm_struct *mm,
struct page **hpage, int node)
{
gfp_t gfp;
- struct page *page, *new_page, *tmp;
+ struct page *new_page;
struct mem_cgroup *memcg;
pgoff_t index, end = start + HPAGE_PMD_NR;
LIST_HEAD(pagelist);
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
int nr_none = 0, result = SCAN_SUCCEED;

VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
@@ -1330,48 +1329,48 @@ static void collapse_shmem(struct mm_struct *mm,
__SetPageLocked(new_page);
BUG_ON(!page_ref_freeze(new_page, 1));

-
/*
- * At this point the new_page is 'frozen' (page_count() is zero), locked
- * and not up-to-date. It's safe to insert it into radix tree, because
- * nobody would be able to map it or use it in other way until we
- * unfreeze it.
+ * At this point the new_page is 'frozen' (page_count() is zero),
+ * locked and not up-to-date. It's safe to insert it into the page
+ * cache, because nobody would be able to map it or use it in other
+ * way until we unfreeze it.
*/

- index = start;
- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- int n = min(iter.index, end) - index;
-
- /*
- * Handle holes in the radix tree: charge it from shmem and
- * insert relevant subpage of new_page into the radix-tree.
- */
- if (n && !shmem_charge(mapping->host, n)) {
- result = SCAN_FAIL;
+ /* This will be less messy when we use multi-index entries */
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, end - 1);
+ if (!xas_error(&xas))
break;
- }
- nr_none += n;
- for (; index < min(iter.index, end); index++) {
- radix_tree_insert(&mapping->pages, index,
- new_page + (index % HPAGE_PMD_NR));
- }
+ xas_unlock_irq(&xas);
+ if (!xas_nomem(&xas, GFP_KERNEL))
+ goto out;
+ } while (1);

- /* We are done. */
- if (index >= end)
- break;
+ for (index = start; index < end; index++) {
+ struct page *page = xas_next(&xas);
+
+ VM_BUG_ON(index != xas.xa_index);
+ if (!page) {
+ if (!shmem_charge(mapping->host, 1)) {
+ result = SCAN_FAIL;
+ break;
+ }
+ xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
+ nr_none++;
+ continue;
+ }

- page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
if (xa_is_value(page) || !PageUptodate(page)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
SGP_NOHUGE)) {
result = SCAN_FAIL;
- goto tree_unlocked;
+ goto xa_unlocked;
}
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
+ xas_set(&xas, index);
} else if (trylock_page(page)) {
get_page(page);
} else {
@@ -1391,7 +1390,7 @@ static void collapse_shmem(struct mm_struct *mm,
result = SCAN_TRUNCATED;
goto out_unlock;
}
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);

if (isolate_lru_page(page)) {
result = SCAN_DEL_PAGE_LRU;
@@ -1402,17 +1401,16 @@ static void collapse_shmem(struct mm_struct *mm,
unmap_mapping_range(mapping, index << PAGE_SHIFT,
PAGE_SIZE, 0);

- xa_lock_irq(&mapping->pages);
+ xas_lock(&xas);
+ xas_set(&xas, index);

- slot = radix_tree_lookup_slot(&mapping->pages, index);
- VM_BUG_ON_PAGE(page != radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock), page);
+ VM_BUG_ON_PAGE(page != xas_load(&xas), page);
VM_BUG_ON_PAGE(page_mapped(page), page);

/*
* The page is expected to have page_count() == 3:
* - we hold a pin on it;
- * - one reference from radix tree;
+ * - one reference from page cache;
* - one from isolate_lru_page;
*/
if (!page_ref_freeze(page, 3)) {
@@ -1427,56 +1425,30 @@ static void collapse_shmem(struct mm_struct *mm,
list_add_tail(&page->lru, &pagelist);

/* Finally, replace with the new page. */
- radix_tree_replace_slot(&mapping->pages, slot,
- new_page + (index % HPAGE_PMD_NR));
-
- slot = radix_tree_iter_resume(slot, &iter);
- index++;
+ xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
continue;
out_lru:
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
putback_lru_page(page);
out_isolate_failed:
unlock_page(page);
put_page(page);
- goto tree_unlocked;
+ goto xa_unlocked;
out_unlock:
unlock_page(page);
put_page(page);
break;
}
+ xas_unlock_irq(&xas);

- /*
- * Handle hole in radix tree at the end of the range.
- * This code only triggers if there's nothing in radix tree
- * beyond 'end'.
- */
- if (result == SCAN_SUCCEED && index < end) {
- int n = end - index;
-
- if (!shmem_charge(mapping->host, n)) {
- result = SCAN_FAIL;
- goto tree_locked;
- }
-
- for (; index < end; index++) {
- radix_tree_insert(&mapping->pages, index,
- new_page + (index % HPAGE_PMD_NR));
- }
- nr_none += n;
- }
-
-tree_locked:
- xa_unlock_irq(&mapping->pages);
-tree_unlocked:
-
+xa_unlocked:
if (result == SCAN_SUCCEED) {
- unsigned long flags;
+ struct page *page, *tmp;
struct zone *zone = page_zone(new_page);

/*
- * Replacing old pages with new one has succeed, now we need to
- * copy the content and free old pages.
+ * Replacing old pages with new one has succeeded, now we
+ * need to copy the content and free the old pages.
*/
list_for_each_entry_safe(page, tmp, &pagelist, lru) {
copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
@@ -1490,16 +1462,16 @@ static void collapse_shmem(struct mm_struct *mm,
put_page(page);
}

- local_irq_save(flags);
+ local_irq_disable();
__inc_node_page_state(new_page, NR_SHMEM_THPS);
if (nr_none) {
__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
}
- local_irq_restore(flags);
+ local_irq_enable();

/*
- * Remove pte page tables, so we can re-faulti
+ * Remove pte page tables, so we can re-fault
* the page as huge.
*/
retract_page_tables(mapping, start);
@@ -1514,37 +1486,37 @@ static void collapse_shmem(struct mm_struct *mm,

*hpage = NULL;
} else {
- /* Something went wrong: rollback changes to the radix-tree */
+ struct page *page;
+ /* Something went wrong: roll back page cache changes */
shmem_uncharge(mapping->host, nr_none);
- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= end)
- break;
+ xas_lock_irq(&xas);
+ xas_set(&xas, start);
+ xas_for_each(&xas, page, end - 1) {
page = list_first_entry_or_null(&pagelist,
struct page, lru);
- if (!page || iter.index < page->index) {
+ if (!page || xas.xa_index < page->index) {
if (!nr_none)
break;
nr_none--;
/* Put holes back where they were */
- radix_tree_delete(&mapping->pages, iter.index);
+ xas_store(&xas, NULL);
continue;
}

- VM_BUG_ON_PAGE(page->index != iter.index, page);
+ VM_BUG_ON_PAGE(page->index != xas.xa_index, page);

/* Unfreeze the page. */
list_del(&page->lru);
page_ref_unfreeze(page, 2);
- radix_tree_replace_slot(&mapping->pages, slot, page);
- slot = radix_tree_iter_resume(slot, &iter);
- xa_unlock_irq(&mapping->pages);
+ xas_store(&xas, page);
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
putback_lru_page(page);
unlock_page(page);
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
}
VM_BUG_ON(nr_none);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);

/* Unfreeze new_page, caller would take care about freeing it */
page_ref_unfreeze(new_page, 1);
--
2.15.1


2018-01-17 20:56:26

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 39/99] mm: Convert khugepaged_scan_shmem to XArray

From: Matthew Wilcox <[email protected]>

Slightly shorter and easier to read code.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/khugepaged.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 9f49d0cd61c2..15f1b2d81a69 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1534,8 +1534,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
pgoff_t start, struct page **hpage)
{
struct page *page = NULL;
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
int present, swap;
int node = NUMA_NO_NODE;
int result = SCAN_SUCCEED;
@@ -1544,17 +1543,11 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
swap = 0;
memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
- if (iter.index >= start + HPAGE_PMD_NR)
- break;
-
- page = radix_tree_deref_slot(slot);
- if (radix_tree_deref_retry(page)) {
- slot = radix_tree_iter_retry(&iter);
+ xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
+ if (xas_retry(&xas, page))
continue;
- }

- if (radix_tree_exception(page)) {
+ if (xa_is_value(page)) {
if (++swap > khugepaged_max_ptes_swap) {
result = SCAN_EXCEED_SWAP_PTE;
break;
@@ -1593,7 +1586,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
present++;

if (need_resched()) {
- slot = radix_tree_iter_resume(slot, &iter);
+ xas_pause(&xas);
cond_resched_rcu();
}
}
--
2.15.1


2018-01-17 20:57:13

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 36/99] mm: Convert page migration to XArray

From: Matthew Wilcox <[email protected]>

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/migrate.c | 41 ++++++++++++++++-------------------------
1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 75d19904dd9a..7122fec9b075 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -322,7 +322,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
page = migration_entry_to_page(entry);

/*
- * Once radix-tree replacement of page migration started, page_count
+ * Once page cache replacement of page migration started, page_count
* *must* be zero. And, we don't want to call wait_on_page_locked()
* against a page without get_page().
* So, we use get_page_unless_zero(), here. Even failed, page fault
@@ -437,10 +437,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
struct buffer_head *head, enum migrate_mode mode,
int extra_count)
{
+ XA_STATE(xas, &mapping->pages, page_index(page));
struct zone *oldzone, *newzone;
int dirty;
int expected_count = 1 + extra_count;
- void **pslot;

/*
* Device public or private pages have an extra refcount as they are
@@ -466,21 +466,16 @@ int migrate_page_move_mapping(struct address_space *mapping,
oldzone = page_zone(page);
newzone = page_zone(newpage);

- xa_lock_irq(&mapping->pages);
-
- pslot = radix_tree_lookup_slot(&mapping->pages,
- page_index(page));
+ xas_lock_irq(&xas);

expected_count += 1 + page_has_private(page);
- if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot,
- &mapping->pages.xa_lock) != page) {
- xa_unlock_irq(&mapping->pages);
+ if (page_count(page) != expected_count || xas_load(&xas) != page) {
+ xas_unlock_irq(&xas);
return -EAGAIN;
}

if (!page_ref_freeze(page, expected_count)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}

@@ -494,7 +489,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
if (mode == MIGRATE_ASYNC && head &&
!buffer_migrate_lock_buffers(head, mode)) {
page_ref_unfreeze(page, expected_count);
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}

@@ -522,7 +517,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
SetPageDirty(newpage);
}

- radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+ xas_store(&xas, newpage);

/*
* Drop cache reference from old page by unfreezing
@@ -531,7 +526,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
*/
page_ref_unfreeze(page, expected_count - 1);

- xa_unlock(&mapping->pages);
+ xas_unlock(&xas);
/* Leave irq disabled to prevent preemption while updating stats */

/*
@@ -571,22 +566,18 @@ EXPORT_SYMBOL(migrate_page_move_mapping);
int migrate_huge_page_move_mapping(struct address_space *mapping,
struct page *newpage, struct page *page)
{
+ XA_STATE(xas, &mapping->pages, page_index(page));
int expected_count;
- void **pslot;
-
- xa_lock_irq(&mapping->pages);
-
- pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));

+ xas_lock_irq(&xas);
expected_count = 2 + page_has_private(page);
- if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
- xa_unlock_irq(&mapping->pages);
+ if (page_count(page) != expected_count || xas_load(&xas) != page) {
+ xas_unlock_irq(&xas);
return -EAGAIN;
}

if (!page_ref_freeze(page, expected_count)) {
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
return -EAGAIN;
}

@@ -595,11 +586,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,

get_page(newpage);

- radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+ xas_store(&xas, newpage);

page_ref_unfreeze(page, expected_count - 1);

- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);

return MIGRATEPAGE_SUCCESS;
}
--
2.15.1


2018-01-17 20:57:15

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 35/99] mm: Convert __do_page_cache_readahead to XArray

From: Matthew Wilcox <[email protected]>

This one is trivial.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/readahead.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index f64b31b3a84a..66bcaffd47f0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -174,9 +174,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
if (page_offset > end_index)
break;

- rcu_read_lock();
- page = radix_tree_lookup(&mapping->pages, page_offset);
- rcu_read_unlock();
+ page = xa_load(&mapping->pages, page_offset);
if (page && !xa_is_value(page))
continue;

--
2.15.1


2018-01-17 21:01:26

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 37/99] mm: Convert huge_memory to XArray

From: Matthew Wilcox <[email protected]>

Quite a straightforward conversion.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/huge_memory.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f71dd3e7d8cd..5c275295bbd3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2379,7 +2379,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
if (PageAnon(head) && !PageSwapCache(head)) {
page_ref_inc(page_tail);
} else {
- /* Additional pin to radix tree */
+ /* Additional pin to page cache */
page_ref_add(page_tail, 2);
}

@@ -2450,13 +2450,13 @@ static void __split_huge_page(struct page *page, struct list_head *list,
ClearPageCompound(head);
/* See comment in __split_huge_page_tail() */
if (PageAnon(head)) {
- /* Additional pin to radix tree of swap cache */
+ /* Additional pin to swap cache */
if (PageSwapCache(head))
page_ref_add(head, 2);
else
page_ref_inc(head);
} else {
- /* Additional pin to radix tree */
+ /* Additional pin to page cache */
page_ref_add(head, 2);
xa_unlock(&head->mapping->pages);
}
@@ -2568,7 +2568,7 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
{
int extra_pins;

- /* Additional pins from radix tree */
+ /* Additional pins from page cache */
if (PageAnon(page))
extra_pins = PageSwapCache(page) ? HPAGE_PMD_NR : 0;
else
@@ -2664,17 +2664,14 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
spin_lock_irqsave(zone_lru_lock(page_zone(head)), flags);

if (mapping) {
- void **pslot;
+ XA_STATE(xas, &mapping->pages, page_index(head));

- xa_lock(&mapping->pages);
- pslot = radix_tree_lookup_slot(&mapping->pages,
- page_index(head));
/*
- * Check if the head page is present in radix tree.
+ * Check if the head page is present in page cache.
* We assume all tail are present too, if head is there.
*/
- if (radix_tree_deref_slot_protected(pslot,
- &mapping->pages.xa_lock) != head)
+ xa_lock(&mapping->pages);
+ if (xas_load(&xas) != head)
goto fail;
}

--
2.15.1


2018-01-17 21:01:53

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 32/99] mm: Convert truncate to XArray

From: Matthew Wilcox <[email protected]>

This is essentially xa_cmpxchg() with the locking handled above us,
and it doesn't have to handle replacing a NULL entry.
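
A hedged sketch of that open-coded compare-and-store; clear_entry() stands
in for __clear_shadow_entry() purely for illustration, and the caller is
assumed to hold the xa_lock:

	#include <linux/xarray.h>

	static void clear_entry(struct xarray *xa, unsigned long index,
				void *expected)
	{
		XA_STATE(xas, xa, index);

		if (xas_load(&xas) != expected)
			return;			/* somebody else got there first */
		xas_store(&xas, NULL);		/* storing NULL removes the entry */
	}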

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/truncate.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 69bb743dd7e5..70323c347298 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -33,15 +33,12 @@
static inline void __clear_shadow_entry(struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_node *node;
- void **slot;
+ XA_STATE(xas, &mapping->pages, index);

- if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
+ xas_set_update(&xas, workingset_update_node);
+ if (xas_load(&xas) != entry)
return;
- if (*slot != entry)
- return;
- __radix_tree_replace(&mapping->pages, node, slot, NULL,
- workingset_update_node);
+ xas_store(&xas, NULL);
mapping->nrexceptional--;
}

@@ -746,10 +743,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
index++;
}
/*
- * For DAX we invalidate page tables after invalidating radix tree. We
+ * For DAX we invalidate page tables after invalidating page cache. We
* could invalidate page tables while invalidating each entry however
* that would be expensive. And doing range unmapping before doesn't
- * work as we have no cheap way to find whether radix tree entry didn't
+ * work as we have no cheap way to find whether page cache entry didn't
* get remapped later.
*/
if (dax_mapping(mapping)) {
--
2.15.1


2018-01-17 21:02:11

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 33/99] mm: Convert add_to_swap_cache to XArray

From: Matthew Wilcox <[email protected]>

Combine __add_to_swap_cache and add_to_swap_cache into one function
since there is no more need to preload.
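
Preloading is unnecessary because the XArray has a standard allocation
loop: attempt the store under the lock, and if the array needed memory,
xas_nomem() allocates it after the lock is dropped and asks for a retry.
A minimal sketch of the idiom; store_entry() is illustrative, not part of
this patch:

	#include <linux/xarray.h>

	static int store_entry(struct xarray *xa, unsigned long index,
			       void *item, gfp_t gfp)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock_irq(&xas);
			xas_store(&xas, item);	/* may set a -ENOMEM error */
			xas_unlock_irq(&xas);
		} while (xas_nomem(&xas, gfp));	/* allocates; true means retry */

		return xas_error(&xas);
	}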

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/swap_state.c | 93 ++++++++++++++++++---------------------------------------
1 file changed, 29 insertions(+), 64 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3f95e8fc4cb2..a57b5ad4c503 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -107,14 +107,15 @@ void show_swap_cache_info(void)
}

/*
- * __add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
+ * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
* but sets SwapCache flag and private instead of mapping and index.
*/
-int __add_to_swap_cache(struct page *page, swp_entry_t entry)
+int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
{
- int error, i, nr = hpage_nr_pages(page);
- struct address_space *address_space;
+ struct address_space *address_space = swap_address_space(entry);
pgoff_t idx = swp_offset(entry);
+ XA_STATE(xas, &address_space->pages, idx);
+ unsigned long i, nr = 1UL << compound_order(page);

VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -123,50 +124,30 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
page_ref_add(page, nr);
SetPageSwapCache(page);

- address_space = swap_address_space(entry);
- xa_lock_irq(&address_space->pages);
- for (i = 0; i < nr; i++) {
- set_page_private(page + i, entry.val + i);
- error = radix_tree_insert(&address_space->pages,
- idx + i, page + i);
- if (unlikely(error))
- break;
- }
- if (likely(!error)) {
+ do {
+ xas_lock_irq(&xas);
+ xas_create_range(&xas, idx + nr - 1);
+ if (xas_error(&xas))
+ goto unlock;
+ for (i = 0; i < nr; i++) {
+ VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+ set_page_private(page + i, entry.val + i);
+ xas_store(&xas, page + i);
+ xas_next(&xas);
+ }
address_space->nrpages += nr;
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
ADD_CACHE_INFO(add_total, nr);
- } else {
- /*
- * Only the context which have set SWAP_HAS_CACHE flag
- * would call add_to_swap_cache().
- * So add_to_swap_cache() doesn't returns -EEXIST.
- */
- VM_BUG_ON(error == -EEXIST);
- set_page_private(page + i, 0UL);
- while (i--) {
- radix_tree_delete(&address_space->pages, idx + i);
- set_page_private(page + i, 0UL);
- }
- ClearPageSwapCache(page);
- page_ref_sub(page, nr);
- }
- xa_unlock_irq(&address_space->pages);
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp));

- return error;
-}
-
-
-int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp_mask)
-{
- int error;
+ if (!xas_error(&xas))
+ return 0;

- error = radix_tree_maybe_preload_order(gfp_mask, compound_order(page));
- if (!error) {
- error = __add_to_swap_cache(page, entry);
- radix_tree_preload_end();
- }
- return error;
+ ClearPageSwapCache(page);
+ page_ref_sub(page, nr);
+ return xas_error(&xas);
}

/*
@@ -220,7 +201,7 @@ int add_to_swap(struct page *page)
goto fail;

/*
- * Radix-tree node allocations from PF_MEMALLOC contexts could
+ * XArray node allocations from PF_MEMALLOC contexts could
* completely exhaust the page allocator. __GFP_NOMEMALLOC
* stops emergency reserves from being allocated.
*
@@ -232,7 +213,6 @@ int add_to_swap(struct page *page)
*/
err = add_to_swap_cache(page, entry,
__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
- /* -ENOMEM radix-tree allocation failure */
if (err)
/*
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
@@ -400,19 +380,11 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
break; /* Out of memory */
}

- /*
- * call radix_tree_preload() while we can wait.
- */
- err = radix_tree_maybe_preload(gfp_mask & GFP_KERNEL);
- if (err)
- break;
-
/*
* Swap entry may have been freed since our caller observed it.
*/
err = swapcache_prepare(entry);
if (err == -EEXIST) {
- radix_tree_preload_end();
/*
* We might race against get_swap_page() and stumble
* across a SWAP_HAS_CACHE swap_map entry whose page
@@ -420,26 +392,19 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
*/
cond_resched();
continue;
- }
- if (err) { /* swp entry is obsolete ? */
- radix_tree_preload_end();
+ } else if (err) /* swp entry is obsolete ? */
break;
- }

- /* May fail (-ENOMEM) if radix-tree node allocation failed. */
+ /* May fail (-ENOMEM) if XArray node allocation failed. */
__SetPageLocked(new_page);
__SetPageSwapBacked(new_page);
- err = __add_to_swap_cache(new_page, entry);
+ err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
if (likely(!err)) {
- radix_tree_preload_end();
- /*
- * Initiate read into locked page and return.
- */
+ /* Initiate read into locked page */
lru_cache_add_anon(new_page);
*new_page_allocated = true;
return new_page;
}
- radix_tree_preload_end();
__ClearPageLocked(new_page);
/*
* add_to_swap_cache() doesn't return -EEXIST, so we can safely
--
2.15.1


2018-01-17 21:03:03

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 34/99] mm: Convert delete_from_swap_cache to XArray

From: Matthew Wilcox <[email protected]>

Both callers of __delete_from_swap_cache have the swp_entry_t already,
so pass that in to make constructing the XA_STATE easier.
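
XA_STATE() initialises the cursor at declaration time, so everything it
depends on must already be in scope. With the entry as a parameter, the
callee's declarations fall out naturally; abridged from the patch below:

	struct address_space *address_space = swap_address_space(entry);
	pgoff_t idx = swp_offset(entry);
	XA_STATE(xas, &address_space->pages, idx);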

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 5 +++--
mm/swap_state.c | 24 ++++++++++--------------
mm/vmscan.c | 2 +-
3 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index e519554730fa..8eb99229dbc0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -408,7 +408,7 @@ extern void show_swap_cache_info(void);
extern int add_to_swap(struct page *page);
extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
-extern void __delete_from_swap_cache(struct page *);
+extern void __delete_from_swap_cache(struct page *, swp_entry_t entry);
extern void delete_from_swap_cache(struct page *);
extern void free_page_and_swap_cache(struct page *);
extern void free_pages_and_swap_cache(struct page **, int);
@@ -583,7 +583,8 @@ static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
return -1;
}

-static inline void __delete_from_swap_cache(struct page *page)
+static inline void __delete_from_swap_cache(struct page *page,
+ swp_entry_t entry)
{
}

diff --git a/mm/swap_state.c b/mm/swap_state.c
index a57b5ad4c503..219e3b4f09e6 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -154,23 +154,22 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
* This must be called only on pages that have
* been verified to be in the swap cache.
*/
-void __delete_from_swap_cache(struct page *page)
+void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
{
- struct address_space *address_space;
+ struct address_space *address_space = swap_address_space(entry);
int i, nr = hpage_nr_pages(page);
- swp_entry_t entry;
- pgoff_t idx;
+ pgoff_t idx = swp_offset(entry);
+ XA_STATE(xas, &address_space->pages, idx);

VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(!PageSwapCache(page), page);
VM_BUG_ON_PAGE(PageWriteback(page), page);

- entry.val = page_private(page);
- address_space = swap_address_space(entry);
- idx = swp_offset(entry);
for (i = 0; i < nr; i++) {
- radix_tree_delete(&address_space->pages, idx + i);
+ void *entry = xas_store(&xas, NULL);
+ VM_BUG_ON_PAGE(entry != page + i, entry);
set_page_private(page + i, 0);
+ xas_next(&xas);
}
ClearPageSwapCache(page);
address_space->nrpages -= nr;
@@ -246,14 +245,11 @@ int add_to_swap(struct page *page)
*/
void delete_from_swap_cache(struct page *page)
{
- swp_entry_t entry;
- struct address_space *address_space;
-
- entry.val = page_private(page);
+ swp_entry_t entry = { .val = page_private(page) };
+ struct address_space *address_space = swap_address_space(entry);

- address_space = swap_address_space(entry);
xa_lock_irq(&address_space->pages);
- __delete_from_swap_cache(page);
+ __delete_from_swap_cache(page, entry);
xa_unlock_irq(&address_space->pages);

put_swap_page(page, entry);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2fa675a2db31..51d437a18db8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -718,7 +718,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
if (PageSwapCache(page)) {
swp_entry_t swap = { .val = page_private(page) };
mem_cgroup_swapout(page, swap);
- __delete_from_swap_cache(page);
+ __delete_from_swap_cache(page, swap);
xa_unlock_irqrestore(&mapping->pages, flags);
put_swap_page(page, swap);
} else {
--
2.15.1


2018-01-17 21:04:01

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 31/99] mm: Convert workingset to XArray

From: Matthew Wilcox <[email protected]>

We construct a fake XA_STATE and use it to delete the node with xas_store()
rather than adding a special function for this unique use case.
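
The trick is to point the xa_state at the slot in the parent which holds
this node; storing NULL through it then deletes the whole node. A
simplified sketch using the xa_state fields as this series defines them
(the real code also wraps node->parent in rcu_dereference_protected()):

	XA_STATE(xas, NULL, 0);		/* filled in by hand below */

	xas.xa = node->array;
	xas.xa_node = node->parent;	/* the parent owns the slot... */
	xas.xa_offset = node->offset;	/* ...at this offset */
	xas.xa_update = workingset_update_node;
	xas_store(&xas, NULL);		/* frees the node and its entries */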

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 9 ---------
mm/workingset.c | 51 ++++++++++++++++++++++-----------------------------
2 files changed, 22 insertions(+), 38 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 394957963c4b..e519554730fa 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -306,15 +306,6 @@ void workingset_update_node(struct xa_node *node);
xas_set_update(xas, workingset_update_node); \
} while (0)

-/* Returns workingset_update_node() if the mapping has shadow entries. */
-#define workingset_lookup_update(mapping) \
-({ \
- radix_tree_update_node_t __helper = workingset_update_node; \
- if (dax_mapping(mapping) || shmem_mapping(mapping)) \
- __helper = NULL; \
- __helper; \
-})
-
/* linux/mm/page_alloc.c */
extern unsigned long totalram_pages;
extern unsigned long totalreserve_pages;
diff --git a/mm/workingset.c b/mm/workingset.c
index 91b6e16ad4c1..f7ca6ea5d8b1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -148,7 +148,7 @@
* and activations is maintained (node->inactive_age).
*
* On eviction, a snapshot of this counter (along with some bits to
- * identify the node) is stored in the now empty page cache radix tree
+ * identify the node) is stored in the now empty page cache
* slot of the evicted page. This is called a shadow entry.
*
* On cache misses for which there are shadow entries, an eligible
@@ -162,7 +162,7 @@

/*
* Eviction timestamps need to be able to cover the full range of
- * actionable refaults. However, bits are tight in the radix tree
+ * actionable refaults. However, bits are tight in the xarray
* entry, and after storing the identifier for the lruvec there might
* not be enough left to represent every single actionable refault. In
* that case, we have to sacrifice granularity for distance, and group
@@ -338,7 +338,7 @@ void workingset_activation(struct page *page)

static struct list_lru shadow_nodes;

-void workingset_update_node(struct radix_tree_node *node)
+void workingset_update_node(struct xa_node *node)
{
/*
* Track non-empty nodes that contain only shadow entries;
@@ -370,7 +370,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
local_irq_enable();

/*
- * Approximate a reasonable limit for the radix tree nodes
+ * Approximate a reasonable limit for the nodes
* containing shadow entries. We don't need to keep more
* shadow entries than possible pages on the active list,
* since refault distances bigger than that are dismissed.
@@ -385,11 +385,11 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
* worst-case density of 1/8th. Below that, not all eligible
* refaults can be detected anymore.
*
- * On 64-bit with 7 radix_tree_nodes per page and 64 slots
+ * On 64-bit with 7 xa_nodes per page and 64 slots
* each, this will reclaim shadow entries when they consume
* ~1.8% of available memory:
*
- * PAGE_SIZE / radix_tree_nodes / node_entries * 8 / PAGE_SIZE
+ * PAGE_SIZE / xa_nodes / node_entries * 8 / PAGE_SIZE
*/
if (sc->memcg) {
cache = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid,
@@ -398,7 +398,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
cache = node_page_state(NODE_DATA(sc->nid), NR_ACTIVE_FILE) +
node_page_state(NODE_DATA(sc->nid), NR_INACTIVE_FILE);
}
- max_nodes = cache >> (RADIX_TREE_MAP_SHIFT - 3);
+ max_nodes = cache >> (XA_CHUNK_SHIFT - 3);

if (nodes <= max_nodes)
return 0;
@@ -408,11 +408,11 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
static enum lru_status shadow_lru_isolate(struct list_head *item,
struct list_lru_one *lru,
spinlock_t *lru_lock,
- void *arg)
+ void *arg) __must_hold(lru_lock)
{
+ XA_STATE(xas, NULL, 0);
struct address_space *mapping;
- struct radix_tree_node *node;
- unsigned int i;
+ struct xa_node *node;
int ret;

/*
@@ -420,7 +420,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* the shadow node LRU under the mapping->pages.xa_lock and the
* lru_lock. Because the page cache tree is emptied before
* the inode can be destroyed, holding the lru_lock pins any
- * address_space that has radix tree nodes on the LRU.
+ * address_space that has nodes on the LRU.
*
* We can then safely transition to the mapping->pages.xa_lock to
* pin only the address_space of the particular node we want
@@ -449,25 +449,18 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
goto out_invalid;
if (WARN_ON_ONCE(node->count != node->nr_values))
goto out_invalid;
- for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
- if (node->slots[i]) {
- if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
- goto out_invalid;
- if (WARN_ON_ONCE(!node->nr_values))
- goto out_invalid;
- if (WARN_ON_ONCE(!mapping->nrexceptional))
- goto out_invalid;
- node->slots[i] = NULL;
- node->nr_values--;
- node->count--;
- mapping->nrexceptional--;
- }
- }
- if (WARN_ON_ONCE(node->nr_values))
- goto out_invalid;
+ mapping->nrexceptional -= node->nr_values;
+ xas.xa = node->array;
+ xas.xa_node = rcu_dereference_protected(node->parent,
+ lockdep_is_held(&mapping->pages.xa_lock));
+ xas.xa_offset = node->offset;
+ xas.xa_update = workingset_update_node;
+ /*
+ * We could store a shadow entry here which was the minimum of the
+ * shadow entries we were tracking ...
+ */
+ xas_store(&xas, NULL);
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
- __radix_tree_delete_node(&mapping->pages, node,
- workingset_lookup_update(mapping));

out_invalid:
xa_unlock(&mapping->pages);
--
2.15.1


2018-01-17 21:04:20

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 27/99] page cache: Convert delete_batch to XArray

From: Matthew Wilcox <[email protected]>

Rename the function from page_cache_tree_delete_batch to just
page_cache_delete_batch.

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 317a89df1945..d2a0031d61f5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -276,7 +276,7 @@ void delete_from_page_cache(struct page *page)
EXPORT_SYMBOL(delete_from_page_cache);

/*
- * page_cache_tree_delete_batch - delete several pages from page cache
+ * page_cache_delete_batch - delete several pages from page cache
* @mapping: the mapping to which pages belong
* @pvec: pagevec with pages to delete
*
@@ -289,23 +289,18 @@ EXPORT_SYMBOL(delete_from_page_cache);
*
* The function expects xa_lock to be held.
*/
-static void
-page_cache_tree_delete_batch(struct address_space *mapping,
+static void page_cache_delete_batch(struct address_space *mapping,
struct pagevec *pvec)
{
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, pvec->pages[0]->index);
int total_pages = 0;
int i = 0, tail_pages = 0;
struct page *page;
- pgoff_t start;

- start = pvec->pages[0]->index;
- radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
+ mapping_set_update(&xas, mapping);
+ xas_for_each(&xas, page, ULONG_MAX) {
if (i >= pagevec_count(pvec) && !tail_pages)
break;
- page = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
if (xa_is_value(page))
continue;
if (!tail_pages) {
@@ -314,8 +309,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
* have our pages locked so they are protected from
* being removed.
*/
- if (page != pvec->pages[i])
+ if (page != pvec->pages[i]) {
+ VM_BUG_ON_PAGE(page->index >
+ pvec->pages[i]->index, page);
continue;
+ }
WARN_ON_ONCE(!PageLocked(page));
if (PageTransHuge(page) && !PageHuge(page))
tail_pages = HPAGE_PMD_NR - 1;
@@ -326,11 +324,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
*/
i++;
} else {
+ VM_BUG_ON_PAGE(page->index + HPAGE_PMD_NR - tail_pages
+ != pvec->pages[i]->index, page);
tail_pages--;
}
- radix_tree_clear_tags(&mapping->pages, iter.node, slot);
- __radix_tree_replace(&mapping->pages, iter.node, slot, NULL,
- workingset_lookup_update(mapping));
+ xas_store(&xas, NULL);
total_pages++;
}
mapping->nrpages -= total_pages;
@@ -351,7 +349,7 @@ void delete_from_page_cache_batch(struct address_space *mapping,

unaccount_page_cache_page(mapping, pvec->pages[i]);
}
- page_cache_tree_delete_batch(mapping, pvec);
+ page_cache_delete_batch(mapping, pvec);
xa_unlock_irqrestore(&mapping->pages, flags);

for (i = 0; i < pagevec_count(pvec); i++)
--
2.15.1


2018-01-17 21:04:28

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 30/99] mm: Convert page-writeback to XArray

From: Matthew Wilcox <[email protected]>

Includes moving mapping_tagged() to fs.h as a static inline, and
changing it to return bool.

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/fs.h | 17 +++++++++------
mm/page-writeback.c | 62 +++++++++++++++++++----------------------------------
2 files changed, 32 insertions(+), 47 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index e4345c13e237..c58bc3c619bf 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -470,15 +470,18 @@ struct block_device {
struct mutex bd_fsfreeze_mutex;
} __randomize_layout;

+/* XArray tags, for tagging dirty and writeback pages in the pagecache. */
+#define PAGECACHE_TAG_DIRTY XA_TAG_0
+#define PAGECACHE_TAG_WRITEBACK XA_TAG_1
+#define PAGECACHE_TAG_TOWRITE XA_TAG_2
+
/*
- * Radix-tree tags, for tagging dirty and writeback pages within the pagecache
- * radix trees
+ * Returns true if any of the pages in the mapping are marked with the tag.
*/
-#define PAGECACHE_TAG_DIRTY 0
-#define PAGECACHE_TAG_WRITEBACK 1
-#define PAGECACHE_TAG_TOWRITE 2
-
-int mapping_tagged(struct address_space *mapping, int tag);
+static inline bool mapping_tagged(struct address_space *mapping, xa_tag_t tag)
+{
+ return xa_tagged(&mapping->pages, tag);
+}

static inline void i_mmap_lock_write(struct address_space *mapping)
{
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 588ce729d199..0407436a8305 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2098,33 +2098,25 @@ void __init page_writeback_init(void)
* dirty pages in the file (thus it is important for this function to be quick
* so that it can tag pages faster than a dirtying process can create them).
*/
-/*
- * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
- */
void tag_pages_for_writeback(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
-#define WRITEBACK_TAG_BATCH 4096
- unsigned long tagged = 0;
- struct radix_tree_iter iter;
- void **slot;
+ XA_STATE(xas, &mapping->pages, start);
+ unsigned int tagged = 0;
+ void *page;

- xa_lock_irq(&mapping->pages);
- radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
- PAGECACHE_TAG_DIRTY) {
- if (iter.index > end)
- break;
- radix_tree_iter_tag_set(&mapping->pages, &iter,
- PAGECACHE_TAG_TOWRITE);
- tagged++;
- if ((tagged % WRITEBACK_TAG_BATCH) != 0)
+ xas_lock_irq(&xas);
+ xas_for_each_tag(&xas, page, end, PAGECACHE_TAG_DIRTY) {
+ xas_set_tag(&xas, PAGECACHE_TAG_TOWRITE);
+ if (++tagged % XA_CHECK_SCHED)
continue;
- slot = radix_tree_iter_resume(slot, &iter);
- xa_unlock_irq(&mapping->pages);
+
+ xas_pause(&xas);
+ xas_unlock_irq(&xas);
cond_resched();
- xa_lock_irq(&mapping->pages);
+ xas_lock_irq(&xas);
}
- xa_unlock_irq(&mapping->pages);
+ xas_unlock_irq(&xas);
}
EXPORT_SYMBOL(tag_pages_for_writeback);

@@ -2164,7 +2156,7 @@ int write_cache_pages(struct address_space *mapping,
pgoff_t done_index;
int cycled;
int range_whole = 0;
- int tag;
+ xa_tag_t tag;

pagevec_init(&pvec);
if (wbc->range_cyclic) {
@@ -2445,7 +2437,7 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,

/*
* For address_spaces which do not use buffers. Just tag the page as dirty in
- * its radix tree.
+ * the xarray.
*
* This is also used when a single buffer is being dirtied: we want to set the
* page dirty in that case, but not all the buffers. This is a "bottom-up"
@@ -2471,7 +2463,7 @@ int __set_page_dirty_nobuffers(struct page *page)
BUG_ON(page_mapping(page) != mapping);
WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->pages, page_index(page),
+ __xa_set_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);
@@ -2634,13 +2626,13 @@ EXPORT_SYMBOL(__cancel_dirty_page);
* Returns true if the page was previously dirty.
*
* This is for preparing to put the page under writeout. We leave the page
- * tagged as dirty in the radix tree so that a concurrent write-for-sync
+ * tagged as dirty in the xarray so that a concurrent write-for-sync
* can discover it via a PAGECACHE_TAG_DIRTY walk. The ->writepage
* implementation will run either set_page_writeback() or set_page_dirty(),
- * at which stage we bring the page's dirty flag and radix-tree dirty tag
+ * at which stage we bring the page's dirty flag and xarray dirty tag
* back into sync.
*
- * This incoherency between the page's dirty flag and radix-tree tag is
+ * This incoherency between the page's dirty flag and xarray tag is
* unfortunate, but it only exists while the page is locked.
*/
int clear_page_dirty_for_io(struct page *page)
@@ -2721,7 +2713,7 @@ int test_clear_page_writeback(struct page *page)
xa_lock_irqsave(&mapping->pages, flags);
ret = TestClearPageWriteback(page);
if (ret) {
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi)) {
struct bdi_writeback *wb = inode_to_wb(inode);
@@ -2773,7 +2765,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
on_wblist = mapping_tagged(mapping,
PAGECACHE_TAG_WRITEBACK);

- radix_tree_tag_set(&mapping->pages, page_index(page),
+ __xa_set_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi))
inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
@@ -2787,10 +2779,10 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
sb_mark_inode_writeback(mapping->host);
}
if (!PageDirty(page))
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
if (!keep_write)
- radix_tree_tag_clear(&mapping->pages, page_index(page),
+ __xa_clear_tag(&mapping->pages, page_index(page),
PAGECACHE_TAG_TOWRITE);
xa_unlock_irqrestore(&mapping->pages, flags);
} else {
@@ -2806,16 +2798,6 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
}
EXPORT_SYMBOL(__test_set_page_writeback);

-/*
- * Return true if any of the pages in the mapping are marked with the
- * passed tag.
- */
-int mapping_tagged(struct address_space *mapping, int tag)
-{
- return radix_tree_tagged(&mapping->pages, tag);
-}
-EXPORT_SYMBOL(mapping_tagged);
-
/**
* wait_for_stable_page() - wait for writeback to finish, if necessary.
* @page: The page to wait on.
--
2.15.1


2018-01-17 21:05:10

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 29/99] page cache: Convert filemap_range_has_page to XArray

From: Matthew Wilcox <[email protected]>

Instead of calling find_get_pages_range() and putting any reference,
just use xa_find() to look for a page in the right range.
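
xa_find() takes the RCU lock itself and returns the first matching entry
in the range, so there is no reference to take or drop. A hedged sketch;
has_entry() is illustrative:

	#include <linux/xarray.h>

	static bool has_entry(struct xarray *xa, unsigned long first,
			      unsigned long last)
	{
		unsigned long index = first;

		/* XA_PRESENT matches any entry; index is moved to it */
		return xa_find(xa, &index, last, XA_PRESENT) != NULL;
	}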

Signed-off-by: Matthew Wilcox <[email protected]>
---
mm/filemap.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2536fcacb5bc..cd01f353cf6a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -461,18 +461,11 @@ bool filemap_range_has_page(struct address_space *mapping,
{
pgoff_t index = start_byte >> PAGE_SHIFT;
pgoff_t end = end_byte >> PAGE_SHIFT;
- struct page *page;

if (end_byte < start_byte)
return false;

- if (mapping->nrpages == 0)
- return false;
-
- if (!find_get_pages_range(mapping, &index, end, 1, &page))
- return false;
- put_page(page);
- return true;
+ return xa_find(&mapping->pages, &index, end, XA_PRESENT);
}
EXPORT_SYMBOL(filemap_range_has_page);

--
2.15.1


2018-01-17 21:06:01

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 22/99] page cache: Convert hole search to XArray

From: Matthew Wilcox <[email protected]>

The page cache offers the ability to search for a miss in the previous or
next N locations. Rather than teach the XArray about the page cache's
definition of a miss, use xas_prev() and xas_next() to search the page
array. This should be more efficient as it does not have to start the
lookup from the top for each index.
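
Because the xa_state remembers its position in the tree, each step is
usually a move to an adjacent slot rather than a fresh descent from the
root. A sketch of the forward search (next_gap() is illustrative; note
that the first xas_next() on a fresh state loads the starting index
itself before later calls step forward):

	#include <linux/xarray.h>

	static unsigned long next_gap(struct xarray *xa, unsigned long index,
				      unsigned long max_scan)
	{
		XA_STATE(xas, xa, index);

		while (max_scan--) {
			void *entry = xas_next(&xas);
			if (!entry || xa_is_value(entry))
				break;		/* found a gap */
			if (xas.xa_index == 0)
				break;		/* wrapped around */
		}
		return xas.xa_index;
	}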

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 2 +-
include/linux/pagemap.h | 4 +-
mm/filemap.c | 110 ++++++++++++++++++---------------------
mm/readahead.c | 4 +-
4 files changed, 55 insertions(+), 65 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 995d707537da..7bd643538cff 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -826,7 +826,7 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
end = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
if (end != inode->i_mapping->nrpages) {
rcu_read_lock();
- end = page_cache_next_hole(mapping, idx + 1, ULONG_MAX);
+ end = page_cache_next_gap(mapping, idx + 1, ULONG_MAX);
rcu_read_unlock();
}

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 80a6149152d4..0db127c3ccac 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -241,9 +241,9 @@ static inline gfp_t readahead_gfp_mask(struct address_space *x)

typedef int filler_t(void *, struct page *);

-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);

#define FGP_ACCESSED 0x00000001
diff --git a/mm/filemap.c b/mm/filemap.c
index 309be963140c..146e8ec16ec0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1327,86 +1327,76 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
}

/**
- * page_cache_next_hole - find the next hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the
- * lowest indexed hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'return - index >=
- * max_scan' will be true). In rare cases of index wrap-around, 0 will
- * be returned.
- *
- * page_cache_next_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 5, then subsequently a hole is created at
- * index 10, page_cache_next_hole covering both indexes may return 10
- * if called under rcu_read_lock.
+ * page_cache_next_gap() - Find the next gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [index, min(index + max_scan - 1, ULONG_MAX)] for the
+ * gap with the lowest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 5, then subsequently a gap is
+ * created at index 10, page_cache_next_gap covering both indices may
+ * return 10 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'return - index >= max_scan' will be true).
+ * In the rare case of index wrap-around, 0 will be returned.
*/
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan)
{
- unsigned long i;
+ XA_STATE(xas, &mapping->pages, index);

- for (i = 0; i < max_scan; i++) {
- struct page *page;
-
- page = radix_tree_lookup(&mapping->pages, index);
- if (!page || xa_is_value(page))
+ while (max_scan--) {
+ void *entry = xas_next(&xas);
+ if (!entry || xa_is_value(entry))
break;
- index++;
- if (index == 0)
+ if (xas.xa_index == 0)
break;
}

- return index;
+ return xas.xa_index;
}
-EXPORT_SYMBOL(page_cache_next_hole);
+EXPORT_SYMBOL(page_cache_next_gap);

/**
- * page_cache_prev_hole - find the prev hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search backwards in the range [max(index-max_scan+1, 0), index] for
- * the first hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'index - return >=
- * max_scan' will be true). In rare cases of wrap-around, ULONG_MAX
- * will be returned.
- *
- * page_cache_prev_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 10, then subsequently a hole is created at
- * index 5, page_cache_prev_hole covering both indexes may return 5 if
- * called under rcu_read_lock.
+ * page_cache_prev_gap() - Find the previous gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [max(index - max_scan + 1, 0), index] for the
+ * gap with the highest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 10, then subsequently a gap is
+ * created at index 5, page_cache_prev_gap() covering both indices may
+ * return 5 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'index - return >= max_scan' will be true).
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
*/
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan)
{
- unsigned long i;
-
- for (i = 0; i < max_scan; i++) {
- struct page *page;
+ XA_STATE(xas, &mapping->pages, index);

- page = radix_tree_lookup(&mapping->pages, index);
- if (!page || xa_is_value(page))
+ while (max_scan--) {
+ void *entry = xas_prev(&xas);
+ if (!entry || xa_is_value(entry))
break;
- index--;
- if (index == ULONG_MAX)
+ if (xas.xa_index == ULONG_MAX)
break;
}

- return index;
+ return xas.xa_index;
}
-EXPORT_SYMBOL(page_cache_prev_hole);
+EXPORT_SYMBOL(page_cache_prev_gap);

/**
* find_get_entry - find and get a page cache entry
diff --git a/mm/readahead.c b/mm/readahead.c
index 4851f002605f..f64b31b3a84a 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -329,7 +329,7 @@ static pgoff_t count_history_pages(struct address_space *mapping,
pgoff_t head;

rcu_read_lock();
- head = page_cache_prev_hole(mapping, offset - 1, max);
+ head = page_cache_prev_gap(mapping, offset - 1, max);
rcu_read_unlock();

return offset - 1 - head;
@@ -417,7 +417,7 @@ ondemand_readahead(struct address_space *mapping,
pgoff_t start;

rcu_read_lock();
- start = page_cache_next_hole(mapping, offset + 1, max_pages);
+ start = page_cache_next_gap(mapping, offset + 1, max_pages);
rcu_read_unlock();

if (!start || start - offset > max_pages)
--
2.15.1


2018-01-17 21:06:11

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 23/99] page cache: Add page_cache_range_empty function

From: Matthew Wilcox <[email protected]>

btrfs has its own custom function for determining whether the page cache
has any pages in a particular range. Move this functionality to the
page cache, and call it from btrfs.
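
A hedged usage sketch from a filesystem's point of view, mirroring the
btrfs inline added below (start and end are byte offsets):

	if (!page_cache_range_empty(inode->i_mapping, start >> PAGE_SHIFT,
					end >> PAGE_SHIFT)) {
		/* at least one page, not merely a shadow entry, is cached */
	}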

Signed-off-by: Matthew Wilcox <[email protected]>
---
fs/btrfs/btrfs_inode.h | 7 ++++-
fs/btrfs/inode.c | 70 -------------------------------------------------
include/linux/pagemap.h | 2 ++
mm/filemap.c | 26 ++++++++++++++++++
4 files changed, 34 insertions(+), 71 deletions(-)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 63f0ccc92a71..a48bd6e0a0bb 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -365,6 +365,11 @@ static inline void btrfs_print_data_csum_error(struct btrfs_inode *inode,
logical_start, csum, csum_expected, mirror_num);
}

-bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end);
+static inline bool btrfs_page_exists_in_range(struct inode *inode,
+ loff_t start, loff_t end)
+{
+ return !page_cache_range_empty(inode->i_mapping, start >> PAGE_SHIFT,
+ end >> PAGE_SHIFT);
+}

#endif
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index dbdb5bf6bca1..d7d2c556d5a2 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7541,76 +7541,6 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
return ret;
}

-bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
-{
- struct radix_tree_root *root = &inode->i_mapping->pages;
- bool found = false;
- void **pagep = NULL;
- struct page *page = NULL;
- unsigned long start_idx;
- unsigned long end_idx;
-
- start_idx = start >> PAGE_SHIFT;
-
- /*
- * end is the last byte in the last page. end == start is legal
- */
- end_idx = end >> PAGE_SHIFT;
-
- rcu_read_lock();
-
- /* Most of the code in this while loop is lifted from
- * find_get_page. It's been modified to begin searching from a
- * page and return just the first page found in that range. If the
- * found idx is less than or equal to the end idx then we know that
- * a page exists. If no pages are found or if those pages are
- * outside of the range then we're fine (yay!) */
- while (page == NULL &&
- radix_tree_gang_lookup_slot(root, &pagep, NULL, start_idx, 1)) {
- page = radix_tree_deref_slot(pagep);
- if (unlikely(!page))
- break;
-
- if (radix_tree_exception(page)) {
- if (radix_tree_deref_retry(page)) {
- page = NULL;
- continue;
- }
- /*
- * Otherwise, shmem/tmpfs must be storing a swap entry
- * here so return it without attempting to raise page
- * count.
- */
- page = NULL;
- break; /* TODO: Is this relevant for this use case? */
- }
-
- if (!page_cache_get_speculative(page)) {
- page = NULL;
- continue;
- }
-
- /*
- * Has the page moved?
- * This is part of the lockless pagecache protocol. See
- * include/linux/pagemap.h for details.
- */
- if (unlikely(page != *pagep)) {
- put_page(page);
- page = NULL;
- }
- }
-
- if (page) {
- if (page->index <= end_idx)
- found = true;
- put_page(page);
- }
-
- rcu_read_unlock();
- return found;
-}
-
static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend,
struct extent_state **cached_state, int writing)
{
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0db127c3ccac..34d4fa3ad1c5 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -245,6 +245,8 @@ pgoff_t page_cache_next_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
pgoff_t page_cache_prev_gap(struct address_space *mapping,
pgoff_t index, unsigned long max_scan);
+bool page_cache_range_empty(struct address_space *mapping,
+ pgoff_t index, pgoff_t max);

#define FGP_ACCESSED 0x00000001
#define FGP_LOCK 0x00000002
diff --git a/mm/filemap.c b/mm/filemap.c
index 146e8ec16ec0..f1b4480723dd 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1398,6 +1398,32 @@ pgoff_t page_cache_prev_gap(struct address_space *mapping,
}
EXPORT_SYMBOL(page_cache_prev_gap);

+bool page_cache_range_empty(struct address_space *mapping, pgoff_t index,
+ pgoff_t max)
+{
+ struct page *page;
+ XA_STATE(xas, &mapping->pages, index);
+
+ rcu_read_lock();
+ for (;;) {
+ page = xas_find(&xas, max);
+ if (xas_retry(&xas, page))
+ continue;
+ /* Shadow entries don't count */
+ if (xa_is_value(page))
+ continue;
+ /*
+ * We don't need to pin this page; it is enough to know
+ * that there was a page here recently.
+ */
+ break;
+ }
+ rcu_read_unlock();
+
+ return page == NULL;
+}
+EXPORT_SYMBOL_GPL(page_cache_range_empty);
+
/**
* find_get_entry - find and get a page cache entry
* @mapping: the address_space to search
--
2.15.1


2018-01-17 21:06:57

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 24/99] page cache: Add and replace pages using the XArray

From: Matthew Wilcox <[email protected]>

Use the XArray APIs to add and replace pages in the page cache. This
removes two uses of the radix tree preload API and makes the code
significantly shorter.
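
Insertion uses the same lock/store/xas_nomem() loop as the other
conversions, with the -EEXIST check done inline instead of relying on a
preloaded radix_tree_insert(). A minimal sketch; insert_once() is
illustrative, not part of this patch:

	#include <linux/xarray.h>

	static int insert_once(struct xarray *xa, unsigned long index,
			       void *item, gfp_t gfp)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock_irq(&xas);
			if (xas_load(&xas))
				xas_set_err(&xas, -EEXIST);
			else
				xas_store(&xas, item);
			xas_unlock_irq(&xas);
		} while (xas_nomem(&xas, gfp));	/* retries only on -ENOMEM */

		return xas_error(&xas);
	}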

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/swap.h | 8 ++-
mm/filemap.c | 143 ++++++++++++++++++++++-----------------------------
2 files changed, 67 insertions(+), 84 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index c2b8128799c1..394957963c4b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -299,8 +299,12 @@ void *workingset_eviction(struct address_space *mapping, struct page *page);
bool workingset_refault(void *shadow);
void workingset_activation(struct page *page);

-/* Do not use directly, use workingset_lookup_update */
-void workingset_update_node(struct radix_tree_node *node);
+/* Only track the nodes of mappings with shadow entries */
+void workingset_update_node(struct xa_node *node);
+#define mapping_set_update(xas, mapping) do { \
+ if (!dax_mapping(mapping) && !shmem_mapping(mapping)) \
+ xas_set_update(xas, workingset_update_node); \
+} while (0)

/* Returns workingset_update_node() if the mapping has shadow entries. */
#define workingset_lookup_update(mapping) \
diff --git a/mm/filemap.c b/mm/filemap.c
index f1b4480723dd..e6371b551de1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,35 +112,6 @@
* ->tasklist_lock (memory_failure, collect_procs_ao)
*/

-static int page_cache_tree_insert(struct address_space *mapping,
- struct page *page, void **shadowp)
-{
- struct radix_tree_node *node;
- void **slot;
- int error;
-
- error = __radix_tree_create(&mapping->pages, page->index, 0,
- &node, &slot);
- if (error)
- return error;
- if (*slot) {
- void *p;
-
- p = radix_tree_deref_slot_protected(slot,
- &mapping->pages.xa_lock);
- if (!xa_is_value(p))
- return -EEXIST;
-
- mapping->nrexceptional--;
- if (shadowp)
- *shadowp = p;
- }
- __radix_tree_replace(&mapping->pages, node, slot, page,
- workingset_lookup_update(mapping));
- mapping->nrpages++;
- return 0;
-}
-
static void page_cache_tree_delete(struct address_space *mapping,
struct page *page, void *shadow)
{
@@ -776,51 +747,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
* locked. This function does not add the new page to the LRU, the
* caller must do that.
*
- * The remove + add is atomic. The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic. This function cannot fail.
*/
int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
{
- int error;
+ struct address_space *mapping = old->mapping;
+ void (*freepage)(struct page *) = mapping->a_ops->freepage;
+ pgoff_t offset = old->index;
+ XA_STATE(xas, &mapping->pages, offset);
+ unsigned long flags;

VM_BUG_ON_PAGE(!PageLocked(old), old);
VM_BUG_ON_PAGE(!PageLocked(new), new);
VM_BUG_ON_PAGE(new->mapping, new);

- error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
- if (!error) {
- struct address_space *mapping = old->mapping;
- void (*freepage)(struct page *);
- unsigned long flags;
-
- pgoff_t offset = old->index;
- freepage = mapping->a_ops->freepage;
-
- get_page(new);
- new->mapping = mapping;
- new->index = offset;
+ get_page(new);
+ new->mapping = mapping;
+ new->index = offset;

- xa_lock_irqsave(&mapping->pages, flags);
- __delete_from_page_cache(old, NULL);
- error = page_cache_tree_insert(mapping, new, NULL);
- BUG_ON(error);
+ xas_lock_irqsave(&xas, flags);
+ xas_store(&xas, new);

- /*
- * hugetlb pages do not participate in page cache accounting.
- */
- if (!PageHuge(new))
- __inc_node_page_state(new, NR_FILE_PAGES);
- if (PageSwapBacked(new))
- __inc_node_page_state(new, NR_SHMEM);
- xa_unlock_irqrestore(&mapping->pages, flags);
- mem_cgroup_migrate(old, new);
- radix_tree_preload_end();
- if (freepage)
- freepage(old);
- put_page(old);
- }
+ old->mapping = NULL;
+ /* hugetlb pages do not participate in page cache accounting. */
+ if (!PageHuge(old))
+ __dec_node_page_state(new, NR_FILE_PAGES);
+ if (!PageHuge(new))
+ __inc_node_page_state(new, NR_FILE_PAGES);
+ if (PageSwapBacked(old))
+ __dec_node_page_state(new, NR_SHMEM);
+ if (PageSwapBacked(new))
+ __inc_node_page_state(new, NR_SHMEM);
+ xas_unlock_irqrestore(&xas, flags);
+ mem_cgroup_migrate(old, new);
+ if (freepage)
+ freepage(old);
+ put_page(old);

- return error;
+ return 0;
}
EXPORT_SYMBOL_GPL(replace_page_cache_page);

@@ -829,12 +793,15 @@ static int __add_to_page_cache_locked(struct page *page,
pgoff_t offset, gfp_t gfp_mask,
void **shadowp)
{
+ XA_STATE(xas, &mapping->pages, offset);
int huge = PageHuge(page);
struct mem_cgroup *memcg;
int error;
+ void *old;

VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+ mapping_set_update(&xas, mapping);

if (!huge) {
error = mem_cgroup_try_charge(page, current->mm,
@@ -843,39 +810,51 @@ static int __add_to_page_cache_locked(struct page *page,
return error;
}

- error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
- if (error) {
- if (!huge)
- mem_cgroup_cancel_charge(page, memcg, false);
- return error;
- }
-
get_page(page);
page->mapping = mapping;
page->index = offset;

- xa_lock_irq(&mapping->pages);
- error = page_cache_tree_insert(mapping, page, shadowp);
- radix_tree_preload_end();
- if (unlikely(error))
- goto err_insert;
+ do {
+ xas_lock_irq(&xas);
+ old = xas_create(&xas);
+ if (xas_error(&xas))
+ goto unlock;
+ if (xa_is_value(old)) {
+ mapping->nrexceptional--;
+ if (shadowp)
+ *shadowp = old;
+ } else if (old) {
+ xas_set_err(&xas, -EEXIST);
+ goto unlock;
+ }
+
+ xas_store(&xas, page);
+ mapping->nrpages++;
+
+ /*
+ * hugetlb pages do not participate in
+ * page cache accounting.
+ */
+ if (!huge)
+ __inc_node_page_state(page, NR_FILE_PAGES);
+unlock:
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, gfp_mask & ~__GFP_HIGHMEM));
+
+ if (xas_error(&xas))
+ goto error;

- /* hugetlb pages do not participate in page cache accounting. */
- if (!huge)
- __inc_node_page_state(page, NR_FILE_PAGES);
- xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_commit_charge(page, memcg, false, false);
trace_mm_filemap_add_to_page_cache(page);
return 0;
-err_insert:
+error:
page->mapping = NULL;
/* Leave page->index set: truncation relies upon it */
- xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
- return error;
+ return xas_error(&xas);
}

/**
--
2.15.1


2018-01-17 21:07:28

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 20/99] ida: Convert to XArray

From: Matthew Wilcox <[email protected]>

Use the XArray infrastructure like we used the radix tree infrastructure.
This lets us get rid of idr_get_free() from the radix tree code.
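
The small-bitmap optimisation described in the developer's notes below
relies on value entries: small integers encoded into the entry itself so
that no ida_bitmap needs to be allocated. A hedged sketch of that encoding
in isolation; set_inline_bit() is illustrative:

	#include <linux/xarray.h>

	static void set_inline_bit(struct xarray *xa, unsigned long index,
				   unsigned int bit)
	{
		XA_STATE(xas, xa, index);
		void *entry;

		xas_lock(&xas);
		entry = xas_load(&xas);
		if (xa_is_value(entry) && bit < BITS_PER_XA_VALUE) {
			unsigned long bits = xa_to_value(entry) | (1UL << bit);
			xas_store(&xas, xa_mk_value(bits));	/* no allocation */
		}
		xas_unlock(&xas);
	}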

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/idr.h | 8 +-
include/linux/radix-tree.h | 4 -
lib/idr.c | 320 ++++++++++++++++++++++++++-------------------
lib/radix-tree.c | 119 -----------------
4 files changed, 187 insertions(+), 264 deletions(-)

diff --git a/include/linux/idr.h b/include/linux/idr.h
index 9064ae5f0abc..ad4199247301 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -232,11 +232,11 @@ struct ida_bitmap {
DECLARE_PER_CPU(struct ida_bitmap *, ida_bitmap);

struct ida {
- struct radix_tree_root ida_rt;
+ struct xarray ida_xa;
};

#define IDA_INIT(name) { \
- .ida_rt = RADIX_TREE_INIT(name, IDR_INIT_FLAGS | GFP_NOWAIT), \
+ .ida_xa = XARRAY_INIT_FLAGS(name.ida_xa, IDR_INIT_FLAGS) \
}
#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)

@@ -251,7 +251,7 @@ void ida_simple_remove(struct ida *ida, unsigned int id);

static inline void ida_init(struct ida *ida)
{
- INIT_RADIX_TREE(&ida->ida_rt, IDR_INIT_FLAGS | GFP_NOWAIT);
+ xa_init_flags(&ida->ida_xa, IDR_INIT_FLAGS);
}

/**
@@ -268,6 +268,6 @@ static inline int ida_get_new(struct ida *ida, int *p_id)

static inline bool ida_is_empty(const struct ida *ida)
{
- return radix_tree_empty(&ida->ida_rt);
+ return xa_empty(&ida->ida_xa);
}
#endif /* _LINUX_IDR_H */
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index f64beb9ba175..4c5c36414a80 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -302,10 +302,6 @@ int radix_tree_split(struct radix_tree_root *, unsigned long index,
int radix_tree_join(struct radix_tree_root *, unsigned long index,
unsigned new_order, void *);

-void __rcu **idr_get_free(struct radix_tree_root *root,
- struct radix_tree_iter *iter, gfp_t gfp,
- unsigned long max);
-
enum {
RADIX_TREE_ITER_TAG_MASK = 0x0f, /* tag index in lower nybble */
RADIX_TREE_ITER_TAGGED = 0x10, /* lookup tagged slots */
diff --git a/lib/idr.c b/lib/idr.c
index 379eaa8cb75b..7e9a8850b613 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -13,7 +13,6 @@
#include <linux/xarray.h>

DEFINE_PER_CPU(struct ida_bitmap *, ida_bitmap);
-static DEFINE_SPINLOCK(simple_ida_lock);

/* In radix-tree.c temporarily */
extern bool idr_nomem(struct xa_state *, gfp_t);
@@ -337,26 +336,23 @@ EXPORT_SYMBOL_GPL(idr_replace);
/*
* Developer's notes:
*
- * The IDA uses the functionality provided by the IDR & radix tree to store
- * bitmaps in each entry. The XA_FREE_TAG tag means there is at least one bit
- * free, unlike the IDR where it means at least one entry is free.
- *
- * I considered telling the radix tree that each slot is an order-10 node
- * and storing the bit numbers in the radix tree, but the radix tree can't
- * allow a single multiorder entry at index 0, which would significantly
- * increase memory consumption for the IDA. So instead we divide the index
- * by the number of bits in the leaf bitmap before doing a radix tree lookup.
- *
- * As an optimisation, if there are only a few low bits set in any given
- * leaf, instead of allocating a 128-byte bitmap, we store the bits
+ * The IDA uses the functionality provided by the IDR & XArray to store
+ * bitmaps in each entry. The XA_FREE_TAG tag is used to mean that there
+ * is at least one bit free, unlike the IDR where it means at least one
+ * array entry is free.
+ *
+ * The XArray supports multi-index entries, so I considered teaching the
+ * XArray that each slot is an order-10 node and indexing the XArray by the
+ * ID. The XArray has the significant optimisation of storing the first
+ * entry in the struct xarray and avoiding allocating an xa_node.
+ * Unfortunately, it can't do that for multi-order entries.
+ * So instead the XArray index is the ID divided by the number of bits in
+ * the bitmap.
+ *
+ * As a further optimisation, if there are only a few low bits set in any
+ * given leaf, instead of allocating a 128-byte bitmap, we store the bits
* directly in the entry.
*
- * We allow the radix tree 'exceptional' count to get out of date. Nothing
- * in the IDA nor the radix tree code checks it. If it becomes important
- * to maintain an accurate exceptional count, switch the rcu_assign_pointer()
- * calls to radix_tree_iter_replace() which will correct the exceptional
- * count.
- *
* The IDA always requires a lock to alloc/free. If we add a 'test_bit'
* equivalent, it will still need locking. Going to RCU lookup would require
* using RCU to free bitmaps, and that's not trivial without embedding an
@@ -366,104 +362,114 @@ EXPORT_SYMBOL_GPL(idr_replace);

#define IDA_MAX (0x80000000U / IDA_BITMAP_BITS - 1)

+static struct ida_bitmap *alloc_ida_bitmap(void)
+{
+ struct ida_bitmap *bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (bitmap)
+ memset(bitmap, 0, sizeof(*bitmap));
+ return bitmap;
+}
+
+static void free_ida_bitmap(struct ida_bitmap *bitmap)
+{
+ if (this_cpu_cmpxchg(ida_bitmap, NULL, bitmap))
+ kfree(bitmap);
+}
+
/**
* ida_get_new_above - allocate new ID above or equal to a start id
* @ida: ida handle
* @start: id to start search at
* @id: pointer to the allocated handle
*
- * Allocate new ID above or equal to @start. It should be called
- * with any required locks to ensure that concurrent calls to
- * ida_get_new_above() / ida_get_new() / ida_remove() are not allowed.
- * Consider using ida_simple_get() if you do not have complex locking
- * requirements.
+ * Allocate new ID above or equal to @start. The ida has its own lock,
+ * although you may wish to provide your own locking around it.
*
* If memory is required, it will return %-EAGAIN, you should unlock
* and go back to the ida_pre_get() call. If the ida is full, it will
* return %-ENOSPC. On success, it will return 0.
*
- * @id returns a value in the range @start ... %0x7fffffff.
+ * @id returns a value in the range @start ... %INT_MAX.
*/
int ida_get_new_above(struct ida *ida, int start, int *id)
{
- struct radix_tree_root *root = &ida->ida_rt;
- void __rcu **slot;
- struct radix_tree_iter iter;
+ unsigned long flags;
+ unsigned long index = start / IDA_BITMAP_BITS;
+ unsigned int bit = start % IDA_BITMAP_BITS;
+ XA_STATE(xas, &ida->ida_xa, index);
struct ida_bitmap *bitmap;
- unsigned long index;
- unsigned bit;
- int new;
-
- index = start / IDA_BITMAP_BITS;
- bit = start % IDA_BITMAP_BITS;
-
- slot = radix_tree_iter_init(&iter, index);
- for (;;) {
- if (slot)
- slot = radix_tree_next_slot(slot, &iter,
- RADIX_TREE_ITER_TAGGED);
- if (!slot) {
- slot = idr_get_free(root, &iter, GFP_NOWAIT, IDA_MAX);
- if (IS_ERR(slot)) {
- if (slot == ERR_PTR(-ENOMEM))
- return -EAGAIN;
- return PTR_ERR(slot);
- }
- }
- if (iter.index > index)
- bit = 0;
- new = iter.index * IDA_BITMAP_BITS;
- bitmap = rcu_dereference_raw(*slot);
- if (xa_is_value(bitmap)) {
- unsigned long tmp = xa_to_value(bitmap);
- int vbit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE,
- bit);
- if (vbit < BITS_PER_XA_VALUE) {
- tmp |= 1UL << vbit;
- rcu_assign_pointer(*slot, xa_mk_value(tmp));
- *id = new + vbit;
- return 0;
- }
- bitmap = this_cpu_xchg(ida_bitmap, NULL);
- if (!bitmap)
- return -EAGAIN;
- memset(bitmap, 0, sizeof(*bitmap));
- bitmap->bitmap[0] = tmp;
- rcu_assign_pointer(*slot, bitmap);
- }
+ unsigned int new;
+
+ xas_lock_irqsave(&xas, flags);
+retry:
+ bitmap = xas_find_tag(&xas, IDA_MAX, XA_FREE_TAG);
+ if (xas.xa_index > IDA_MAX)
+ goto nospc;
+ if (xas.xa_index > index)
+ bit = 0;
+ new = xas.xa_index * IDA_BITMAP_BITS;
+ if (xa_is_value(bitmap)) {
+ unsigned long value = xa_to_value(bitmap);
+ if (bit < BITS_PER_XA_VALUE) {
+ unsigned long tmp = value | ((1UL << bit) - 1);
+ bit = ffz(tmp);

- if (bitmap) {
- bit = find_next_zero_bit(bitmap->bitmap,
- IDA_BITMAP_BITS, bit);
- new += bit;
- if (new < 0)
- return -ENOSPC;
- if (bit == IDA_BITMAP_BITS)
- continue;
-
- __set_bit(bit, bitmap->bitmap);
- if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
- radix_tree_iter_tag_clear(root, &iter,
- XA_FREE_TAG);
- } else {
- new += bit;
- if (new < 0)
- return -ENOSPC;
if (bit < BITS_PER_XA_VALUE) {
- bitmap = xa_mk_value(1UL << bit);
- } else {
- bitmap = this_cpu_xchg(ida_bitmap, NULL);
- if (!bitmap)
- return -EAGAIN;
- memset(bitmap, 0, sizeof(*bitmap));
- __set_bit(bit, bitmap->bitmap);
+ value |= (1UL << bit);
+ xas_store(&xas, xa_mk_value(value));
+ new += bit;
+ goto unlock;
}
- radix_tree_iter_replace(root, &iter, slot, bitmap);
}

- *id = new;
- return 0;
+ bitmap = alloc_ida_bitmap();
+ if (!bitmap)
+ goto nomem;
+ bitmap->bitmap[0] = value;
+ new += bit;
+ __set_bit(bit, bitmap->bitmap);
+ xas_store(&xas, bitmap);
+ if (xas_error(&xas))
+ free_ida_bitmap(bitmap);
+ } else if (bitmap) {
+ bit = find_next_zero_bit(bitmap->bitmap, IDA_BITMAP_BITS, bit);
+ if (bit == IDA_BITMAP_BITS)
+ goto retry;
+ new += bit;
+ if (new > INT_MAX)
+ goto nospc;
+ __set_bit(bit, bitmap->bitmap);
+ if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
+ xas_clear_tag(&xas, XA_FREE_TAG);
+ } else if (bit < BITS_PER_XA_VALUE) {
+ new += bit;
+ bitmap = xa_mk_value(1UL << bit);
+ xas_store(&xas, bitmap);
+ } else {
+ bitmap = alloc_ida_bitmap();
+ if (!bitmap)
+ goto nomem;
+ new += bit;
+ __set_bit(bit, bitmap->bitmap);
+ xas_store(&xas, bitmap);
+ if (xas_error(&xas))
+ free_ida_bitmap(bitmap);
}
+
+ if (idr_nomem(&xas, GFP_NOWAIT))
+ goto retry;
+unlock:
+ xas_unlock_irqrestore(&xas, flags);
+ if (xas_error(&xas) == -ENOMEM)
+ return -EAGAIN;
+ *id = new;
+ return 0;
+nospc:
+ xas_unlock_irqrestore(&xas, flags);
+ return -ENOSPC;
+nomem:
+ xas_unlock_irqrestore(&xas, flags);
+ return -EAGAIN;
}
EXPORT_SYMBOL(ida_get_new_above);

@@ -471,45 +477,44 @@ EXPORT_SYMBOL(ida_get_new_above);
* ida_remove - Free the given ID
* @ida: ida handle
* @id: ID to free
- *
- * This function should not be called at the same time as ida_get_new_above().
*/
void ida_remove(struct ida *ida, int id)
{
+ unsigned long flags;
unsigned long index = id / IDA_BITMAP_BITS;
- unsigned offset = id % IDA_BITMAP_BITS;
+ unsigned bit = id % IDA_BITMAP_BITS;
+ XA_STATE(xas, &ida->ida_xa, index);
struct ida_bitmap *bitmap;
- unsigned long *btmp;
- struct radix_tree_iter iter;
- void __rcu **slot;

- slot = radix_tree_iter_lookup(&ida->ida_rt, &iter, index);
- if (!slot)
+ xas_lock_irqsave(&xas, flags);
+ bitmap = xas_load(&xas);
+ if (!bitmap)
goto err;
-
- bitmap = rcu_dereference_raw(*slot);
if (xa_is_value(bitmap)) {
- btmp = (unsigned long *)slot;
- offset += 1; /* Intimate knowledge of the xa_data encoding */
- if (offset >= BITS_PER_LONG)
+ unsigned long v = xa_to_value(bitmap);
+ if (bit >= BITS_PER_XA_VALUE)
goto err;
+ if (!(v & (1UL << bit)))
+ goto err;
+ v &= ~(1UL << bit);
+ if (v)
+ bitmap = xa_mk_value(v);
+ else
+ bitmap = NULL;
+ xas_store(&xas, bitmap);
} else {
- btmp = bitmap->bitmap;
- }
- if (!test_bit(offset, btmp))
- goto err;
-
- __clear_bit(offset, btmp);
- radix_tree_iter_tag_set(&ida->ida_rt, &iter, XA_FREE_TAG);
- if (xa_is_value(bitmap)) {
- if (xa_to_value(rcu_dereference_raw(*slot)) == 0)
- radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
- } else if (bitmap_empty(btmp, IDA_BITMAP_BITS)) {
- kfree(bitmap);
- radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
+ if (!__test_and_clear_bit(bit, bitmap->bitmap))
+ goto err;
+ if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
+ kfree(bitmap);
+ xas_store(&xas, NULL);
+ }
}
+ xas_set_tag(&xas, XA_FREE_TAG);
+ xas_unlock_irqrestore(&xas, flags);
return;
err:
+ xas_unlock_irqrestore(&xas, flags);
WARN(1, "ida_remove called for id=%d which is not allocated.\n", id);
}
EXPORT_SYMBOL(ida_remove);
@@ -519,21 +524,21 @@ EXPORT_SYMBOL(ida_remove);
* @ida: ida handle
*
* Calling this function releases all resources associated with an IDA. When
- * this call returns, the IDA is empty and can be reused or freed. The caller
- * should not allow ida_remove() or ida_get_new_above() to be called at the
- * same time.
+ * this call returns, the IDA is empty and can be reused or freed.
*/
void ida_destroy(struct ida *ida)
{
- struct radix_tree_iter iter;
- void __rcu **slot;
+ XA_STATE(xas, &ida->ida_xa, 0);
+ unsigned long flags;
+ struct ida_bitmap *bitmap;

- radix_tree_for_each_slot(slot, &ida->ida_rt, &iter, 0) {
- struct ida_bitmap *bitmap = rcu_dereference_raw(*slot);
+ xas_lock_irqsave(&xas, flags);
+ xas_for_each(&xas, bitmap, ULONG_MAX) {
if (!xa_is_value(bitmap))
kfree(bitmap);
- radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
+ xas_store(&xas, NULL);
}
+ xas_unlock_irqrestore(&xas, flags);
}
EXPORT_SYMBOL(ida_destroy);

@@ -557,7 +562,6 @@ int ida_simple_get(struct ida *ida, unsigned int start, unsigned int end,
{
int ret, id;
unsigned int max;
- unsigned long flags;

BUG_ON((int)start < 0);
BUG_ON((int)end < 0);
@@ -573,7 +577,6 @@ int ida_simple_get(struct ida *ida, unsigned int start, unsigned int end,
if (!ida_pre_get(ida, gfp_mask))
return -ENOMEM;

- spin_lock_irqsave(&simple_ida_lock, flags);
ret = ida_get_new_above(ida, start, &id);
if (!ret) {
if (id > max) {
@@ -583,7 +586,6 @@ int ida_simple_get(struct ida *ida, unsigned int start, unsigned int end,
ret = id;
}
}
- spin_unlock_irqrestore(&simple_ida_lock, flags);

if (unlikely(ret == -EAGAIN))
goto again;
@@ -604,11 +606,55 @@ EXPORT_SYMBOL(ida_simple_get);
*/
void ida_simple_remove(struct ida *ida, unsigned int id)
{
- unsigned long flags;
-
BUG_ON((int)id < 0);
- spin_lock_irqsave(&simple_ida_lock, flags);
ida_remove(ida, id);
- spin_unlock_irqrestore(&simple_ida_lock, flags);
}
EXPORT_SYMBOL(ida_simple_remove);
+
+#ifdef XA_DEBUG
+static void dump_ida_node(void *entry, unsigned long index)
+{
+ unsigned long i;
+
+ if (!entry)
+ return;
+
+ if (xa_is_node(entry)) {
+ struct xa_node *node = xa_to_node(entry);
+ unsigned long first = index * IDA_BITMAP_BITS;
+ unsigned long last = first | ((((unsigned long)XA_CHUNK_SIZE *
+ IDA_BITMAP_BITS) << node->shift) - 1);
+
+ pr_debug("ida node: %p offset %d indices %lu-%lu parent %p free %lx shift %d count %d\n",
+ node, node->offset, first, last, node->parent,
+ node->tags[0][0], node->shift, node->count);
+ for (i = 0; i < XA_CHUNK_SIZE; i++)
+ dump_ida_node(node->slots[i],
+ index | (i << node->shift));
+ } else if (xa_is_value(entry)) {
+ pr_debug("ida excp: %p offset %d indices %lu-%lu data %lx\n",
+ entry, (int)(index & XA_CHUNK_MASK),
+ index * IDA_BITMAP_BITS,
+ index * IDA_BITMAP_BITS + BITS_PER_XA_VALUE,
+ xa_to_value(entry));
+ } else {
+ struct ida_bitmap *bitmap = entry;
+
+ pr_debug("ida btmp: %p offset %d indices %lu-%lu data", bitmap,
+ (int)(index & XA_CHUNK_MASK),
+ index * IDA_BITMAP_BITS,
+ (index + 1) * IDA_BITMAP_BITS - 1);
+ for (i = 0; i < IDA_BITMAP_LONGS; i++)
+ pr_cont(" %lx", bitmap->bitmap[i]);
+ pr_cont("\n");
+ }
+}
+
+void ida_dump(struct ida *ida)
+{
+ struct xarray *xa = &ida->ida_xa;
+ pr_debug("ida: %p node %p free %d\n", ida, xa->xa_head,
+ xa_tagged(xa, XA_FREE_TAG));
+ dump_ida_node(xa->xa_head, 0);
+}
+#endif
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 75e02cb78ada..6f653d02a079 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -254,54 +254,6 @@ static unsigned long next_index(unsigned long index,
return (index & ~node_maxindex(node)) + (offset << node->shift);
}

-#ifndef __KERNEL__
-static void dump_ida_node(void *entry, unsigned long index)
-{
- unsigned long i;
-
- if (!entry)
- return;
-
- if (radix_tree_is_internal_node(entry)) {
- struct radix_tree_node *node = entry_to_node(entry);
-
- pr_debug("ida node: %p offset %d indices %lu-%lu parent %p free %lx shift %d count %d\n",
- node, node->offset, index * IDA_BITMAP_BITS,
- ((index | node_maxindex(node)) + 1) *
- IDA_BITMAP_BITS - 1,
- node->parent, node->tags[0][0], node->shift,
- node->count);
- for (i = 0; i < RADIX_TREE_MAP_SIZE; i++)
- dump_ida_node(node->slots[i],
- index | (i << node->shift));
- } else if (xa_is_value(entry)) {
- pr_debug("ida excp: %p offset %d indices %lu-%lu data %lx\n",
- entry, (int)(index & RADIX_TREE_MAP_MASK),
- index * IDA_BITMAP_BITS,
- index * IDA_BITMAP_BITS + BITS_PER_XA_VALUE,
- xa_to_value(entry));
- } else {
- struct ida_bitmap *bitmap = entry;
-
- pr_debug("ida btmp: %p offset %d indices %lu-%lu data", bitmap,
- (int)(index & RADIX_TREE_MAP_MASK),
- index * IDA_BITMAP_BITS,
- (index + 1) * IDA_BITMAP_BITS - 1);
- for (i = 0; i < IDA_BITMAP_LONGS; i++)
- pr_cont(" %lx", bitmap->bitmap[i]);
- pr_cont("\n");
- }
-}
-
-static void ida_dump(struct ida *ida)
-{
- struct radix_tree_root *root = &ida->ida_rt;
- pr_debug("ida: %p node %p free %d\n", ida, root->xa_head,
- root->xa_flags >> ROOT_TAG_SHIFT);
- dump_ida_node(root->xa_head, 0);
-}
-#endif
-
/*
* This assumes that the caller has performed appropriate preallocation, and
* that the caller has pinned this thread of control to the current CPU.
@@ -2083,77 +2035,6 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
}
EXPORT_SYMBOL(ida_pre_get);

-void __rcu **idr_get_free(struct radix_tree_root *root,
- struct radix_tree_iter *iter, gfp_t gfp,
- unsigned long max)
-{
- struct radix_tree_node *node = NULL, *child;
- void __rcu **slot = (void __rcu **)&root->xa_head;
- unsigned long maxindex, start = iter->next_index;
- unsigned int shift, offset = 0;
-
- grow:
- shift = radix_tree_load_root(root, &child, &maxindex);
- if (!radix_tree_tagged(root, XA_FREE_TAG))
- start = max(start, maxindex + 1);
- if (start > max)
- return ERR_PTR(-ENOSPC);
-
- if (start > maxindex) {
- int error = radix_tree_extend(root, gfp, start, shift);
- if (error < 0)
- return ERR_PTR(error);
- shift = error;
- child = rcu_dereference_raw(root->xa_head);
- }
-
- while (shift) {
- shift -= RADIX_TREE_MAP_SHIFT;
- if (child == NULL) {
- /* Have to add a child node. */
- child = radix_tree_node_alloc(gfp, node, root, shift,
- offset, 0, 0);
- if (!child)
- return ERR_PTR(-ENOMEM);
- all_tag_set(child, XA_FREE_TAG);
- rcu_assign_pointer(*slot, node_to_entry(child));
- if (node)
- node->count++;
- } else if (!radix_tree_is_internal_node(child))
- break;
-
- node = entry_to_node(child);
- offset = radix_tree_descend(node, &child, start);
- if (!tag_get(node, XA_FREE_TAG, offset)) {
- offset = radix_tree_find_next_bit(node, XA_FREE_TAG,
- offset + 1);
- start = next_index(start, node, offset);
- if (start > max)
- return ERR_PTR(-ENOSPC);
- while (offset == RADIX_TREE_MAP_SIZE) {
- offset = node->offset + 1;
- node = node->parent;
- if (!node)
- goto grow;
- shift = node->shift;
- }
- child = rcu_dereference_raw(node->slots[offset]);
- }
- slot = &node->slots[offset];
- }
-
- iter->index = start;
- if (node)
- iter->next_index = 1 + min(max, (start | node_maxindex(node)));
- else
- iter->next_index = 1;
- iter->node = node;
- __set_iter_shift(iter, shift);
- set_iter_tags(iter, node, offset, XA_FREE_TAG);
-
- return slot;
-}
-
static void
radix_tree_node_ctor(void *arg)
{
--
2.15.1


2018-01-17 21:07:53

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 21/99] xarray: Add xa_reserve and xa_release

From: Matthew Wilcox <[email protected]>

This function creates a slot in the XArray for users who need to
acquire multiple locks before storing their entry in the array and so
cannot use a plain xa_store().

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 14 ++++++++++
lib/xarray.c | 51 ++++++++++++++++++++++++++++++++++
tools/testing/radix-tree/xarray-test.c | 25 +++++++++++++++++
3 files changed, 90 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 6f59f1f60205..c3f7405c5517 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -259,6 +259,7 @@ void *xa_load(struct xarray *, unsigned long index);
void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
void *xa_cmpxchg(struct xarray *, unsigned long index,
void *old, void *entry, gfp_t);
+int xa_reserve(struct xarray *, unsigned long index, gfp_t);
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -373,6 +374,19 @@ static inline int xa_insert(struct xarray *xa, unsigned long index,
return -EEXIST;
}

+/**
+ * xa_release() - Release a reserved entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * After calling xa_reserve(), you can call this function to release the
+ * reservation. It is harmless to call this function if the entry was used.
+ */
+static inline void xa_release(struct xarray *xa, unsigned long index)
+{
+ xa_cmpxchg(xa, index, NULL, NULL, 0);
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
diff --git a/lib/xarray.c b/lib/xarray.c
index ace309cc9253..b4dec8e2d202 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1275,6 +1275,8 @@ void *xa_cmpxchg(struct xarray *xa, unsigned long index,
do {
xas_lock(&xas);
curr = xas_load(&xas);
+ if (curr == XA_ZERO_ENTRY)
+ curr = NULL;
if (curr == old)
xas_store(&xas, entry);
xas_unlock(&xas);
@@ -1310,6 +1312,8 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,

do {
curr = xas_load(&xas);
+ if (curr == XA_ZERO_ENTRY)
+ curr = NULL;
if (curr == old)
xas_store(&xas, entry);
} while (__xas_nomem(&xas, gfp));
@@ -1318,6 +1322,53 @@ void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
}
EXPORT_SYMBOL(__xa_cmpxchg);

+/**
+ * xa_reserve() - Reserve this index in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @gfp: Memory allocation flags.
+ *
+ * Ensures there is somewhere to store an entry at @index in the array.
+ * If there is already something stored at @index, this function does
+ * nothing. If there was nothing there, the entry is marked as reserved.
+ * Loads from @index will continue to see a %NULL pointer until a
+ * subsequent store to @index.
+ *
+ * If you do not use the entry that you have reserved, call xa_release()
+ * or xa_erase() to free any unnecessary memory.
+ *
+ * Return: 0 if the reservation succeeded or -ENOMEM if it failed.
+ */
+int xa_reserve(struct xarray *xa, unsigned long index, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ unsigned int lock_type = xa_lock_type(xa);
+ void *curr;
+
+ do {
+ if (lock_type == XA_LOCK_IRQ)
+ xas_lock_irq(&xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_lock_bh(&xas);
+ else
+ xas_lock(&xas);
+
+ curr = xas_create(&xas);
+ if (!curr)
+ xas_store(&xas, XA_ZERO_ENTRY);
+
+ if (lock_type == XA_LOCK_IRQ)
+ xas_unlock_irq(&xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_unlock_bh(&xas);
+ else
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ return xas_error(&xas);
+}
+EXPORT_SYMBOL(xa_reserve);
+
/**
* __xa_set_tag() - Set this tag on this entry while locked.
* @xa: XArray.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 4d3541ac31e9..fe38b53df2ab 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -502,6 +502,29 @@ void check_move(struct xarray *xa)
} while (i < (1 << 16));
}

+void check_reserve(struct xarray *xa)
+{
+ assert(xa_empty(xa));
+ xa_reserve(xa, 12345678, GFP_KERNEL);
+ assert(!xa_empty(xa));
+ assert(!xa_load(xa, 12345678));
+ xa_release(xa, 12345678);
+ assert(xa_empty(xa));
+
+ xa_reserve(xa, 12345678, GFP_KERNEL);
+ assert(!xa_store(xa, 12345678, xa_mk_value(12345678), GFP_NOWAIT));
+ xa_release(xa, 12345678);
+ assert(xa_erase(xa, 12345678) == xa_mk_value(12345678));
+ assert(xa_empty(xa));
+
+ xa_reserve(xa, 12345678, GFP_KERNEL);
+ assert(!xa_cmpxchg(xa, 12345678, NULL, xa_mk_value(12345678),
+ GFP_NOWAIT));
+ xa_release(xa, 12345678);
+ assert(xa_erase(xa, 12345678) == xa_mk_value(12345678));
+ assert(xa_empty(xa));
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
@@ -548,6 +571,8 @@ void xarray_checks(void)

check_move(&array);
item_kill_tree(&array);
+
+ check_reserve(&array);
}

int __weak main(void)
--
2.15.1


2018-01-17 21:09:26

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 19/99] idr: Convert to XArray

From: Matthew Wilcox <[email protected]>

The IDR distinguishes between unallocated entries (read as NULL) and
entries where the user has chosen to store NULL. The radix tree was
modified to consider NULL entries which had tag 0 _clear_ as being
allocated, but it added a lot of complexity.

Instead, the XArray has a 'zero entry', which the normal API will treat
as NULL, but is distinct from NULL when using the advanced API. The IDR
code converts between NULL and zero entries.

The idr_for_each_entry_ul() iterator becomes an alias for xa_for_each(),
so we drop the idr_get_next_ul() function as it has no users.

The exported IDR API was a weird mix of GPL-only and general symbols;
I converted them all to GPL as there was no way to use the IDR API
without being GPL.
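To illustrate the zero entry behaviour, a hypothetical caller might do
the following (only functions from this patch are used; the IDR name
is made up):

	DEFINE_IDR(example_idr);

	void zero_entry_example(void)
	{
		unsigned long id = 0;

		/* Storing NULL consumes an ID, backed by a zero entry */
		if (idr_alloc_ul(&example_idr, NULL, &id, ULONG_MAX,
				 GFP_KERNEL))
			return;

		/* The normal API reads the zero entry back as plain NULL */
		WARN_ON(idr_find(&example_idr, id) != NULL);

		idr_remove(&example_idr, id);	/* the ID is free again */
	}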

Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/core-api/xarray.rst | 6 +
include/linux/idr.h | 156 ++++++++++++-------
include/linux/xarray.h | 29 +++-
lib/idr.c | 298 ++++++++++++++++++++++--------------
lib/radix-tree.c | 77 +++++-----
lib/xarray.c | 32 ++++
tools/testing/radix-tree/idr-test.c | 34 ++++
7 files changed, 419 insertions(+), 213 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 0172c7d9e6ea..1dea1c522506 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -284,6 +284,12 @@ to :c:func:`xas_retry`, and retry the operation if it returns ``true``.
this RCU period. You should restart the lookup from the head of the
array.

+ * - Zero
+ - :c:func:`xa_is_zero`
+ - Zero entries appear as ``NULL`` through the Normal API, but occupy an
+ entry in the XArray which can be tagged or otherwise used to reserve
+ the index.
+
Other internal entries may be added in the future. As far as possible, they
will be handled by :c:func:`xas_retry`.

diff --git a/include/linux/idr.h b/include/linux/idr.h
index 11eea38b9629..9064ae5f0abc 100644
--- a/include/linux/idr.h
+++ b/include/linux/idr.h
@@ -9,35 +9,35 @@
* tables.
*/

-#ifndef __IDR_H__
-#define __IDR_H__
+#ifndef _LINUX_IDR_H
+#define _LINUX_IDR_H

#include <linux/radix-tree.h>
#include <linux/gfp.h>
#include <linux/percpu.h>
-#include <linux/bug.h>
+#include <linux/xarray.h>

struct idr {
- struct radix_tree_root idr_rt;
- unsigned int idr_next;
+ struct xarray idr_xa;
+ unsigned int idr_next;
};

-/*
- * The IDR API does not expose the tagging functionality of the radix tree
- * to users. Use tag 0 to track whether a node has free space below it.
- */
-#define IDR_FREE 0
-
-/* Set the IDR flag and the IDR_FREE tag */
-#define IDR_RT_MARKER (ROOT_IS_IDR | (__force gfp_t) \
- (1 << (ROOT_TAG_SHIFT + IDR_FREE)))
+#define IDR_INIT_FLAGS (XA_FLAGS_TRACK_FREE | XA_FLAGS_LOCK_IRQ | \
+ XA_FLAGS_TAG(XA_FREE_TAG))

#define IDR_INIT(name) \
{ \
- .idr_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER) \
+ .idr_xa = XARRAY_INIT_FLAGS(name.idr_xa, IDR_INIT_FLAGS), \
+ .idr_next = 0, \
}
#define DEFINE_IDR(name) struct idr name = IDR_INIT(name)

+static inline void idr_init(struct idr *idr)
+{
+ xa_init_flags(&idr->idr_xa, IDR_INIT_FLAGS);
+ idr->idr_next = 0;
+}
+
/**
* idr_get_cursor - Return the current position of the cyclic allocator
* @idr: idr handle
@@ -66,62 +66,83 @@ static inline void idr_set_cursor(struct idr *idr, unsigned int val)

/**
* DOC: idr sync
- * idr synchronization (stolen from radix-tree.h)
+ * idr synchronization
*
- * idr_find() is able to be called locklessly, using RCU. The caller must
- * ensure calls to this function are made within rcu_read_lock() regions.
- * Other readers (lock-free or otherwise) and modifications may be running
- * concurrently.
+ * The IDR manages its own locking, using irqsafe spinlocks for operations
+ * which modify the IDR and RCU for operations which do not. The user of
+ * the IDR may choose to wrap accesses to it in a lock if it needs to
+ * guarantee the IDR does not change during a read access. The easiest way
+ * to do this is to grab the same lock the IDR uses for write accesses
+ * using one of the idr_lock() wrappers.
*
- * It is still required that the caller manage the synchronization and
- * lifetimes of the items. So if RCU lock-free lookups are used, typically
- * this would mean that the items have their own locks, or are amenable to
- * lock-free access; and that the items are freed by RCU (or only freed after
- * having been deleted from the idr tree *and* a synchronize_rcu() grace
- * period).
+ * The caller must still manage the synchronization and lifetimes of the
+ * items. So if RCU lock-free lookups are used, typically this would mean
+ * that the items have their own locks, or are amenable to lock-free access;
+ * and that the items are freed by RCU (or only freed after having been
+ * deleted from the IDR *and* a synchronize_rcu() grace period has elapsed).
*/

-void idr_preload(gfp_t gfp_mask);
+#define idr_lock(idr) xa_lock(&(idr)->idr_xa)
+#define idr_unlock(idr) xa_unlock(&(idr)->idr_xa)
+#define idr_lock_bh(idr) xa_lock_bh(&(idr)->idr_xa)
+#define idr_unlock_bh(idr) xa_unlock_bh(&(idr)->idr_xa)
+#define idr_lock_irq(idr) xa_lock_irq(&(idr)->idr_xa)
+#define idr_unlock_irq(idr) xa_unlock_irq(&(idr)->idr_xa)
+#define idr_lock_irqsave(idr, flags) \
+ xa_lock_irqsave(&(idr)->idr_xa, flags)
+#define idr_unlock_irqrestore(idr, flags) \
+ xa_unlock_irqrestore(&(idr)->idr_xa, flags)
+
+void idr_preload(gfp_t);

int idr_alloc(struct idr *, void *, int start, int end, gfp_t);
int __must_check idr_alloc_ul(struct idr *, void *, unsigned long *nextid,
unsigned long max, gfp_t);
int idr_alloc_cyclic(struct idr *, void *entry, int start, int end, gfp_t);
-int idr_for_each(const struct idr *,
+void *idr_remove(struct idr *, unsigned long id);
+void *idr_replace(struct idr *, void *, unsigned long id);
+int idr_for_each(struct idr *,
int (*fn)(int id, void *p, void *data), void *data);
void *idr_get_next(struct idr *, int *nextid);
-void *idr_get_next_ul(struct idr *, unsigned long *nextid);
-void *idr_replace(struct idr *, void *, unsigned long id);
-void idr_destroy(struct idr *);

+#ifdef CONFIG_64BIT
+int __must_check idr_alloc_u32(struct idr *, void *, unsigned int *nextid,
+ unsigned int max, gfp_t);
+#else /* !CONFIG_64BIT */
static inline int __must_check idr_alloc_u32(struct idr *idr, void *ptr,
- u32 *nextid, unsigned long max, gfp_t gfp)
-{
- unsigned long tmp = *nextid;
- int ret = idr_alloc_ul(idr, ptr, &tmp, max, gfp);
- *nextid = tmp;
- return ret;
-}
-
-static inline void *idr_remove(struct idr *idr, unsigned long id)
+ unsigned int *nextid, unsigned int max, gfp_t gfp)
{
- return radix_tree_delete_item(&idr->idr_rt, id, NULL);
+ return idr_alloc_ul(idr, ptr, (unsigned long *)nextid, max, gfp);
}
+#endif

-static inline void idr_init(struct idr *idr)
+/**
+ * idr_is_empty() - Determine if there are no entries in the IDR
+ * @idr: IDR handle.
+ *
+ * Return: %true if there are no entries in the IDR.
+ */
+static inline bool idr_is_empty(const struct idr *idr)
{
- INIT_RADIX_TREE(&idr->idr_rt, IDR_RT_MARKER);
- idr->idr_next = 0;
+ return xa_empty(&idr->idr_xa);
}

-static inline bool idr_is_empty(const struct idr *idr)
+/**
+ * idr_destroy() - Free all internal memory used by an IDR.
+ * @idr: IDR handle.
+ *
+ * When you have finished using an IDR, you can free all the memory used
+ * for the IDR data structure by calling this function. If you also
+ * wish to free the objects referenced by the IDR, you can use idr_for_each()
+ * or idr_for_each_entry() to do that first.
+ */
+static inline void idr_destroy(struct idr *idr)
{
- return radix_tree_empty(&idr->idr_rt) &&
- radix_tree_tagged(&idr->idr_rt, IDR_FREE);
+ xa_destroy(&idr->idr_xa);
}

/**
- * idr_preload_end - end preload section started with idr_preload()
+ * idr_preload_end() - end preload section started with idr_preload()
*
* Each idr_preload() should be matched with an invocation of this
* function. See idr_preload() for details.
@@ -132,7 +153,7 @@ static inline void idr_preload_end(void)
}

/**
- * idr_find - return pointer for given id
+ * idr_find() - return pointer for given id
* @idr: idr handle
* @id: lookup key
*
@@ -140,14 +161,35 @@ static inline void idr_preload_end(void)
* return indicates that @id is not valid or you passed %NULL in
* idr_get_new().
*
- * This function can be called under rcu_read_lock(), given that the leaf
- * pointers lifetimes are correctly managed.
+ * This function is protected by the RCU read lock. If you want to ensure
+ * that it does not race with a call to idr_remove(), perhaps because you
+ * need to establish a refcount on the object, you can use idr_lock() and
+ * idr_unlock() to prevent simultaneous modification.
*/
-static inline void *idr_find(const struct idr *idr, unsigned long id)
+static inline void *idr_find(struct idr *idr, unsigned long id)
{
- return radix_tree_lookup(&idr->idr_rt, id);
+ return xa_load(&idr->idr_xa, id);
}

+/**
+ * idr_for_each_entry_ul() - Iterate over the entries in an IDR.
+ * @idr: IDR handle.
+ * @entry: Pointer to each entry in turn.
+ * @id: ID of each entry.
+ *
+ * Initialise @id to the lowest ID before using this iterator.
+ * In the body of the loop, @entry will point to the object stored in the
+ * IDR. After the loop has finished normally, @entry will be %NULL, which
+ * is a convenient way to distinguish between a 'break' exit from the loop
+ * and normal termination.
+ *
+ * The control elements of this loop protect themselves with the RCU read
+ * lock, which is dropped before invoking the body. You may sleep unless
+ * your own locking prevents that.
+ */
+#define idr_for_each_entry_ul(idr, entry, id) \
+ xa_for_each(&(idr)->idr_xa, entry, id, ULONG_MAX, XA_PRESENT)
+
/**
* idr_for_each_entry - iterate over an idr's elements of a given type
* @idr: idr handle
@@ -160,8 +202,6 @@ static inline void *idr_find(const struct idr *idr, unsigned long id)
*/
#define idr_for_each_entry(idr, entry, id) \
for (id = 0; ((entry) = idr_get_next(idr, &(id))) != NULL; ++id)
-#define idr_for_each_entry_ul(idr, entry, id) \
- for (id = 0; ((entry) = idr_get_next_ul(idr, &(id))) != NULL; ++id)

/**
* idr_for_each_entry_continue - continue iteration over an idr's elements of a given type
@@ -196,7 +236,7 @@ struct ida {
};

#define IDA_INIT(name) { \
- .ida_rt = RADIX_TREE_INIT(name, IDR_RT_MARKER | GFP_NOWAIT), \
+ .ida_rt = RADIX_TREE_INIT(name, IDR_INIT_FLAGS | GFP_NOWAIT), \
}
#define DEFINE_IDA(name) struct ida name = IDA_INIT(name)

@@ -211,7 +251,7 @@ void ida_simple_remove(struct ida *ida, unsigned int id);

static inline void ida_init(struct ida *ida)
{
- INIT_RADIX_TREE(&ida->ida_rt, IDR_RT_MARKER | GFP_NOWAIT);
+ INIT_RADIX_TREE(&ida->ida_rt, IDR_INIT_FLAGS | GFP_NOWAIT);
}

/**
@@ -230,4 +270,4 @@ static inline bool ida_is_empty(const struct ida *ida)
{
return radix_tree_empty(&ida->ida_rt);
}
-#endif /* __IDR_H__ */
+#endif /* _LINUX_IDR_H */
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index ca6af6dd42c4..6f59f1f60205 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -32,7 +32,8 @@
* The following internal entries have a special meaning:
*
* 0-62: Sibling entries
- * 256: Retry entry
+ * 256: Zero entry
+ * 257: Retry entry
*
* Errors are also represented as internal entries, but use the negative
* space (-4094 to -2). They're never stored in the slots array; only
@@ -192,6 +193,7 @@ typedef unsigned __bitwise xa_tag_t;
#define XA_TAG_2 ((__force xa_tag_t)2U)
#define XA_PRESENT ((__force xa_tag_t)8U)
#define XA_TAG_MAX XA_TAG_2
+#define XA_FREE_TAG XA_TAG_0

enum xa_lock_type {
XA_LOCK_IRQ = 1,
@@ -204,6 +206,7 @@ enum xa_lock_type {
*/
#define XA_FLAGS_LOCK_IRQ ((__force gfp_t)XA_LOCK_IRQ)
#define XA_FLAGS_LOCK_BH ((__force gfp_t)XA_LOCK_BH)
+#define XA_FLAGS_TRACK_FREE ((__force gfp_t)4U)
#define XA_FLAGS_TAG(tag) ((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
(__force unsigned)(tag)))

@@ -555,7 +558,19 @@ static inline bool xa_is_sibling(const void *entry)
(entry < xa_mk_sibling(XA_CHUNK_SIZE - 1));
}

-#define XA_RETRY_ENTRY xa_mk_internal(256)
+#define XA_ZERO_ENTRY xa_mk_internal(256)
+#define XA_RETRY_ENTRY xa_mk_internal(257)
+
+/**
+ * xa_is_zero() - Is the entry a zero entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * Return: %true if the entry is a zero entry.
+ */
+static inline bool xa_is_zero(const void *entry)
+{
+ return unlikely(entry == XA_ZERO_ENTRY);
+}

/**
* xa_is_retry() - Is the entry a retry entry?
@@ -717,18 +732,20 @@ static inline bool xas_top(struct xa_node *node)
}

/**
- * xas_retry() - Handle a retry entry.
+ * xas_retry() - Retry the operation if appropriate.
* @xas: XArray operation state.
* @entry: Entry from xarray.
*
- * An RCU-protected read may see a retry entry as a side-effect of a
- * simultaneous modification. This function sets up the @xas to retry
- * the walk from the head of the array.
+ * The advanced functions may sometimes return an internal entry, such as
+ * a retry entry or a zero entry. This function sets up the @xas to restart
+ * the walk from the head of the array if needed.
*
* Return: true if the operation needs to be retried.
*/
static inline bool xas_retry(struct xa_state *xas, const void *entry)
{
+ if (xa_is_zero(entry))
+ return true;
if (!xa_is_retry(entry))
return false;
xas->xa_node = XAS_RESTART;
diff --git a/lib/idr.c b/lib/idr.c
index b9aa08e198a2..379eaa8cb75b 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -1,3 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * IDR implementation
+ * Copyright (c) 2017 Microsoft Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ */
+
#include <linux/bitmap.h>
#include <linux/export.h>
#include <linux/idr.h>
@@ -8,67 +15,121 @@
DEFINE_PER_CPU(struct ida_bitmap *, ida_bitmap);
static DEFINE_SPINLOCK(simple_ida_lock);

+/* In radix-tree.c temporarily */
+extern bool idr_nomem(struct xa_state *, gfp_t);
+
/**
- * idr_alloc_ul() - allocate a large ID
- * @idr: idr handle
- * @ptr: pointer to be associated with the new ID
- * @nextid: Pointer to minimum ID to allocate
- * @max: the maximum ID (inclusive)
- * @gfp: memory allocation flags
+ * idr_alloc_ul() - Allocate a large ID.
+ * @idr: IDR handle.
+ * @ptr: Pointer to be associated with the new ID.
+ * @nextid: Pointer to minimum ID to allocate.
+ * @max: The maximum ID (inclusive).
+ * @gfp: Memory allocation flags.
*
* Allocates an unused ID in the range [*nextid, end] and stores it in
* @nextid. Note that @max differs from the @end parameter to idr_alloc().
*
- * Simultaneous modifications to the @idr are not allowed and should be
- * prevented by the user, usually with a lock. idr_alloc_ul() may be called
- * concurrently with read-only accesses to the @idr, such as idr_find() and
- * idr_for_each_entry().
+ * The IDR uses its own spinlock to protect against simultaneous
+ * modification. @nextid is assigned to before @ptr is stored in the IDR;
+ * if @nextid points into the object referenced by @ptr, it will not be
+ * possible for a simultaneous lookup to see the wrong value in @nextid.
*
- * Return: 0 on success or a negative errno on failure (ENOMEM or ENOSPC)
+ * Return: 0 on success or a negative errno on failure (ENOMEM or ENOSPC).
*/
int idr_alloc_ul(struct idr *idr, void *ptr, unsigned long *nextid,
unsigned long max, gfp_t gfp)
{
- struct radix_tree_iter iter;
- void __rcu **slot;
+ XA_STATE(xas, &idr->idr_xa, *nextid);
+ unsigned long flags;

- if (WARN_ON_ONCE(radix_tree_is_internal_node(ptr)))
+ if (WARN_ON_ONCE(xa_is_internal(ptr)))
return -EINVAL;
+ if (!ptr)
+ ptr = XA_ZERO_ENTRY;
+
+ do {
+ xas_lock_irqsave(&xas, flags);
+ xas_find_tag(&xas, max, XA_FREE_TAG);
+ if (xas.xa_index > max)
+ xas_set_err(&xas, -ENOSPC);
+ else
+ *nextid = xas.xa_index;
+ xas_store(&xas, ptr);
+ xas_clear_tag(&xas, XA_FREE_TAG);
+ xas_unlock_irqrestore(&xas, flags);
+ } while (idr_nomem(&xas, gfp));
+
+ return xas_error(&xas);
+}
+EXPORT_SYMBOL_GPL(idr_alloc_ul);

- if (WARN_ON_ONCE(!(idr->idr_rt.xa_flags & ROOT_IS_IDR)))
- idr->idr_rt.xa_flags |= IDR_RT_MARKER;
-
- radix_tree_iter_init(&iter, *nextid);
- slot = idr_get_free(&idr->idr_rt, &iter, gfp, max);
- if (IS_ERR(slot))
- return PTR_ERR(slot);
-
- radix_tree_iter_replace(&idr->idr_rt, &iter, slot, ptr);
- radix_tree_iter_tag_clear(&idr->idr_rt, &iter, IDR_FREE);
+/**
+ * idr_alloc_u32() - Allocate an ID.
+ * @idr: IDR handle.
+ * @ptr: Pointer to be associated with the new ID.
+ * @nextid: Pointer to minimum ID to allocate.
+ * @max: The maximum ID (inclusive).
+ * @gfp: Memory allocation flags.
+ *
+ * Allocates an unused ID in the range [*nextid, end] and stores it in
+ * @nextid. Note that @max differs from the @end parameter to idr_alloc().
+ *
+ * The IDR uses its own spinlock to protect against simultaneous
+ * modification. @nextid is assigned to before @ptr is stored in the IDR;
+ * if @nextid points into the object referenced by @ptr, it will not be
+ * possible for a simultaneous lookup to see the wrong value in @nextid.
+ *
+ * Return: 0 on success or a negative errno on failure (ENOMEM or ENOSPC).
+ */
+#ifdef CONFIG_64BIT
+int idr_alloc_u32(struct idr *idr, void *ptr, unsigned int *nextid,
+ unsigned int max, gfp_t gfp)
+{
+ XA_STATE(xas, &idr->idr_xa, *nextid);
+ unsigned long flags;

- *nextid = iter.index;
- return 0;
+ if (WARN_ON_ONCE(xa_is_internal(ptr)))
+ return -EINVAL;
+ if (!ptr)
+ ptr = XA_ZERO_ENTRY;
+
+ do {
+ xas_lock_irqsave(&xas, flags);
+ xas_find_tag(&xas, max, XA_FREE_TAG);
+ if (xas.xa_index > max)
+ xas_set_err(&xas, -ENOSPC);
+ else
+ *nextid = xas.xa_index;
+ xas_store(&xas, ptr);
+ xas_clear_tag(&xas, XA_FREE_TAG);
+ xas_unlock_irqrestore(&xas, flags);
+ } while (idr_nomem(&xas, gfp));
+
+ return xas_error(&xas);
}
-EXPORT_SYMBOL_GPL(idr_alloc_ul);
+EXPORT_SYMBOL_GPL(idr_alloc_u32);
+#endif

/**
- * idr_alloc - allocate an id
- * @idr: idr handle
- * @ptr: pointer to be associated with the new id
- * @start: the minimum id (inclusive)
- * @end: the maximum id (exclusive)
- * @gfp: memory allocation flags
+ * idr_alloc() - Allocate an ID.
+ * @idr: IDR handle.
+ * @ptr: Pointer to be associated with the new ID.
+ * @start: The minimum id (inclusive).
+ * @end: The maximum id (exclusive).
+ * @gfp: Memory allocation flags.
+ *
+ * Allocates an unused ID >= start and < end.
*
- * Allocates an unused ID in the range [start, end). Returns -ENOSPC
- * if there are no unused IDs in that range.
+ * If @end is <= 0, it is treated as %INT_MAX + 1. This is to always
+ * allow using @start + N as @end as long as N is <= %INT_MAX. This
+ * differs from the @max parameter to idr_alloc_ul() and idr_alloc_u32().
*
- * Note that @end is treated as max when <= 0. This is to always allow
- * using @start + N as @end as long as N is inside integer range.
+ * The IDR uses its own spinlock to protect against simultaneous
+ * modification. The @ptr is visible to other simultaneous readers
+ * like idr_find() before this function returns.
*
- * Simultaneous modifications to the @idr are not allowed and should be
- * prevented by the user, usually with a lock. idr_alloc() may be called
- * concurrently with read-only accesses to the @idr, such as idr_find() and
- * idr_for_each_entry().
+ * Return: The newly allocated ID on success. -ENOMEM for a memory
+ * allocation failure. -ENOSPC if there are no free IDs in the range.
*/
int idr_alloc(struct idr *idr, void *ptr, int start, int end, gfp_t gfp)
{
@@ -88,16 +149,22 @@ int idr_alloc(struct idr *idr, void *ptr, int start, int end, gfp_t gfp)
EXPORT_SYMBOL_GPL(idr_alloc);

/**
- * idr_alloc_cyclic - allocate new idr entry in a cyclical fashion
- * @idr: idr handle
- * @ptr: pointer to be associated with the new id
- * @start: the minimum id (inclusive)
- * @end: the maximum id (exclusive)
- * @gfp: memory allocation flags
- *
- * Allocates an ID larger than the last ID allocated if one is available.
- * If not, it will attempt to allocate the smallest ID that is larger or
- * equal to @start.
+ * idr_alloc_cyclic - Allocate an ID cyclically.
+ * @idr: IDR handle.
+ * @ptr: Pointer to be associated with the new ID.
+ * @start: The minimum id (inclusive).
+ * @end: The maximum id (exclusive).
+ * @gfp: Memory allocation flags.
+ *
+ * Allocates an unused ID >= @start and < @end. It will start searching
+ * after the last ID allocated and wrap back around to @start.
+ *
+ * The IDR uses its own spinlock to protect against simultaneous
+ * modification. The @ptr is visible to other simultaneous readers
+ * like idr_find() before this function returns.
+ *
+ * Return: The newly allocated ID on success. -ENOMEM for a memory
+ * allocation failure. -ENOSPC if there are no free IDs in the range.
*/
int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end, gfp_t gfp)
{
@@ -119,88 +186,91 @@ int idr_alloc_cyclic(struct idr *idr, void *ptr, int start, int end, gfp_t gfp)
idr->idr_next = id + 1U;
return id;
}
-EXPORT_SYMBOL(idr_alloc_cyclic);
+EXPORT_SYMBOL_GPL(idr_alloc_cyclic);

/**
- * idr_for_each - iterate through all stored pointers
+ * idr_for_each() - iterate through all stored pointers
* @idr: idr handle
* @fn: function to be called for each pointer
* @data: data passed to callback function
*
- * The callback function will be called for each entry in @idr, passing
- * the id, the pointer and the data pointer passed to this function.
+ * The callback function will be called for each non-NULL pointer in
+ * @idr, passing the id, the pointer and @data. No internal locks are
+ * held while @fn is called, so @fn may sleep unless otherwise prevented
+ * by your own locking.
*
* If @fn returns anything other than %0, the iteration stops and that
* value is returned from this function.
*
- * idr_for_each() can be called concurrently with idr_alloc() and
- * idr_remove() if protected by RCU. Newly added entries may not be
- * seen and deleted entries may be seen, but adding and removing entries
- * will not cause other entries to be skipped, nor spurious ones to be seen.
+ * idr_for_each() protects itself with the RCU read lock. Newly added
+ * entries may not be seen and deleted entries may be seen, but adding
+ * and removing entries will not cause other entries to be skipped, nor
+ * spurious ones to be seen.
+ *
+ * Return: The value returned by the last call to @fn.
*/
-int idr_for_each(const struct idr *idr,
+int idr_for_each(struct idr *idr,
int (*fn)(int id, void *p, void *data), void *data)
{
- struct radix_tree_iter iter;
- void __rcu **slot;
+ unsigned long i = 0;
+ void *p;

- radix_tree_for_each_slot(slot, &idr->idr_rt, &iter, 0) {
- int ret = fn(iter.index, rcu_dereference_raw(*slot), data);
+ xa_for_each(&idr->idr_xa, p, i, INT_MAX, XA_PRESENT) {
+ int ret = fn(i, p, data);
if (ret)
return ret;
}

return 0;
}
-EXPORT_SYMBOL(idr_for_each);
+EXPORT_SYMBOL_GPL(idr_for_each);

/**
- * idr_get_next - Find next populated entry
+ * idr_get_next() - Find next populated entry
* @idr: idr handle
- * @nextid: Pointer to lowest possible ID to return
+ * @id: Pointer to lowest possible ID to return
*
* Returns the next populated entry in the tree with an ID greater than
- * or equal to the value pointed to by @nextid. On exit, @nextid is updated
- * to the ID of the found value. To use in a loop, the value pointed to by
- * nextid must be incremented by the user.
+ * or equal to the value pointed to by @id. On exit, @id is updated
+ * to the ID of the found value. To use in a loop, the value pointed to
+ * by @id must be incremented by the user.
+ *
+ * This function protects itself with the RCU read lock, so may return a
+ * stale entry or may skip a newly added entry unless synchronised with
+ * a lock.
*/
-void *idr_get_next(struct idr *idr, int *nextid)
+void *idr_get_next(struct idr *idr, int *id)
{
- struct radix_tree_iter iter;
- void __rcu **slot;
-
- slot = radix_tree_iter_find(&idr->idr_rt, &iter, *nextid);
- if (!slot)
- return NULL;
+ unsigned long index = *id;
+ void *entry = xa_find(&idr->idr_xa, &index, INT_MAX, XA_PRESENT);

- *nextid = iter.index;
- return rcu_dereference_raw(*slot);
+ *id = index;
+ return entry;
}
-EXPORT_SYMBOL(idr_get_next);
+EXPORT_SYMBOL_GPL(idr_get_next);

/**
- * idr_get_next_ul - Find next populated entry
- * @idr: idr handle
- * @nextid: Pointer to lowest possible ID to return
+ * idr_remove() - Remove an item from the IDR.
+ * @idr: IDR handle.
+ * @id: Object ID.
*
- * Returns the next populated entry in the tree with an ID greater than
- * or equal to the value pointed to by @nextid. On exit, @nextid is updated
- * to the ID of the found value. To use in a loop, the value pointed to by
- * nextid must be incremented by the user.
+ * Once this function returns, the ID is available for allocation again.
+ * This function protects itself with the IDR lock.
+ *
+ * Return: The pointer associated with this ID.
*/
-void *idr_get_next_ul(struct idr *idr, unsigned long *nextid)
+void *idr_remove(struct idr *idr, unsigned long id)
{
- struct radix_tree_iter iter;
- void __rcu **slot;
+ unsigned long flags;
+ void *entry;

- slot = radix_tree_iter_find(&idr->idr_rt, &iter, *nextid);
- if (!slot)
- return NULL;
+ xa_lock_irqsave(&idr->idr_xa, flags);
+ entry = __xa_erase(&idr->idr_xa, id);
+ xa_unlock_irqrestore(&idr->idr_xa, flags);

- *nextid = iter.index;
- return rcu_dereference_raw(*slot);
+ return entry;
}
-EXPORT_SYMBOL(idr_get_next_ul);
+EXPORT_SYMBOL_GPL(idr_remove);

/**
* idr_replace - replace pointer for given id
@@ -209,31 +279,35 @@ EXPORT_SYMBOL(idr_get_next_ul);
* @id: Lookup key
*
* Replace the pointer registered with an ID and return the old value.
- * This function can be called under the RCU read lock concurrently with
- * idr_alloc() and idr_remove() (as long as the ID being removed is not
- * the one being replaced!).
+ * This function protects itself with a spinlock.
*
* Returns: the old value on success. %-ENOENT indicates that @id was not
* found. %-EINVAL indicates that @id or @ptr were not valid.
*/
void *idr_replace(struct idr *idr, void *ptr, unsigned long id)
{
- struct radix_tree_node *node;
- void __rcu **slot = NULL;
- void *entry;
+ XA_STATE(xas, &idr->idr_xa, id);
+ unsigned long flags;
+ void *curr;

- if (WARN_ON_ONCE(radix_tree_is_internal_node(ptr)))
+ if (WARN_ON_ONCE(xa_is_internal(ptr)))
return ERR_PTR(-EINVAL);
-
- entry = __radix_tree_lookup(&idr->idr_rt, id, &node, &slot);
- if (!slot || radix_tree_tag_get(&idr->idr_rt, id, IDR_FREE))
- return ERR_PTR(-ENOENT);
-
- __radix_tree_replace(&idr->idr_rt, node, slot, ptr, NULL);
-
- return entry;
+ if (!ptr)
+ ptr = XA_ZERO_ENTRY;
+
+ xas_lock_irqsave(&xas, flags);
+ curr = xas_load(&xas);
+ if (curr)
+ xas_store(&xas, ptr);
+ else
+ curr = ERR_PTR(-ENOENT);
+ xas_unlock_irqrestore(&xas, flags);
+
+ if (xa_is_zero(curr))
+ return NULL;
+ return curr;
}
-EXPORT_SYMBOL(idr_replace);
+EXPORT_SYMBOL_GPL(idr_replace);

/**
* DOC: IDA description
@@ -264,7 +338,7 @@ EXPORT_SYMBOL(idr_replace);
* Developer's notes:
*
* The IDA uses the functionality provided by the IDR & radix tree to store
- * bitmaps in each entry. The IDR_FREE tag means there is at least one bit
+ * bitmaps in each entry. The XA_FREE_TAG tag means there is at least one bit
* free, unlike the IDR where it means at least one entry is free.
*
* I considered telling the radix tree that each slot is an order-10 node
@@ -370,7 +444,7 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
__set_bit(bit, bitmap->bitmap);
if (bitmap_full(bitmap->bitmap, IDA_BITMAP_BITS))
radix_tree_iter_tag_clear(root, &iter,
- IDR_FREE);
+ XA_FREE_TAG);
} else {
new += bit;
if (new < 0)
@@ -426,7 +500,7 @@ void ida_remove(struct ida *ida, int id)
goto err;

__clear_bit(offset, btmp);
- radix_tree_iter_tag_set(&ida->ida_rt, &iter, IDR_FREE);
+ radix_tree_iter_tag_set(&ida->ida_rt, &iter, XA_FREE_TAG);
if (xa_is_value(bitmap)) {
if (xa_to_value(rcu_dereference_raw(*slot)) == 0)
radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index a0fdea68ce9c..75e02cb78ada 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -529,6 +529,30 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order)
return __radix_tree_preload(gfp_mask, nr_nodes);
}

+/* Once the IDR users abandon the preload API, we can use xas_nomem */
+bool idr_nomem(struct xa_state *xas, gfp_t gfp)
+{
+ if (xas->xa_node != XA_ERROR(-ENOMEM)) {
+ xas_destroy(xas);
+ return false;
+ }
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep,
+ gfp | __GFP_NOWARN);
+ if (!xas->xa_alloc) {
+ struct radix_tree_preload *rtp;
+
+ rtp = this_cpu_ptr(&radix_tree_preloads);
+ if (!rtp->nr)
+ return false;
+ xas->xa_alloc = rtp->nodes;
+ rtp->nodes = xas->xa_alloc->parent;
+ rtp->nr--;
+ }
+
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+
static unsigned radix_tree_load_root(const struct radix_tree_root *root,
struct radix_tree_node **nodep, unsigned long *maxindex)
{
@@ -562,7 +586,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
maxshift += RADIX_TREE_MAP_SHIFT;

entry = rcu_dereference_raw(root->xa_head);
- if (!entry && (!is_idr(root) || root_tag_get(root, IDR_FREE)))
+ if (!entry && (!is_idr(root) || root_tag_get(root, XA_FREE_TAG)))
goto out;

do {
@@ -572,10 +596,10 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
return -ENOMEM;

if (is_idr(root)) {
- all_tag_set(node, IDR_FREE);
- if (!root_tag_get(root, IDR_FREE)) {
- tag_clear(node, IDR_FREE, 0);
- root_tag_set(root, IDR_FREE);
+ all_tag_set(node, XA_FREE_TAG);
+ if (!root_tag_get(root, XA_FREE_TAG)) {
+ tag_clear(node, XA_FREE_TAG, 0);
+ root_tag_set(root, XA_FREE_TAG);
}
} else {
/* Propagate the aggregated tag info to the new child */
@@ -646,8 +670,8 @@ static inline bool radix_tree_shrink(struct radix_tree_root *root,
* one (root->xa_head) as far as dependent read barriers go.
*/
root->xa_head = (void __rcu *)child;
- if (is_idr(root) && !tag_get(node, IDR_FREE, 0))
- root_tag_clear(root, IDR_FREE);
+ if (is_idr(root) && !tag_get(node, XA_FREE_TAG, 0))
+ root_tag_clear(root, XA_FREE_TAG);

/*
* We have a dilemma here. The node's slot[0] must not be
@@ -1074,7 +1098,7 @@ static bool node_tag_get(const struct radix_tree_root *root,
/*
* IDR users want to be able to store NULL in the tree, so if the slot isn't
* free, don't adjust the count, even if it's transitioning between NULL and
- * non-NULL. For the IDA, we mark slots as being IDR_FREE while they still
+ * non-NULL. For the IDA, we mark slots as being XA_FREE_TAG while they still
* have empty bits, but it only stores NULL in slots when they're being
* deleted.
*/
@@ -1084,7 +1108,7 @@ static int calculate_count(struct radix_tree_root *root,
{
if (is_idr(root)) {
unsigned offset = get_slot_offset(node, slot);
- bool free = node_tag_get(root, node, IDR_FREE, offset);
+ bool free = node_tag_get(root, node, XA_FREE_TAG, offset);
if (!free)
return 0;
if (!old)
@@ -1915,7 +1939,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
int tag;

if (is_idr(root))
- node_tag_set(root, node, IDR_FREE, offset);
+ node_tag_set(root, node, XA_FREE_TAG, offset);
else
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
node_tag_clear(root, node, tag, offset);
@@ -1963,7 +1987,7 @@ void *radix_tree_delete_item(struct radix_tree_root *root,
void *entry;

entry = __radix_tree_lookup(root, index, &node, &slot);
- if (!entry && (!is_idr(root) || node_tag_get(root, node, IDR_FREE,
+ if (!entry && (!is_idr(root) || node_tag_get(root, node, XA_FREE_TAG,
get_slot_offset(node, slot))))
return NULL;

@@ -2070,7 +2094,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,

grow:
shift = radix_tree_load_root(root, &child, &maxindex);
- if (!radix_tree_tagged(root, IDR_FREE))
+ if (!radix_tree_tagged(root, XA_FREE_TAG))
start = max(start, maxindex + 1);
if (start > max)
return ERR_PTR(-ENOSPC);
@@ -2091,7 +2115,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
offset, 0, 0);
if (!child)
return ERR_PTR(-ENOMEM);
- all_tag_set(child, IDR_FREE);
+ all_tag_set(child, XA_FREE_TAG);
rcu_assign_pointer(*slot, node_to_entry(child));
if (node)
node->count++;
@@ -2100,8 +2124,8 @@ void __rcu **idr_get_free(struct radix_tree_root *root,

node = entry_to_node(child);
offset = radix_tree_descend(node, &child, start);
- if (!tag_get(node, IDR_FREE, offset)) {
- offset = radix_tree_find_next_bit(node, IDR_FREE,
+ if (!tag_get(node, XA_FREE_TAG, offset)) {
+ offset = radix_tree_find_next_bit(node, XA_FREE_TAG,
offset + 1);
start = next_index(start, node, offset);
if (start > max)
@@ -2125,32 +2149,11 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
iter->next_index = 1;
iter->node = node;
__set_iter_shift(iter, shift);
- set_iter_tags(iter, node, offset, IDR_FREE);
+ set_iter_tags(iter, node, offset, XA_FREE_TAG);

return slot;
}

-/**
- * idr_destroy - release all internal memory from an IDR
- * @idr: idr handle
- *
- * After this function is called, the IDR is empty, and may be reused or
- * the data structure containing it may be freed.
- *
- * A typical clean-up sequence for objects stored in an idr tree will use
- * idr_for_each() to free all objects, if necessary, then idr_destroy() to
- * free the memory used to keep track of those objects.
- */
-void idr_destroy(struct idr *idr)
-{
- struct radix_tree_node *node = rcu_dereference_raw(idr->idr_rt.xa_head);
- if (radix_tree_is_internal_node(node))
- radix_tree_free_nodes(node);
- idr->idr_rt.xa_head = NULL;
- root_tag_set(&idr->idr_rt, IDR_FREE);
-}
-EXPORT_SYMBOL(idr_destroy);
-
static void
radix_tree_node_ctor(void *arg)
{
diff --git a/lib/xarray.c b/lib/xarray.c
index c044373d6893..ace309cc9253 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -46,6 +46,11 @@ static inline unsigned int xa_lock_type(const struct xarray *xa)
return (__force unsigned int)xa->xa_flags & 3;
}

+static inline bool xa_track_free(const struct xarray *xa)
+{
+ return xa->xa_flags & XA_FLAGS_TRACK_FREE;
+}
+
static inline void xa_tag_set(struct xarray *xa, xa_tag_t tag)
{
if (!(xa->xa_flags & XA_FLAGS_TAG(tag)))
@@ -81,6 +86,11 @@ static inline bool node_any_tag(struct xa_node *node, xa_tag_t tag)
return !bitmap_empty(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
}

+static inline void node_tag_all(struct xa_node *node, xa_tag_t tag)
+{
+ bitmap_fill(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
+}
+
#define tag_inc(tag) do { \
tag = (__force xa_tag_t)((__force unsigned)(tag) + 1); \
} while (0)
@@ -390,6 +400,8 @@ static void xas_shrink(struct xa_state *xas)
xas->xa_node = XAS_BOUNDS;

RCU_INIT_POINTER(xa->xa_head, entry);
+ if (xa_track_free(xa) && !node_get_tag(node, 0, XA_FREE_TAG))
+ xa_tag_clear(xa, XA_FREE_TAG);

node->count = 0;
node->nr_values = 0;
@@ -522,6 +534,14 @@ static int xas_expand(struct xa_state *xas, void *head)
RCU_INIT_POINTER(node->slots[0], head);

/* Propagate the aggregated tag info to the new child */
+ if (xa_track_free(xa)) {
+ node_tag_all(node, XA_FREE_TAG);
+ if (!xa_tagged(xa, XA_FREE_TAG)) {
+ node_clear_tag(node, 0, XA_FREE_TAG);
+ xa_tag_set(xa, XA_FREE_TAG);
+ }
+ tag_inc(tag);
+ }
for (;;) {
if (xa_tagged(xa, tag))
node_set_tag(node, 0, tag);
@@ -598,6 +618,8 @@ void *xas_create(struct xa_state *xas)
node = xas_alloc(xas, shift);
if (!node)
break;
+ if (xa_track_free(xa))
+ node_tag_all(node, XA_FREE_TAG);
rcu_assign_pointer(*slot, xa_mk_node(node));
} else if (xa_is_node(entry)) {
node = xa_to_node(entry);
@@ -815,6 +837,10 @@ void xas_init_tags(const struct xa_state *xas)
{
xa_tag_t tag = 0;

+ if (xa_track_free(xas->xa)) {
+ xas_set_tag(xas, XA_FREE_TAG);
+ tag_inc(tag);
+ }
for (;;) {
xas_clear_tag(xas, tag);
if (tag == XA_TAG_MAX)
@@ -1125,6 +1151,8 @@ void *xa_load(struct xarray *xa, unsigned long index)
rcu_read_lock();
do {
entry = xas_load(&xas);
+ if (xa_is_zero(entry))
+ entry = NULL;
} while (xas_retry(&xas, entry));
rcu_read_unlock();

@@ -1134,6 +1162,8 @@ EXPORT_SYMBOL(xa_load);

static void *xas_result(struct xa_state *xas, void *curr)
{
+ if (xa_is_zero(curr))
+ return NULL;
XA_NODE_BUG_ON(xas->xa_node, xa_is_internal(curr));
if (xas_error(xas))
curr = xas->xa_node;
@@ -1626,6 +1656,8 @@ void xa_dump_entry(const void *entry, unsigned long index, unsigned long shift)
pr_cont("retry (%ld)\n", xa_to_internal(entry));
else if (xa_is_sibling(entry))
pr_cont("sibling (slot %ld)\n", xa_to_sibling(entry));
+ else if (xa_is_zero(entry))
+ pr_cont("zero (%ld)\n", xa_to_internal(entry));
else
pr_cont("UNKNOWN ENTRY (%px)\n", entry);
}
diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
index 7499319e85f8..9701db3b7683 100644
--- a/tools/testing/radix-tree/idr-test.c
+++ b/tools/testing/radix-tree/idr-test.c
@@ -177,6 +177,31 @@ void idr_get_next_test(void)
idr_destroy(&idr);
}

+void idr_shrink_test(struct idr *idr)
+{
+ assert(idr_alloc(idr, NULL, 1, 2, GFP_KERNEL) == 1);
+ assert(idr_alloc(idr, NULL, 5000, 5001, GFP_KERNEL) == 5000);
+ idr_remove(idr, 5000);
+ idr_remove(idr, 1);
+ assert(idr_is_empty(idr));
+}
+
+/*
+ * Check that growing the IDR works properly.
+ */
+void idr_alloc_far(struct idr *idr, unsigned long end)
+{
+ int i;
+
+ for (i = 1; i < end; i++)
+ assert(idr_alloc(idr, idr, i, i + 1, GFP_KERNEL) == i);
+
+ for (i = 1; i <= end; i++) {
+ assert(idr_alloc(idr, idr, 1, 0, GFP_KERNEL) == end);
+ idr_remove(idr, end);
+ }
+}
+
void idr_checks(void)
{
unsigned long i;
@@ -227,6 +252,13 @@ void idr_checks(void)
idr_null_test();
idr_nowait_test();
idr_get_next_test();
+ idr_shrink_test(&idr);
+ idr_destroy(&idr);
+
+ for (i = 2; i < 18; i++) {
+ idr_alloc_far(&idr, 1UL << i);
+ idr_destroy(&idr);
+ }
}

/*
@@ -505,7 +537,9 @@ void ida_thread_tests(void)
int __weak main(void)
{
radix_tree_init();
+ printv(0, "starting IDR checks\n");
idr_checks();
+ printv(0, "starting IDA checks\n");
ida_checks();
ida_thread_tests();
radix_tree_cpu_dead(1);
--
2.15.1


2018-01-17 21:10:17

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 18/99] xarray: Add ability to store errno values

From: Matthew Wilcox <[email protected]>

The radix tree offers no way to store IS_ERR pointers, and documenting
that the XArray does not either led to some concern. Here is a
sanctioned way to store errnos in the XArray. I'm concerned that it
will confuse people who can't tell the difference between xa_is_err()
and xa_is_errno(), so I've added copious kernel-doc to help them tell
the difference.
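A minimal sketch of the distinction (the array, index and message are
hypothetical):

	static void errno_entry_example(struct xarray *xa)
	{
		void *entry;

		/* Stores an errno as data; this is not an XArray failure */
		xa_store(xa, 0, xa_mk_errno(-EIO), GFP_KERNEL);

		entry = xa_load(xa, 0);
		if (xa_is_errno(entry))		/* true here */
			pr_debug("stored errno %ld\n", xa_to_errno(entry));

		/* xa_err() still reports no error from the XArray itself */
		WARN_ON(xa_err(entry) != 0);
	}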

Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/core-api/xarray.rst | 8 +++++--
include/linux/xarray.h | 44 ++++++++++++++++++++++++++++++++++
tools/testing/radix-tree/xarray-test.c | 8 ++++++-
3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 914999c0bf3f..0172c7d9e6ea 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -42,8 +42,12 @@ When you retrieve an entry from the XArray, you can check whether it is
a value entry by calling :c:func:`xa_is_value`, and convert it back to
an integer by calling :c:func:`xa_to_value`.

-The XArray does not support storing :c:func:`IS_ERR` pointers as some
-conflict with value entries or internal entries.
+The XArray does not support storing :c:func:`IS_ERR` pointers because
+some conflict with value entries or internal entries. If you need
+to store error numbers in the array, you can encode them into error
+entries with :c:func:`xa_mk_errno`, check whether a returned entry is
+an error with :c:func:`xa_is_errno` and convert it back into an errno
+with :c:func:`xa_to_errno`.

An unusual feature of the XArray is the ability to create entries which
occupy a range of indices. Once stored to, looking up any index in
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index acb6d02ff194..ca6af6dd42c4 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -75,6 +75,50 @@ static inline bool xa_is_value(const void *entry)
return (unsigned long)entry & 1;
}

+/**
+ * xa_mk_errno() - Create an XArray entry from an error number.
+ * @error: Error number to store in XArray.
+ *
+ * Return: An entry suitable for storing in the XArray.
+ */
+static inline void *xa_mk_errno(long error)
+{
+ return (void *)(error << 2);
+}
+
+/**
+ * xa_to_errno() - Get error number stored in an XArray entry.
+ * @entry: XArray entry.
+ *
+ * Calling this function on an entry which is not an xa_is_errno() will
+ * yield unpredictable results. Do not confuse this function with xa_err();
+ * this function is for errnos which have been stored in the XArray, and
+ * that function is for errors returned from the XArray implementation.
+ *
+ * Return: The error number stored in the XArray entry.
+ */
+static inline long xa_to_errno(const void *entry)
+{
+ return (long)entry >> 2;
+}
+
+/**
+ * xa_is_errno() - Determine if an entry is an errno.
+ * @entry: XArray entry.
+ *
+ * Do not confuse this function with xa_is_err(); that function tells you
+ * whether the XArray implementation returned an error; this function
+ * tells you whether the entry you successfully stored in the XArray
+ * represented an errno. If you have never stored an errno in the XArray,
+ * you do not have to check this.
+ *
+ * Return: True if the entry is an errno, false if it is a pointer.
+ */
+static inline bool xa_is_errno(const void *entry)
+{
+ return (((unsigned long)entry & 3) == 0) && (entry > (void *)-4096);
+}
+
/*
* xa_mk_internal() - Create an internal entry.
* @v: Value to turn into an internal entry.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 2ad460c1febf..4d3541ac31e9 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -29,7 +29,13 @@ void check_xa_err(struct xarray *xa)
assert(xa_err(xa_store(xa, 1, xa_mk_value(0), GFP_KERNEL)) == 0);
assert(xa_err(xa_store(xa, 1, NULL, 0)) == 0);
// kills the test-suite :-(
-// assert(xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) == -EINVAL);
+// assert(xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) == -EINVAL);
+
+ assert(xa_err(xa_store(xa, 0, xa_mk_errno(-ENOMEM), GFP_KERNEL)) == 0);
+ assert(xa_err(xa_load(xa, 0)) == 0);
+ assert(xa_is_errno(xa_load(xa, 0)) == true);
+ assert(xa_to_errno(xa_load(xa, 0)) == -ENOMEM);
+ xa_erase(xa, 0);
}

void check_xa_tag(struct xarray *xa)
--
2.15.1


2018-01-17 21:10:39

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 16/99] xarray: Add xas_create_range

From: Matthew Wilcox <[email protected]>

This hopefully temporary function is useful for users who have not yet
been converted to multi-index entries.
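
A minimal usage sketch (locking elided as in the test-suite;
fill_range() itself is illustrative): create every slot in the range
with one walk, then store into the slots individually:

	static void fill_range(struct xarray *xa, unsigned long max)
	{
		XA_STATE(xas, xa, 0);
		unsigned long i;

		/* One pass allocates every chunk covering 0..max. */
		xas_create_range(&xas, max);

		for (i = 0; i <= max; i++) {
			xas_set(&xas, i);
			/* Retry on ENOMEM, as this naive version may not
			 * propagate allocation failures to @xas. */
			do {
				xas_store(&xas, xa_mk_value(i));
			} while (xas_nomem(&xas, GFP_KERNEL));
		}
	}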

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 2 ++
lib/xarray.c | 22 ++++++++++++++++++++++
2 files changed, 24 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 01ce313fc00e..acb6d02ff194 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -705,6 +705,8 @@ void xas_init_tags(const struct xa_state *);
bool xas_nomem(struct xa_state *, gfp_t);
void xas_pause(struct xa_state *);

+void xas_create_range(struct xa_state *, unsigned long max);
+
/**
* xas_reload() - Refetch an entry from the xarray.
* @xas: XArray operation state.
diff --git a/lib/xarray.c b/lib/xarray.c
index e8ece1fff9fd..c044373d6893 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -612,6 +612,28 @@ void *xas_create(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_create);

+/**
+ * xas_create_range() - Ensure that stores to this range will succeed
+ * @xas: XArray operation state.
+ * @max: The highest index to create a slot for.
+ *
+ * Creates all of the slots in the range between the current position of
+ * @xas and @max. This is for the benefit of users who have not yet been
+ * converted to multi-index entries.
+ *
+ * The implementation is naive.
+ */
+void xas_create_range(struct xa_state *xas, unsigned long max)
+{
+ XA_STATE(tmp, xas->xa, xas->xa_index);
+
+ do {
+ xas_create(&tmp);
+ xas_set(&tmp, tmp.xa_index + XA_CHUNK_SIZE);
+ } while (tmp.xa_index < max);
+}
+EXPORT_SYMBOL_GPL(xas_create_range);
+
static void store_siblings(struct xa_state *xas, void *entry, void *curr,
int *countp, int *valuesp)
{
--
2.15.1


2018-01-17 21:11:28

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 17/99] xarray: Add MAINTAINERS entry

From: Matthew Wilcox <[email protected]>

Add myself as XArray and IDR maintainer.

Signed-off-by: Matthew Wilcox <[email protected]>
---
MAINTAINERS | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 18994806e441..55ae4c0b38d5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14893,6 +14893,18 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/vdso
S: Maintained
F: arch/x86/entry/vdso/

+XARRAY
+M: Matthew Wilcox <[email protected]>
+M: Matthew Wilcox <[email protected]>
+L: [email protected]
+S: Supported
+F: Documentation/core-api/xarray.rst
+F: lib/idr.c
+F: lib/xarray.c
+F: include/linux/idr.h
+F: include/linux/xarray.h
+F: tools/testing/radix-tree
+
XC2028/3028 TUNER DRIVER
M: Mauro Carvalho Chehab <[email protected]>
M: Mauro Carvalho Chehab <[email protected]>
--
2.15.1


2018-01-17 21:12:06

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 15/99] xarray: Add xas_next and xas_prev

From: Matthew Wilcox <[email protected]>

These two functions move the xas index by one position, and adjust the
rest of the iterator state to match it. This is more efficient than
calling xas_set(), as it keeps the iterator at the leaves of the tree
instead of walking it from the root each time.
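
For example (a sketch modelled on the check_move() tests below; the
pr_info() is illustrative), a dense forward scan pays for only one
walk from the root:

	static void walk_forward(struct xarray *xa, unsigned long max)
	{
		XA_STATE(xas, xa, 0);
		void *entry;

		rcu_read_lock();
		entry = xas_load(&xas);		/* one walk from the root */
		while (xas.xa_index < max) {
			if (entry)
				pr_info("index %lu present\n", xas.xa_index);
			entry = xas_next(&xas);	/* stays at the leaf */
		}
		rcu_read_unlock();
	}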

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 67 +++++++++
lib/xarray.c | 74 ++++++++++
tools/testing/radix-tree/xarray-test.c | 259 +++++++++++++++++++++++++++++++++
3 files changed, 400 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index d106b2fe4cec..01ce313fc00e 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -660,6 +660,12 @@ static inline bool xas_not_node(struct xa_node *node)
return ((unsigned long)node & 3) || !node;
}

+/* True if the node represents RESTART or an error */
+static inline bool xas_frozen(struct xa_node *node)
+{
+ return (unsigned long)node & 2;
+}
+
/* True if the node represents head-of-tree, RESTART or BOUNDS */
static inline bool xas_top(struct xa_node *node)
{
@@ -901,6 +907,67 @@ enum {
for (entry = xas_find_tag(xas, max, tag); entry; \
entry = xas_next_tag(xas, max, tag))

+void *__xas_next(struct xa_state *);
+void *__xas_prev(struct xa_state *);
+
+/**
+ * xas_prev() - Move iterator to previous index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * subtracted from the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index 0, this function wraps
+ * around to %ULONG_MAX.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_prev(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == 0))
+ return __xas_prev(xas);
+
+ xas->xa_index--;
+ xas->xa_offset--;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
+/**
+ * xas_next() - Move state to next index.
+ * @xas: XArray operation state.
+ *
+ * If the @xas was in an error state, it will remain in an error state
+ * and this function will return %NULL. If the @xas has never been walked,
+ * it will have the effect of calling xas_load(). Otherwise one will be
+ * added to the index and the state will be walked to the correct
+ * location in the array for the next operation.
+ *
+ * If the iterator was referencing index %ULONG_MAX, this function wraps
+ * around to 0.
+ *
+ * Return: The entry at the new index. This may be %NULL or an internal
+ * entry, although it should never be a node entry.
+ */
+static inline void *xas_next(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (unlikely(xas_not_node(node) || node->shift ||
+ xas->xa_offset == XA_CHUNK_MASK))
+ return __xas_next(xas);
+
+ xas->xa_index++;
+ xas->xa_offset++;
+ return xa_entry(xas->xa, node, xas->xa_offset);
+}
+
/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
void xas_destroy(struct xa_state *);

diff --git a/lib/xarray.c b/lib/xarray.c
index af81d4bf9ae1..e8ece1fff9fd 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -838,6 +838,80 @@ void xas_pause(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_pause);

+/*
+ * __xas_prev() - Find the previous entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_prev() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_prev(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index--;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset--;
+
+ /* xa_offset is an unsigned char; decrementing 0 wraps it to 255 */
+ while (xas->xa_offset == 255) {
+ xas->xa_offset = xas->xa_node->offset - 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_prev);
+
+/*
+ * __xas_next() - Find the next entry in the XArray.
+ * @xas: XArray operation state.
+ *
+ * Helper function for xas_next() which handles all the complex cases
+ * out of line.
+ */
+void *__xas_next(struct xa_state *xas)
+{
+ void *entry;
+
+ if (!xas_frozen(xas->xa_node))
+ xas->xa_index++;
+ if (xas_not_node(xas->xa_node))
+ return xas_load(xas);
+
+ if (xas->xa_offset != get_offset(xas->xa_index, xas->xa_node))
+ xas->xa_offset++;
+
+ while (xas->xa_offset == XA_CHUNK_SIZE) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ return set_bounds(xas);
+ }
+
+ for (;;) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+}
+EXPORT_SYMBOL_GPL(__xas_next);
+
/**
* xas_find() - Find the next present entry in the XArray.
* @xas: XArray operation state.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 26b25be81656..2ad460c1febf 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -49,6 +49,147 @@ void check_xa_tag(struct xarray *xa)
assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
}

+/* Check that putting the xas into an error state works correctly */
+void check_xas_error(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+
+ assert(xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL) == 0);
+ assert(xa_load(xa, 1) == xa_mk_value(1));
+
+ assert(xas_error(&xas) == 0);
+
+ xas_set_err(&xas, -ENOTTY);
+ assert(xas_error(&xas) == -ENOTTY);
+
+ xas_set_err(&xas, -ENOSPC);
+ assert(xas_error(&xas) == -ENOSPC);
+
+ xas_set_err(&xas, -ENOMEM);
+ assert(xas_error(&xas) == -ENOMEM);
+
+ assert(xas_load(&xas) == NULL);
+ assert(xas_store(&xas, &xas) == NULL);
+ assert(xas_load(&xas) == NULL);
+
+ assert(xas.xa_index == 0);
+ assert(xas_next(&xas) == NULL);
+ assert(xas.xa_index == 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == 0);
+
+ xas_retry(&xas, XA_RETRY_ENTRY);
+ assert(xas_error(&xas) == 0);
+
+ assert(xas_find(&xas, ULONG_MAX) == xa_mk_value(1));
+ assert(xas.xa_index == 1);
+ assert(xas_error(&xas) == 0);
+
+ assert(xas_find(&xas, ULONG_MAX) == NULL);
+ assert(xas.xa_index > 1);
+ assert(xas_error(&xas) == 0);
+ assert(xas.xa_node == XAS_BOUNDS);
+}
+
+void check_xas_pause(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *entry;
+ unsigned int seen;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_set_tag(xa, 0, XA_TAG_0);
+
+ seen = 0;
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++) {
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ }
+ }
+ rcu_read_unlock();
+ /* We don't see an entry that was added after we started */
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ }
+ rcu_read_unlock();
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ rcu_read_lock();
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xa_erase(xa, 1);
+ }
+ rcu_read_unlock();
+ assert(seen == 1);
+
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ seen = 0;
+ xas_set(&xas, 0);
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ for (entry = xas_load(&xas); entry; entry = xas_next(&xas)) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+
+ seen = 0;
+ xas_set(&xas, 0);
+ xa_set_tag(xa, 1, XA_TAG_0);
+ xas_for_each_tag(&xas, entry, ULONG_MAX, XA_TAG_0) {
+ if (!seen++)
+ xas_pause(&xas);
+ }
+ assert(seen == 2);
+}
+
void check_xas_retry(struct xarray *xa)
{
XA_STATE(xas, xa, 0);
@@ -257,9 +398,108 @@ void check_xas_delete(struct xarray *xa)
}
}

+void check_move_small(struct xarray *xa, unsigned long idx)
+{
+ XA_STATE(xas, xa, 0);
+ unsigned long i;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, idx, xa_mk_value(idx), GFP_KERNEL);
+
+ for (i = 0; i < idx * 4; i++) {
+ void *entry = xas_next(&xas);
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ }
+ xas_next(&xas);
+ assert(xas.xa_index == i);
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if (i <= idx)
+ assert(xas.xa_node != XAS_RESTART);
+ assert(xas.xa_index == i);
+ if (i == 0 || i == idx)
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ } while (i > 0);
+
+ xas_set(&xas, ULONG_MAX);
+ assert(xas_next(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+ assert(xas_next(&xas) == xa_mk_value(0));
+ assert(xas.xa_index == 0);
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+}
+
+void check_move(struct xarray *xa)
+{
+ XA_STATE(xas, xa, (1 << 16) - 1);
+ unsigned long i;
+
+ for (i = 0; i < (1 << 16); i++) {
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ }
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ assert(entry == xa_mk_value(i));
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+
+ for (i = (1 << 8); i < (1 << 15); i++) {
+ xa_erase(xa, i);
+ }
+
+ i = xas.xa_index;
+
+ do {
+ void *entry = xas_prev(&xas);
+ i--;
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ } while (i != 0);
+
+ assert(xas_prev(&xas) == NULL);
+ assert(xas.xa_index == ULONG_MAX);
+
+ do {
+ void *entry = xas_next(&xas);
+ if ((i < (1 << 8)) || (i >= (1 << 15)))
+ assert(entry == xa_mk_value(i));
+ else
+ assert(entry == NULL);
+ assert(i == xas.xa_index);
+ i++;
+ } while (i < (1 << 16));
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
+ unsigned long i;

check_xa_err(&array);
item_kill_tree(&array);
@@ -267,9 +507,15 @@ void xarray_checks(void)
check_xa_tag(&array);
item_kill_tree(&array);

+ check_xas_error(&array);
+ item_kill_tree(&array);
+
check_xas_retry(&array);
item_kill_tree(&array);

+ check_xas_pause(&array);
+ item_kill_tree(&array);
+
check_xa_load(&array);
item_kill_tree(&array);

@@ -283,6 +529,19 @@ void xarray_checks(void)
check_find(&array);
check_xas_delete(&array);
item_kill_tree(&array);
+
+ for (i = 0; i < 16; i++) {
+ check_move_small(&array, 1UL << i);
+ item_kill_tree(&array);
+ }
+
+ for (i = 2; i < 16; i++) {
+ check_move_small(&array, (1UL << i) - 1);
+ item_kill_tree(&array);
+ }
+
+ check_move(&array);
+ item_kill_tree(&array);
}

int __weak main(void)
--
2.15.1


2018-01-17 21:12:17

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 14/99] xarray: Add xa_destroy

From: Matthew Wilcox <[email protected]>

This function frees all the internal memory allocated to the xarray
and reinitialises it to be empty.
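
A sketch of the caller's responsibility noted in the kernel-doc below
(this assumes the entries are kmalloc'd objects rather than value
entries; teardown() is illustrative):

	static void teardown(struct xarray *xa)
	{
		unsigned long index = 0;
		void *item;

		/* The XArray never frees what you stored in it... */
		xa_for_each(xa, item, index, ULONG_MAX, XA_PRESENT)
			kfree(item);

		/* ...but xa_destroy() frees all the xa_nodes. */
		xa_destroy(xa);
	}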

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 1 +
lib/xarray.c | 26 ++++++++++++++++++++++++++
2 files changed, 27 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index d79fd48e4957..d106b2fe4cec 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -221,6 +221,7 @@ void *xa_find_after(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
unsigned long max, unsigned int n, xa_tag_t);
+void xa_destroy(struct xarray *);

/**
* xa_init() - Initialise an empty XArray.
diff --git a/lib/xarray.c b/lib/xarray.c
index be276618f81b..af81d4bf9ae1 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1448,6 +1448,32 @@ unsigned int xa_extract(struct xarray *xa, void **dst, unsigned long start,
}
EXPORT_SYMBOL(xa_extract);

+/**
+ * xa_destroy() - Free all internal data structures.
+ * @xa: XArray.
+ *
+ * After calling this function, the XArray is empty and has freed all memory
+ * allocated for its internal data structures. You are responsible for
+ * freeing the objects referenced by the XArray.
+ */
+void xa_destroy(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ unsigned long flags;
+ void *entry;
+
+ xas.xa_node = NULL;
+ xas_lock_irqsave(&xas, flags);
+ entry = xa_head_locked(xa);
+ RCU_INIT_POINTER(xa->xa_head, NULL);
+ xas_init_tags(&xas);
+ /* lockdep checks we're still holding the lock in xas_free_nodes() */
+ if (xa_is_node(entry))
+ xas_free_nodes(&xas, xa_to_node(entry));
+ xas_unlock_irqrestore(&xas, flags);
+}
+EXPORT_SYMBOL(xa_destroy);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
--
2.15.1


2018-01-17 21:12:46

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 13/99] xarray: Add xa_extract

From: Matthew Wilcox <[email protected]>

This function combines the functionality of radix_tree_gang_lookup() and
radix_tree_gang_lookup_tagged(). It extracts entries matching the
specified filter into a normal array.
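
A sketch of what a converted gang-lookup caller might look like (the
batch size and tag choice are illustrative):

	static unsigned int grab_tagged(struct xarray *xa)
	{
		void *batch[16];
		unsigned int i, n;

		/* Copy up to 16 entries tagged XA_TAG_0, from index 0 up. */
		n = xa_extract(xa, batch, 0, ULONG_MAX, 16, XA_TAG_0);
		for (i = 0; i < n; i++)
			pr_info("extracted %p\n", batch[i]);
		return n;
	}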

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 2 ++
lib/xarray.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 82 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index fcd7ef68933a..d79fd48e4957 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -219,6 +219,8 @@ void *xa_find(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
void *xa_find_after(struct xarray *xa, unsigned long *index,
unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
+unsigned int xa_extract(struct xarray *, void **dst, unsigned long start,
+ unsigned long max, unsigned int n, xa_tag_t);

/**
* xa_init() - Initialise an empty XArray.
diff --git a/lib/xarray.c b/lib/xarray.c
index 3e6be0a07525..be276618f81b 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -1368,6 +1368,86 @@ void *xa_find_after(struct xarray *xa, unsigned long *indexp,
}
EXPORT_SYMBOL(xa_find_after);

+static unsigned int xas_extract_present(struct xa_state *xas, void **dst,
+ unsigned long max, unsigned int n)
+{
+ void *entry;
+ unsigned int i = 0;
+
+ rcu_read_lock();
+ xas_for_each(xas, entry, max) {
+ if (xas_retry(xas, entry))
+ continue;
+ dst[i++] = entry;
+ if (i == n)
+ break;
+ }
+ rcu_read_unlock();
+
+ return i;
+}
+
+static unsigned int xas_extract_tag(struct xa_state *xas, void **dst,
+ unsigned long max, unsigned int n, xa_tag_t tag)
+{
+ void *entry;
+ unsigned int i = 0;
+
+ rcu_read_lock();
+ xas_for_each_tag(xas, entry, max, tag) {
+ if (xas_retry(xas, entry))
+ continue;
+ dst[i++] = entry;
+ if (i == n)
+ break;
+ }
+ rcu_read_unlock();
+
+ return i;
+}
+
+/**
+ * xa_extract() - Copy selected entries from the XArray into a normal array.
+ * @xa: The source XArray to copy from.
+ * @dst: The buffer to copy entries into.
+ * @start: The first index in the XArray eligible to be selected.
+ * @max: The last index in the XArray eligible to be selected.
+ * @n: The maximum number of entries to copy.
+ * @filter: Selection criterion.
+ *
+ * Copies up to @n entries that match @filter from the XArray. The
+ * copied entries will have indices between @start and @max, inclusive.
+ *
+ * The @filter may be an XArray tag value, in which case entries which are
+ * tagged with that tag will be copied. It may also be %XA_PRESENT, in
+ * which case non-NULL entries will be copied.
+ *
+ * This function uses the RCU lock to protect itself. That means that the
+ * entries returned may not represent a snapshot of the XArray at a moment
+ * in time. For example, if index 5 is stored to, then index 10 is stored to,
+ * calling xa_extract() may return the old contents of index 5 and the
+ * new contents of index 10. Indices not modified while this function is
+ * running will not be skipped.
+ *
+ * If you need stronger guarantees, holding the xa_lock across calls to this
+ * function will prevent concurrent modification.
+ *
+ * Return: The number of entries copied.
+ */
+unsigned int xa_extract(struct xarray *xa, void **dst, unsigned long start,
+ unsigned long max, unsigned int n, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, start);
+
+ if (!n)
+ return 0;
+
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ return xas_extract_tag(&xas, dst, max, n, filter);
+ return xas_extract_present(&xas, dst, max, n);
+}
+EXPORT_SYMBOL(xa_extract);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
--
2.15.1


2018-01-17 21:13:42

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 11/99] xarray: Add xa_cmpxchg and xa_insert

From: Matthew Wilcox <[email protected]>

Like cmpxchg(), xa_cmpxchg will only store to the index if the current
entry matches the old entry. It returns the current entry, which is
usually more useful than the errno returned by radix_tree_insert().
For users who only want the errno, the xa_insert() wrapper provides
a more convenient calling convention.
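
A sketch of the two calling conventions (the cache naming is
illustrative):

	/* Callers who only want an errno use xa_insert()... */
	static int cache_add(struct xarray *xa, unsigned long id, void *obj)
	{
		return xa_insert(xa, id, obj, GFP_KERNEL);
	}

	/* ...callers who want to see the current entry use xa_cmpxchg(). */
	static void *cache_replace(struct xarray *xa, unsigned long id,
				   void *old, void *new)
	{
		void *curr = xa_cmpxchg(xa, id, old, new, GFP_KERNEL);

		/* The exchange happened iff the return value equals old. */
		return curr;
	}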

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 56 ++++++++++++++++++++++++++++
lib/xarray.c | 68 ++++++++++++++++++++++++++++++++++
tools/testing/radix-tree/xarray-test.c | 10 +++++
3 files changed, 134 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 139b1c1fd022..fc9ab3b13e60 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -210,6 +210,8 @@ struct xarray {
void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *xa_cmpxchg(struct xarray *, unsigned long index,
+ void *old, void *entry, gfp_t);
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -264,6 +266,32 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
return xa->xa_flags & XA_FLAGS_TAG(tag);
}

+/**
+ * xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Return: -EEXIST if another entry was present, 0 if the store succeeded,
+ * or another negative errno if a different error happened (eg -ENOMEM).
+ */
+static inline int xa_insert(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr = xa_cmpxchg(xa, index, NULL, entry, gfp);
+ if (!curr)
+ return 0;
+ if (xa_is_err(curr))
+ return xa_err(curr);
+ return -EEXIST;
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -283,9 +311,37 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
*/
void *__xa_erase(struct xarray *, unsigned long index);
void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
+void *__xa_cmpxchg(struct xarray *, unsigned long index, void *old,
+ void *entry, gfp_t);
void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);

+/**
+ * __xa_insert() - Store this entry in the XArray unless another entry is
+ * already present.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * If you would rather see the existing entry in the array, use __xa_cmpxchg().
+ * This function is for users who don't care what the entry is, only that
+ * one is present.
+ *
+ * Return: -EEXIST if another entry was present, 0 if the store succeeded,
+ * or another negative errno if a different error happened (eg -ENOMEM).
+ */
+static inline int __xa_insert(struct xarray *xa, unsigned long index,
+ void *entry, gfp_t gfp)
+{
+ void *curr = __xa_cmpxchg(xa, index, NULL, entry, gfp);
+ if (!curr)
+ return 0;
+ if (xa_is_err(curr))
+ return xa_err(curr);
+ return -EEXIST;
+}
+
/* Everything below here is the Advanced API. Proceed with caution. */

/*
diff --git a/lib/xarray.c b/lib/xarray.c
index 45b70e622bf1..d925a98fb9b8 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -928,6 +928,74 @@ void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
}
EXPORT_SYMBOL(__xa_store);

+/**
+ * xa_cmpxchg() - Conditionally replace an entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New value to place in array.
+ * @gfp: Memory allocation flags.
+ *
+ * If the entry at @index is the same as @old, replace it with @entry.
+ * If the return value is equal to @old, then the exchange was successful.
+ *
+ * Return: The old value at this index or xa_err() if an error happened.
+ */
+void *xa_cmpxchg(struct xarray *xa, unsigned long index,
+ void *old, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ xas_lock(&xas);
+ curr = xas_load(&xas);
+ if (curr == old)
+ xas_store(&xas, entry);
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(xa_cmpxchg);
+
+/**
+ * __xa_cmpxchg() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @old: Old value to test against.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * You must already be holding the xa_lock when calling this function.
+ * It will drop the lock if needed to allocate memory, and then reacquire
+ * it afterwards.
+ *
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *__xa_cmpxchg(struct xarray *xa, unsigned long index,
+ void *old, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ curr = xas_load(&xas);
+ if (curr == old)
+ xas_store(&xas, entry);
+ } while (__xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(__xa_cmpxchg);
+
/**
* __xa_set_tag() - Set this tag on this entry while locked.
* @xa: XArray.
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 5defd0b9f85c..d6a969d999d9 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -84,6 +84,15 @@ void check_xa_shrink(struct xarray *xa)
assert(xa_load(xa, 0) == xa_mk_value(0));
}

+void check_cmpxchg(struct xarray *xa)
+{
+ assert(xa_empty(xa));
+ assert(!xa_store(xa, 12345678, xa_mk_value(12345678), GFP_KERNEL));
+ assert(!xa_cmpxchg(xa, 5, xa_mk_value(5), NULL, GFP_KERNEL));
+ assert(xa_erase(xa, 12345678) == xa_mk_value(12345678));
+ assert(xa_empty(xa));
+}
+
void check_multi_store(struct xarray *xa)
{
unsigned long i, j, k;
@@ -149,6 +158,7 @@ void xarray_checks(void)
check_xa_shrink(&array);
item_kill_tree(&array);

+ check_cmpxchg(&array);
check_multi_store(&array);
item_kill_tree(&array);
}
--
2.15.1


2018-01-17 21:14:03

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 12/99] xarray: Add xa_for_each

From: Matthew Wilcox <[email protected]>

This iterator allows the user to efficiently walk a range of the array,
executing the loop body once for each entry in that range that matches
the filter. This commit also includes xa_find() and xa_find_after()
which are helper functions for xa_for_each() but may also be useful in
their own right.

In the xas family of functions, we also have xas_for_each(), xas_find(),
xas_next_entry(), xas_for_each_tag(), xas_find_tag(), xas_next_tag()
and xas_pause().
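
A sketch contrasting the two iterators (counting entries below an
arbitrary bound; count_present() and count_present_xas() are
illustrative):

	/* Convenience API: no locking required of the caller. */
	static unsigned int count_present(struct xarray *xa)
	{
		unsigned long index = 0;
		unsigned int count = 0;
		void *entry;

		xa_for_each(xa, entry, index, 1023, XA_PRESENT)
			count++;
		return count;
	}

	/* Advanced API: caller holds the RCU lock, but walks in O(n). */
	static unsigned int count_present_xas(struct xarray *xa)
	{
		XA_STATE(xas, xa, 0);
		unsigned int count = 0;
		void *entry;

		rcu_read_lock();
		xas_for_each(&xas, entry, 1023)
			count++;
		rcu_read_unlock();
		return count;
	}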

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 171 +++++++++++++++++++++
lib/xarray.c | 272 +++++++++++++++++++++++++++++++++
tools/testing/radix-tree/test.c | 13 ++
tools/testing/radix-tree/test.h | 1 +
tools/testing/radix-tree/xarray-test.c | 122 +++++++++++++++
5 files changed, 579 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index fc9ab3b13e60..fcd7ef68933a 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -215,6 +215,10 @@ void *xa_cmpxchg(struct xarray *, unsigned long index,
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
+void *xa_find(struct xarray *xa, unsigned long *index,
+ unsigned long max, xa_tag_t) __attribute__((nonnull(2)));
+void *xa_find_after(struct xarray *xa, unsigned long *index,
+ unsigned long max, xa_tag_t) __attribute__((nonnull(2)));

/**
* xa_init() - Initialise an empty XArray.
@@ -266,6 +270,33 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
return xa->xa_flags & XA_FLAGS_TAG(tag);
}

+/**
+ * xa_for_each() - Iterate over a portion of an XArray.
+ * @xa: XArray.
+ * @entry: Entry retrieved from array.
+ * @index: Index of @entry.
+ * @max: Maximum index to retrieve from array.
+ * @filter: Selection criterion.
+ *
+ * Initialise @index to the minimum index you want to retrieve from
+ * the array. During the iteration, @entry will have the value of the
+ * entry stored in @xa at @index. The iteration will skip all entries in
+ * the array which do not match @filter. You may modify @index during the
+ * iteration if you want to skip or reprocess indices. It is safe to modify
+ * the array during the iteration. At the end of the iteration, @entry will
+ * be set to NULL and @index will have a value less than or equal to max.
+ *
+ * xa_for_each() is O(n.log(n)) while xas_for_each() is O(n). You have
+ * to handle your own locking with xas_for_each(), and if you have to unlock
+ * after each iteration, it will also end up being O(n.log(n)). xa_for_each()
+ * will spin if it hits a retry entry; if you intend to see retry entries,
+ * you should use the xas_for_each() iterator instead. The xas_for_each()
+ * iterator will expand into more inline code than xa_for_each().
+ */
+#define xa_for_each(xa, entry, index, max, filter) \
+ for (entry = xa_find(xa, &index, max, filter); entry; \
+ entry = xa_find_after(xa, &index, max, filter))
+
/**
* xa_insert() - Store this entry in the XArray unless another entry is
* already present.
@@ -620,6 +651,12 @@ static inline bool xas_valid(const struct xa_state *xas)
return !xas_invalid(xas);
}

+/* True if the pointer is something other than a node */
+static inline bool xas_not_node(struct xa_node *node)
+{
+ return ((unsigned long)node & 3) || !node;
+}
+
/* True if the node represents head-of-tree, RESTART or BOUNDS */
static inline bool xas_top(struct xa_node *node)
{
@@ -648,13 +685,16 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
void *xas_load(struct xa_state *);
void *xas_store(struct xa_state *, void *entry);
void *xas_create(struct xa_state *);
+void *xas_find(struct xa_state *, unsigned long max);

bool xas_get_tag(const struct xa_state *, xa_tag_t);
void xas_set_tag(const struct xa_state *, xa_tag_t);
void xas_clear_tag(const struct xa_state *, xa_tag_t);
+void *xas_find_tag(struct xa_state *, unsigned long max, xa_tag_t);
void xas_init_tags(const struct xa_state *);

bool xas_nomem(struct xa_state *, gfp_t);
+void xas_pause(struct xa_state *);

/**
* xas_reload() - Refetch an entry from the xarray.
@@ -727,6 +767,137 @@ static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
xas->xa_update = update;
}

+/* Skip over any of these entries when iterating */
+static inline bool xa_iter_skip(const void *entry)
+{
+ return unlikely(!entry ||
+ (xa_is_internal(entry) && entry < XA_RETRY_ENTRY));
+}
+
+/**
+ * xas_next_entry() - Advance iterator to next present entry.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ *
+ * xas_next_entry() is an inline function to optimise xarray traversal for
+ * speed. It is equivalent to calling xas_find(), and will call xas_find()
+ * for all the hard cases.
+ *
+ * Return: The next present entry after the one currently referred to by @xas.
+ */
+static inline void *xas_next_entry(struct xa_state *xas, unsigned long max)
+{
+ struct xa_node *node = xas->xa_node;
+ void *entry;
+
+ if (unlikely(xas_not_node(node) || node->shift))
+ return xas_find(xas, max);
+
+ do {
+ if (unlikely(xas->xa_index >= max))
+ return xas_find(xas, max);
+ if (unlikely(xas->xa_offset == XA_CHUNK_MASK))
+ return xas_find(xas, max);
+ xas->xa_index++;
+ xas->xa_offset++;
+ entry = xa_entry(xas->xa, node, xas->xa_offset);
+ } while (xa_iter_skip(entry));
+
+ return entry;
+}
+
+/* Private */
+static inline unsigned int xas_find_chunk(struct xa_state *xas, bool advance,
+ xa_tag_t tag)
+{
+ unsigned long *addr = xas->xa_node->tags[(__force unsigned)tag];
+ unsigned int offset = xas->xa_offset;
+
+ if (advance)
+ offset++;
+ if (XA_CHUNK_SIZE == BITS_PER_LONG) {
+ unsigned long data = *addr & (~0UL << offset);
+ if (data)
+ return __ffs(data);
+ return XA_CHUNK_SIZE;
+ }
+
+ return find_next_bit(addr, XA_CHUNK_SIZE, offset);
+}
+
+/**
+ * xas_next_tag() - Advance iterator to next tagged entry.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ * @tag: Tag to search for.
+ *
+ * xas_next_tag() is an inline function to optimise xarray traversal for
+ * speed. It is equivalent to calling xas_find_tag(), and will call
+ * xas_find_tag() for all the hard cases.
+ *
+ * Return: The next tagged entry after the one currently referred to by @xas.
+ */
+static inline void *xas_next_tag(struct xa_state *xas, unsigned long max,
+ xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset;
+
+ if (unlikely(xas_not_node(node) || node->shift))
+ return xas_find_tag(xas, max, tag);
+ offset = xas_find_chunk(xas, true, tag);
+ xas->xa_offset = offset;
+ xas->xa_index = (xas->xa_index & ~XA_CHUNK_MASK) + offset;
+ if (xas->xa_index > max)
+ return NULL;
+ if (offset == XA_CHUNK_SIZE)
+ return xas_find_tag(xas, max, tag);
+ return xa_entry(xas->xa, node, offset);
+}
+
+/*
+ * If iterating while holding a lock, drop the lock and reschedule
+ * every %XA_CHECK_SCHED loops.
+ */
+enum {
+ XA_CHECK_SCHED = 4096,
+};
+
+/**
+ * xas_for_each() - Iterate over a range of an XArray
+ * @xas: XArray operation state.
+ * @entry: Entry retrieved from array.
+ * @max: Maximum index to retrieve from array.
+ *
+ * The loop body will be executed for each entry present in the xarray
+ * between the current xas position and @max. @entry will be set to
+ * the entry retrieved from the xarray. It is safe to delete entries
+ * from the array in the loop body. You should hold either the RCU lock
+ * or the xa_lock while iterating. If you need to drop the lock, call
+ * xas_pause() first.
+ */
+#define xas_for_each(xas, entry, max) \
+ for (entry = xas_find(xas, max); entry; \
+ entry = xas_next_entry(xas, max))
+
+/**
+ * xas_for_each_tag() - Iterate over a range of an XArray
+ * @xas: XArray operation state.
+ * @entry: Entry retrieved from array.
+ * @max: Maximum index to retrieve from array.
+ * @tag: Tag to search for.
+ *
+ * The loop body will be executed for each tagged entry in the xarray
+ * between the current xas position and @max. @entry will be set to
+ * the entry retrieved from the xarray. It is safe to delete entries
+ * from the array in the loop body. You should hold either the RCU lock
+ * or the xa_lock while iterating. If you need to drop the lock, call
+ * xas_pause() first.
+ */
+#define xas_for_each_tag(xas, entry, max, tag) \
+ for (entry = xas_find_tag(xas, max, tag); entry; \
+ entry = xas_next_tag(xas, max, tag))
+
/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
void xas_destroy(struct xa_state *);

diff --git a/lib/xarray.c b/lib/xarray.c
index d925a98fb9b8..3e6be0a07525 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -91,6 +91,11 @@ static unsigned int get_offset(unsigned long index, struct xa_node *node)
return (index >> node->shift) & XA_CHUNK_MASK;
}

+static void xas_set_offset(struct xa_state *xas)
+{
+ xas->xa_offset = get_offset(xas->xa_index, xas->xa_node);
+}
+
/* move the index either forwards (find) or backwards (sibling slot) */
static void xas_move_index(struct xa_state *xas, unsigned long offset)
{
@@ -99,6 +104,12 @@ static void xas_move_index(struct xa_state *xas, unsigned long offset)
xas->xa_index += offset << shift;
}

+static void xas_advance(struct xa_state *xas)
+{
+ xas->xa_offset++;
+ xas_move_index(xas, xas->xa_offset);
+}
+
static void *set_bounds(struct xa_state *xas)
{
xas->xa_node = XAS_BOUNDS;
@@ -791,6 +802,191 @@ void xas_init_tags(const struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_init_tags);

+/**
+ * xas_pause() - Pause a walk to drop a lock.
+ * @xas: XArray operation state.
+ *
+ * Some users need to pause a walk and drop the lock they're holding in
+ * order to yield to a higher priority thread or carry out an operation
+ * on an entry. Those users should call this function before they drop
+ * the lock. It resets the @xas to be suitable for the next iteration
+ * of the loop after the user has reacquired the lock. If most entries
+ * found during a walk require you to call xas_pause(), the xa_for_each()
+ * iterator may be more appropriate.
+ *
+ * Note that xas_pause() only works for forward iteration. If a user needs
+ * to pause a reverse iteration, we will need a xas_pause_rev().
+ */
+void xas_pause(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (xas_invalid(xas))
+ return;
+
+ if (node) {
+ unsigned int offset = xas->xa_offset;
+ while (++offset < XA_CHUNK_SIZE) {
+ if (!xa_is_sibling(xa_entry(xas->xa, node, offset)))
+ break;
+ }
+ xas->xa_index += (offset - xas->xa_offset) << node->shift;
+ } else {
+ xas->xa_index++;
+ }
+ xas->xa_node = XAS_RESTART;
+}
+EXPORT_SYMBOL_GPL(xas_pause);
+
+/**
+ * xas_find() - Find the next present entry in the XArray.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ *
+ * If the xas has not yet been walked to an entry, return the entry
+ * which has an index >= xas.xa_index. If it has been walked, the entry
+ * currently being pointed at has been processed, and so we move to the
+ * next entry.
+ *
+ * If no entry is found and the array is smaller than @max, the iterator
+ * is set to the smallest index not yet in the array. This allows @xas
+ * to be immediately passed to xas_create().
+ *
+ * Return: The entry, if found, otherwise NULL.
+ */
+void *xas_find(struct xa_state *xas, unsigned long max)
+{
+ void *entry;
+
+ if (xas_error(xas))
+ return NULL;
+
+ if (!xas->xa_node) {
+ xas->xa_index = 1;
+ return set_bounds(xas);
+ } else if (xas_top(xas->xa_node)) {
+ entry = xas_load(xas);
+ if (entry || xas_not_node(xas->xa_node))
+ return entry;
+ }
+
+ xas_advance(xas);
+
+ while (xas->xa_node && (xas->xa_index <= max)) {
+ if (unlikely(xas->xa_offset == XA_CHUNK_SIZE)) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ continue;
+ }
+
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (xa_is_node(entry)) {
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = 0;
+ continue;
+ }
+ if (!xa_iter_skip(entry))
+ return entry;
+
+ xas_advance(xas);
+ }
+
+ if (!xas->xa_node)
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(xas_find);
+
+/**
+ * xas_find_tag() - Find the next tagged entry in the XArray.
+ * @xas: XArray operation state.
+ * @max: Highest index to return.
+ * @tag: Tag number to search for.
+ *
+ * If the xas has not yet been walked to an entry, return the tagged entry
+ * which has an index >= xas.xa_index. If it has been walked, the entry
+ * currently being pointed at has been processed, and so we move to the
+ * next tagged entry.
+ *
+ * If no tagged entry is found and the array is smaller than @max, @xas is
+ * set to the bounds state and xas->xa_index is set to the smallest index
+ * not yet in the array. This allows @xas to be immediately passed to
+ * xas_create().
+ *
+ * Return: The entry, if found, otherwise %NULL.
+ */
+void *xas_find_tag(struct xa_state *xas, unsigned long max, xa_tag_t tag)
+{
+ bool advance = true;
+ unsigned int offset;
+ void *entry;
+
+ if (xas_error(xas))
+ return NULL;
+
+ if (!xas->xa_node) {
+ xas->xa_index = 1;
+ goto out;
+ } else if (xas_top(xas->xa_node)) {
+ advance = false;
+ entry = xa_head(xas->xa);
+ if (xas->xa_index > max_index(entry))
+ goto out;
+ if (!xa_is_node(entry)) {
+ if (xa_tagged(xas->xa, tag)) {
+ xas->xa_node = NULL;
+ return entry;
+ }
+ xas->xa_index = 1;
+ goto out;
+ }
+ xas->xa_node = xa_to_node(entry);
+ xas->xa_offset = xas->xa_index >> xas->xa_node->shift;
+ }
+
+ while (xas->xa_index <= max) {
+ if (unlikely(xas->xa_offset == XA_CHUNK_SIZE)) {
+ xas->xa_offset = xas->xa_node->offset + 1;
+ xas->xa_node = xa_parent(xas->xa, xas->xa_node);
+ if (!xas->xa_node)
+ break;
+ advance = false;
+ continue;
+ }
+
+ if (!advance) {
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (xa_is_sibling(entry)) {
+ xas->xa_offset = xa_to_sibling(entry);
+ xas_move_index(xas, xas->xa_offset);
+ }
+ }
+
+ offset = xas_find_chunk(xas, advance, tag);
+ if (offset > xas->xa_offset) {
+ advance = false;
+ xas_move_index(xas, offset);
+ xas->xa_offset = offset;
+ if (offset == XA_CHUNK_SIZE)
+ continue;
+ if (xas->xa_index > max)
+ break;
+ }
+
+ entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
+ if (!xa_is_node(entry))
+ return entry;
+ xas->xa_node = xa_to_node(entry);
+ xas_set_offset(xas);
+ }
+
+ out:
+ if (!xas->xa_node)
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(xas_find_tag);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -1096,6 +1292,82 @@ void xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
}
EXPORT_SYMBOL(xa_clear_tag);

+/**
+ * xa_find() - Search the XArray for an entry.
+ * @xa: XArray.
+ * @indexp: Pointer to an index.
+ * @max: Maximum index to search to.
+ * @filter: Selection criterion.
+ *
+ * Finds the entry in @xa which matches the @filter, and has the lowest
+ * index that is at least @indexp and no more than @max.
+ * If an entry is found, @indexp is updated to be the index of the entry.
+ * This function is protected by the RCU read lock, so it may not find
+ * entries which are being simultaneously added. It will not return an
+ * %XA_RETRY_ENTRY; if you need to see retry entries, use xas_find().
+ *
+ * Return: The entry, if found, otherwise NULL.
+ */
+void *xa_find(struct xarray *xa, unsigned long *indexp,
+ unsigned long max, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, *indexp);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ entry = xas_find_tag(&xas, max, filter);
+ else
+ entry = xas_find(&xas, max);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ if (entry)
+ *indexp = xas.xa_index;
+ return entry;
+}
+EXPORT_SYMBOL(xa_find);
+
+/**
+ * xa_find_after() - Search the XArray for a present entry.
+ * @xa: XArray.
+ * @indexp: Pointer to an index.
+ * @max: Maximum index to search to.
+ * @filter: Selection criterion.
+ *
+ * Finds the entry in @xa which matches the @filter and has the lowest
+ * index that is above @indexp and no more than @max.
+ * If an entry is found, @indexp is updated to be the index of the entry.
+ * This function is protected by the RCU read lock, so it may miss entries
+ * which are being simultaneously added. It will not return an
+ * %XA_RETRY_ENTRY; if you need to see retry entries, use xas_find().
+ *
+ * Return: The pointer, if found, otherwise NULL.
+ */
+void *xa_find_after(struct xarray *xa, unsigned long *indexp,
+ unsigned long max, xa_tag_t filter)
+{
+ XA_STATE(xas, xa, *indexp + 1);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ if ((__force unsigned int)filter < XA_MAX_TAGS)
+ entry = xas_find_tag(&xas, max, filter);
+ else
+ entry = xas_find(&xas, max);
+ if (*indexp >= xas.xa_index)
+ entry = xas_next_entry(&xas, max);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ if (entry)
+ *indexp = xas.xa_index;
+ return entry;
+}
+EXPORT_SYMBOL(xa_find_after);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index f151588d04a0..e9b4a4ed9bf5 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -244,6 +244,19 @@ unsigned long find_item(struct radix_tree_root *root, void *item)
return found;
}

+static LIST_HEAD(item_nodes);
+
+void item_update_node(struct xa_node *node)
+{
+ if (node->count) {
+ if (list_empty(&node->private_list))
+ list_add(&node->private_list, &item_nodes);
+ } else {
+ if (!list_empty(&node->private_list))
+ list_del_init(&node->private_list);
+ }
+}
+
static int verify_node(struct radix_tree_node *slot, unsigned int tag,
int tagged)
{
diff --git a/tools/testing/radix-tree/test.h b/tools/testing/radix-tree/test.h
index ffd162645c11..f97cacd1422d 100644
--- a/tools/testing/radix-tree/test.h
+++ b/tools/testing/radix-tree/test.h
@@ -30,6 +30,7 @@ void item_gang_check_present(struct radix_tree_root *root,
void item_full_scan(struct radix_tree_root *root, unsigned long start,
unsigned long nr, int chunk);
void item_kill_tree(struct radix_tree_root *root);
+void item_update_node(struct xa_node *node);

int tag_tagged_items(struct radix_tree_root *, pthread_mutex_t *,
unsigned long start, unsigned long end, unsigned batch,
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index d6a969d999d9..26b25be81656 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -49,6 +49,29 @@ void check_xa_tag(struct xarray *xa)
assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
}

+void check_xas_retry(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+
+ assert(xas_find(&xas, ULONG_MAX) == xa_mk_value(0));
+ xa_erase(xa, 1);
+ assert(xa_is_retry(xas_reload(&xas)));
+ assert(!xas_retry(&xas, NULL));
+ assert(!xas_retry(&xas, xa_mk_value(0)));
+ assert(xas_retry(&xas, XA_RETRY_ENTRY));
+ assert(xas.xa_node == XAS_RESTART);
+ assert(xas_next_entry(&xas, ULONG_MAX) == xa_mk_value(0));
+ assert(xas.xa_node == NULL);
+
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+ assert(xa_is_internal(xas_reload(&xas)));
+ xas.xa_node = XAS_RESTART;
+ assert(xas_next_entry(&xas, ULONG_MAX) == xa_mk_value(0));
+}
+
void check_xa_load(struct xarray *xa)
{
unsigned long i, j;
@@ -142,6 +165,98 @@ void check_multi_store(struct xarray *xa)
}
}

+void check_multi_find(struct xarray *xa)
+{
+ unsigned long index;
+ xa_store_order(xa, 12, 2, xa_mk_value(12), GFP_KERNEL);
+ xa_store(xa, 16, xa_mk_value(16), GFP_KERNEL);
+
+ index = 0;
+ assert(xa_find(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(12));
+ assert(index == 12);
+ index = 13;
+ assert(xa_find(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(12));
+ assert(index >= 12 && index < 16);
+ assert(xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT) == xa_mk_value(16));
+ assert(index == 16);
+ xa_erase(xa, 12);
+ xa_erase(xa, 16);
+ assert(xa_empty(xa));
+}
+
+void check_find(struct xarray *xa)
+{
+ unsigned long i, j, k;
+
+ assert(xa_empty(xa));
+
+ for (i = 0; i < 100; i++) {
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ xa_set_tag(xa, i, XA_TAG_0);
+ for (j = 0; j < i; j++) {
+ xa_store(xa, j, xa_mk_value(j), GFP_KERNEL);
+ xa_set_tag(xa, j, XA_TAG_0);
+ for (k = 0; k < 100; k++) {
+ unsigned long index = k;
+ void *entry = xa_find(xa, &index, ULONG_MAX,
+ XA_PRESENT);
+ if (k <= j)
+ assert(index == j);
+ else if (k <= i)
+ assert(index == i);
+ else
+ assert(entry == NULL);
+
+ index = k;
+ entry = xa_find(xa, &index, ULONG_MAX,
+ XA_TAG_0);
+ if (k <= j)
+ assert(index == j);
+ else if (k <= i)
+ assert(index == i);
+ else
+ assert(entry == NULL);
+ }
+ xa_erase(xa, j);
+ }
+ xa_erase(xa, i);
+ }
+ assert(xa_empty(xa));
+ check_multi_find(xa);
+}
+
+void check_xas_delete(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 0);
+ void *entry;
+ unsigned long i, j;
+
+ xas_set_update(&xas, item_update_node);
+ for (i = 0; i < 200; i++) {
+ for (j = i; j < 2 * i + 17; j++) {
+ xas_set(&xas, j);
+ do {
+ xas_store(&xas, xa_mk_value(j));
+ } while (xas_nomem(&xas, GFP_KERNEL));
+ }
+
+ xas_set(&xas, ULONG_MAX);
+ do {
+ xas_store(&xas, xa_mk_value(0));
+ } while (xas_nomem(&xas, GFP_KERNEL));
+ xas_store(&xas, NULL);
+
+ xas_set(&xas, 0);
+ j = i;
+ xas_for_each(&xas, entry, ULONG_MAX) {
+ assert(entry == xa_mk_value(j));
+ xas_store(&xas, NULL);
+ j++;
+ }
+ assert(xa_empty(xa));
+ }
+}
+
void xarray_checks(void)
{
DEFINE_XARRAY(array);
@@ -152,6 +267,9 @@ void xarray_checks(void)
check_xa_tag(&array);
item_kill_tree(&array);

+ check_xas_retry(&array);
+ item_kill_tree(&array);
+
check_xa_load(&array);
item_kill_tree(&array);

@@ -161,6 +279,10 @@ void xarray_checks(void)
check_cmpxchg(&array);
check_multi_store(&array);
item_kill_tree(&array);
+
+ check_find(&array);
+ check_xas_delete(&array);
+ item_kill_tree(&array);
}

int __weak main(void)
--
2.15.1


2018-01-17 21:14:39

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 08/99] xarray: Add xa_load

From: Matthew Wilcox <[email protected]>

This first function in the XArray API brings with it a lot of support
infrastructure. The advanced API is based around the xa_state which is
a more capable version of the radix_tree_iter.

As the test-suite demonstrates, it is possible to use the xarray and
radix tree APIs on the same data structure.
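
The simplest use looks like this (a sketch; xa_store() arrives later
in the series and appears here only to populate the array):

	static void load_example(struct xarray *xa)
	{
		xa_store(xa, 5, xa_mk_value(5), GFP_KERNEL);

		/* Lookups are RCU-safe and need no lock from the caller. */
		WARN_ON(xa_load(xa, 5) != xa_mk_value(5));
		WARN_ON(xa_load(xa, 6) != NULL);	/* empty slot */
	}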

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 282 ++++++++++++++++++++++++++++
lib/radix-tree.c | 43 -----
lib/xarray.c | 190 +++++++++++++++++++
tools/testing/radix-tree/.gitignore | 1 +
tools/testing/radix-tree/Makefile | 7 +-
tools/testing/radix-tree/linux/kernel.h | 1 +
tools/testing/radix-tree/linux/radix-tree.h | 1 -
tools/testing/radix-tree/linux/rcupdate.h | 1 +
tools/testing/radix-tree/linux/xarray.h | 1 +
tools/testing/radix-tree/xarray-test.c | 56 ++++++
10 files changed, 537 insertions(+), 46 deletions(-)
create mode 100644 tools/testing/radix-tree/xarray-test.c

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 3d5f7804ef45..54c694e5c33f 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -12,6 +12,8 @@
#include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/kconfig.h>
+#include <linux/kernel.h>
+#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>

@@ -30,6 +32,10 @@
*
* 0-62: Sibling entries
* 256: Retry entry
+ *
+ * Errors are also represented as internal entries, but use the negative
+ * space (-4094 to -2). They're never stored in the slots array; only
+ * returned by the normal API.
*/

#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
@@ -101,6 +107,40 @@ static inline bool xa_is_internal(const void *entry)
return ((unsigned long)entry & 3) == 2;
}

+/**
+ * xa_is_err() - Report whether an XArray operation returned an error
+ * @entry: Result from calling an XArray function
+ *
+ * If an XArray operation cannot complete an operation, it will return
+ * a special value indicating an error. This function tells you
+ * whether an error occurred; xa_err() tells you which error occurred.
+ *
+ * Return: %true if the entry indicates an error.
+ */
+static inline bool xa_is_err(const void *entry)
+{
+ return unlikely(xa_is_internal(entry));
+}
+
+/**
+ * xa_err() - Turn an XArray result into an errno.
+ * @entry: Result from calling an XArray function.
+ *
+ * If an XArray operation cannot complete an operation, it will return
+ * a special pointer value which encodes an errno. This function extracts
+ * the errno from the pointer value, or returns 0 if the pointer does not
+ * represent an errno.
+ *
+ * Return: A negative errno or 0.
+ */
+static inline int xa_err(void *entry)
+{
+ /* xa_to_internal() would not do sign extension. */
+ if (xa_is_err(entry))
+ return (long)entry >> 2;
+ return 0;
+}
+
/**
* struct xarray - The anchor of the XArray.
* @xa_lock: Lock that protects the contents of the XArray.
@@ -146,6 +186,7 @@ struct xarray {
struct xarray name = XARRAY_INIT_FLAGS(name, flags)

void xa_init_flags(struct xarray *, gfp_t flags);
+void *xa_load(struct xarray *, unsigned long index);

/**
* xa_init() - Initialise an empty XArray.
@@ -212,6 +253,62 @@ struct xa_node {
unsigned long tags[XA_MAX_TAGS][XA_TAG_LONGS];
};

+#ifdef XA_DEBUG
+void xa_dump(const struct xarray *);
+void xa_dump_node(const struct xa_node *);
+#define XA_BUG_ON(xa, x) do { \
+ if (x) \
+ xa_dump(xa); \
+ BUG_ON(x); \
+ } while (0)
+#define XA_NODE_BUG_ON(node, x) do { \
+ if ((x) && (node)) \
+ xa_dump_node(node); \
+ BUG_ON(x); \
+ } while (0)
+#else
+#define XA_BUG_ON(xa, x) do { } while (0)
+#define XA_NODE_BUG_ON(node, x) do { } while (0)
+#endif
+
+/* Private */
+static inline void *xa_head(struct xarray *xa)
+{
+ return rcu_dereference_check(xa->xa_head,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_head_locked(struct xarray *xa)
+{
+ return rcu_dereference_protected(xa->xa_head,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_entry(struct xarray *xa,
+ const struct xa_node *node, unsigned int offset)
+{
+ XA_NODE_BUG_ON(node, offset >= XA_CHUNK_SIZE);
+ return rcu_dereference_check(node->slots[offset],
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline void *xa_entry_locked(struct xarray *xa,
+ const struct xa_node *node, unsigned int offset)
+{
+ XA_NODE_BUG_ON(node, offset >= XA_CHUNK_SIZE);
+ return rcu_dereference_protected(node->slots[offset],
+ lockdep_is_held(&xa->xa_lock));
+}
+
+/* Private */
+static inline struct xa_node *xa_to_node(const void *entry)
+{
+ return (struct xa_node *)((unsigned long)entry - 2);
+}
+
/* Private */
static inline bool xa_is_node(const void *entry)
{
@@ -245,4 +342,189 @@ static inline bool xa_is_sibling(const void *entry)

#define XA_RETRY_ENTRY xa_mk_internal(256)

+/**
+ * xa_is_retry() - Is the entry a retry entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * Return: %true if the entry is a retry entry.
+ */
+static inline bool xa_is_retry(const void *entry)
+{
+ return unlikely(entry == XA_RETRY_ENTRY);
+}
+
+/**
+ * typedef xa_update_node_t - A callback function from the XArray.
+ * @node: The node which is being processed
+ *
+ * This function is called every time the XArray updates the count of
+ * present and value entries in a node. It allows advanced users to
+ * maintain the private_list in the node.
+ */
+typedef void (*xa_update_node_t)(struct xa_node *node);
+
+/*
+ * The xa_state is opaque to its users. It contains various pieces
+ * of state involved in the current operation on the XArray. It should be
+ * declared on the stack and passed between the various internal routines.
+ * The various elements in it should not be accessed directly, but only
+ * through the provided accessor functions. The documentation below is for
+ * the benefit of those working on the code, not for users of the XArray.
+ *
+ * @xa_node usually points to the xa_node containing the slot we're operating
+ * on (and @xa_offset is the offset in the slots array). If there is a
+ * single entry in the array at index 0, there are no allocated xa_nodes to
+ * point to, and so we store %NULL in @xa_node. @xa_node is set to
+ * the value %XAS_RESTART if the xa_state is not walked to the correct
+ * position in the tree of nodes for this operation. If an error occurs
+ * during an operation, it is set to an %XAS_ERROR value. If we run off the
+ * end of the allocated nodes, it is set to %XAS_BOUNDS.
+ */
+struct xa_state {
+ struct xarray *xa;
+ unsigned long xa_index;
+ unsigned char xa_shift;
+ unsigned char xa_sibs;
+ unsigned char xa_offset;
+ unsigned char xa_pad; /* Helps gcc generate better code */
+ struct xa_node *xa_node;
+ struct xa_node *xa_alloc;
+ xa_update_node_t xa_update;
+};
+
+/*
+ * We encode errnos in the xas->xa_node. If an error has happened, we need to
+ * drop the lock to fix it, and once we've done so the xa_state is invalid.
+ */
+#define XA_ERROR(errno) ((struct xa_node *)(((long)errno << 2) | 2UL))
+#define XAS_BOUNDS ((struct xa_node *)1UL)
+#define XAS_RESTART ((struct xa_node *)3UL)
+
+#define __XA_STATE(array, index) { \
+ .xa = array, \
+ .xa_index = index, \
+ .xa_shift = 0, \
+ .xa_sibs = 0, \
+ .xa_offset = 0, \
+ .xa_pad = 0, \
+ .xa_node = XAS_RESTART, \
+ .xa_alloc = NULL, \
+ .xa_update = NULL \
+}
+
+/**
+ * XA_STATE() - Declare an XArray operation state.
+ * @name: Name of this operation state (usually xas).
+ * @array: Array to operate on.
+ * @index: Initial index of interest.
+ *
+ * Declare and initialise an xa_state on the stack.
+ */
+#define XA_STATE(name, array, index) \
+ struct xa_state name = __XA_STATE(array, index)
+
+#define xas_tagged(xas, tag) xa_tagged((xas)->xa, (tag))
+#define xas_trylock(xas) xa_trylock((xas)->xa)
+#define xas_lock(xas) xa_lock((xas)->xa)
+#define xas_unlock(xas) xa_unlock((xas)->xa)
+#define xas_lock_bh(xas) xa_lock_bh((xas)->xa)
+#define xas_unlock_bh(xas) xa_unlock_bh((xas)->xa)
+#define xas_lock_irq(xas) xa_lock_irq((xas)->xa)
+#define xas_unlock_irq(xas) xa_unlock_irq((xas)->xa)
+#define xas_lock_irqsave(xas, flags) \
+ xa_lock_irqsave((xas)->xa, flags)
+#define xas_unlock_irqrestore(xas, flags) \
+ xa_unlock_irqrestore((xas)->xa, flags)
+
+/**
+ * xas_error() - Return an errno stored in the xa_state.
+ * @xas: XArray operation state.
+ *
+ * Return: 0 if no error has been noted. A negative errno if one has.
+ */
+static inline int xas_error(const struct xa_state *xas)
+{
+ return xa_err(xas->xa_node);
+}
+
+/**
+ * xas_set_err() - Note an error in the xa_state.
+ * @xas: XArray operation state.
+ * @err: Negative error number.
+ *
+ * Only call this function with a negative @err; zero or positive errors
+ * will probably not behave the way you think they should. If you want
+ * to clear the error from an xa_state, call xas_retry(xas, XA_RETRY_ENTRY).
+ */
+static inline void xas_set_err(struct xa_state *xas, long err)
+{
+ xas->xa_node = XA_ERROR(err);
+}
+
+/**
+ * xas_invalid() - Is the xas in a retry or error state?
+ * @xas: XArray operation state.
+ *
+ * Return: %true if the xas cannot be used for operations.
+ */
+static inline bool xas_invalid(const struct xa_state *xas)
+{
+ return (unsigned long)xas->xa_node & 3;
+}
+
+/**
+ * xas_valid() - Is the xas a valid cursor into the array?
+ * @xas: XArray operation state.
+ *
+ * Return: %true if the xas can be used for operations.
+ */
+static inline bool xas_valid(const struct xa_state *xas)
+{
+ return !xas_invalid(xas);
+}
+
+/**
+ * xas_retry() - Handle a retry entry.
+ * @xas: XArray operation state.
+ * @entry: Entry from xarray.
+ *
+ * An RCU-protected read may see a retry entry as a side-effect of a
+ * simultaneous modification. This function sets up the @xas to retry
+ * the walk from the head of the array.
+ *
+ * Return: true if the operation needs to be retried.
+ */
+static inline bool xas_retry(struct xa_state *xas, const void *entry)
+{
+ if (!xa_is_retry(entry))
+ return false;
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+
+void *xas_load(struct xa_state *);
+
+/**
+ * xas_reload() - Refetch an entry from the xarray.
+ * @xas: XArray operation state.
+ *
+ * Use this function to check that a previously loaded entry still has
+ * the same value. This is useful for the lockless pagecache lookup where
+ * we walk the array with only the RCU lock to protect us, lock the page,
+ * then check that the page hasn't moved since we looked it up.
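+ *
+ * A sketch of that pattern; lock_object() stands in for taking
+ * whatever lock protects the object we found:
+ *
+ *	rcu_read_lock();
+ *	entry = xas_load(&xas);
+ *	lock_object(entry);
+ *	if (xas_reload(&xas) != entry)
+ *		goto retry;
+ *	rcu_read_unlock();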
+ *
+ * The caller guarantees that @xas is still valid. If it may be in an
+ * error or restart state, call xas_load() instead.
+ *
+ * Return: The entry at this location in the xarray.
+ */
+static inline void *xas_reload(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ if (node)
+ return xa_entry(xas->xa, node, xas->xa_offset);
+ return xa_head(xas->xa);
+}
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 74a6ddd1d6ad..be9ace9f9145 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -255,49 +255,6 @@ static unsigned long next_index(unsigned long index,
}

#ifndef __KERNEL__
-static void dump_node(struct radix_tree_node *node, unsigned long index)
-{
- unsigned long i;
-
- pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d nr_values %d\n",
- node, node->offset, index, index | node_maxindex(node),
- node->parent,
- node->tags[0][0], node->tags[1][0], node->tags[2][0],
- node->shift, node->count, node->nr_values);
-
- for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
- unsigned long first = index | (i << node->shift);
- unsigned long last = first | ((1UL << node->shift) - 1);
- void *entry = node->slots[i];
- if (!entry)
- continue;
- if (entry == RADIX_TREE_RETRY) {
- pr_debug("radix retry offset %ld indices %lu-%lu parent %p\n",
- i, first, last, node);
- } else if (!radix_tree_is_internal_node(entry)) {
- pr_debug("radix entry %p offset %ld indices %lu-%lu parent %p\n",
- entry, i, first, last, node);
- } else if (xa_is_sibling(entry)) {
- pr_debug("radix sblng %p offset %ld indices %lu-%lu parent %p val %p\n",
- entry, i, first, last, node,
- node->slots[xa_to_sibling(entry)]);
- } else {
- dump_node(entry_to_node(entry), first);
- }
- }
-}
-
-/* For debug */
-static void radix_tree_dump(struct radix_tree_root *root)
-{
- pr_debug("radix root: %p xa_head %p tags %x\n",
- root, root->xa_head,
- root->xa_flags >> ROOT_TAG_SHIFT);
- if (!radix_tree_is_internal_node(root->xa_head))
- return;
- dump_node(entry_to_node(root->xa_head), 0);
-}
-
static void dump_ida_node(void *entry, unsigned long index)
{
unsigned long i;
diff --git a/lib/xarray.c b/lib/xarray.c
index c56b0f858e10..83b9c25de415 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -24,6 +24,100 @@
* @entry refers to something stored in a slot in the xarray
*/

+/* extracts the offset within this node from the index */
+static unsigned int get_offset(unsigned long index, struct xa_node *node)
+{
+ return (index >> node->shift) & XA_CHUNK_MASK;
+}
+
+/* move the index either forwards (find) or backwards (sibling slot) */
+static void xas_move_index(struct xa_state *xas, unsigned long offset)
+{
+ unsigned int shift = xas->xa_node->shift;
+ xas->xa_index &= ~XA_CHUNK_MASK << shift;
+ xas->xa_index += offset << shift;
+}
+
+static void *set_bounds(struct xa_state *xas)
+{
+ xas->xa_node = XAS_BOUNDS;
+ return NULL;
+}
+
+/*
+ * Starts a walk. If the @xas is already valid, we assume that it's on
+ * the right path and just return where we've got to. If we're in an
+ * error state, return NULL. If the index is outside the current scope
+ * of the xarray, set @xas->xa_node to XAS_BOUNDS and return NULL.
+ * Otherwise set @xas->xa_node to NULL and return the current head of
+ * the array.
+ */
+static void *xas_start(struct xa_state *xas)
+{
+ void *entry;
+
+ if (xas_valid(xas))
+ return xas_reload(xas);
+ if (xas_error(xas))
+ return NULL;
+
+ entry = xa_head(xas->xa);
+ if (!xa_is_node(entry)) {
+ if (xas->xa_index)
+ return set_bounds(xas);
+ } else {
+ if ((xas->xa_index >> xa_to_node(entry)->shift) > XA_CHUNK_MASK)
+ return set_bounds(xas);
+ }
+
+ xas->xa_node = NULL;
+ return entry;
+}
+
+static void *xas_descend(struct xa_state *xas, struct xa_node *node)
+{
+ unsigned int offset = get_offset(xas->xa_index, node);
+ void *entry = xa_entry(xas->xa, node, offset);
+
+ xas->xa_node = node;
+ if (xa_is_sibling(entry)) {
+ offset = xa_to_sibling(entry);
+ entry = xa_entry(xas->xa, node, offset);
+ xas_move_index(xas, offset);
+ }
+
+ xas->xa_offset = offset;
+ return entry;
+}
+
+/**
+ * xas_load() - Load an entry from the XArray (advanced).
+ * @xas: XArray operation state.
+ *
+ * Usually walks the @xas to the appropriate state to load the entry stored
+ * at xa_index. However, it will do nothing and return NULL if @xas is
+ * holding an error. If the xa_shift indicates we're operating on a
+ * multislot entry, it will terminate early and potentially return an
+ * internal entry. xas_load() will never expand the tree (see xas_create()).
+ *
+ * The caller should hold the xa_lock or the RCU lock.
+ *
+ * Return: Usually an entry in the XArray, but see description for exceptions.
+ */
+void *xas_load(struct xa_state *xas)
+{
+ void *entry = xas_start(xas);
+
+ while (xa_is_node(entry)) {
+ struct xa_node *node = xa_to_node(entry);
+
+ if (xas->xa_shift > node->shift)
+ break;
+ entry = xas_descend(xas, node);
+ }
+ return entry;
+}
+EXPORT_SYMBOL_GPL(xas_load);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -40,3 +134,99 @@ void xa_init_flags(struct xarray *xa, gfp_t flags)
xa->xa_head = NULL;
}
EXPORT_SYMBOL(xa_init_flags);
+
+/**
+ * xa_load() - Load an entry from an XArray.
+ * @xa: XArray.
+ * @index: index into array.
+ *
+ * Return: The entry at @index in @xa.
+ */
+void *xa_load(struct xarray *xa, unsigned long index)
+{
+ XA_STATE(xas, xa, index);
+ void *entry;
+
+ rcu_read_lock();
+ do {
+ entry = xas_load(&xas);
+ } while (xas_retry(&xas, entry));
+ rcu_read_unlock();
+
+ return entry;
+}
+EXPORT_SYMBOL(xa_load);
+
+#ifdef XA_DEBUG
+void xa_dump_node(const struct xa_node *node)
+{
+ unsigned i, j;
+
+ if (!node)
+ return;
+ if ((unsigned long)node & 3) {
+ pr_cont("node %px\n", node);
+ return;
+ }
+
+ pr_cont("node %px %s %d parent %px shift %d count %d values %d "
+ "array %px list %px %px tags",
+ node, node->parent ? "offset" : "max", node->offset,
+ node->parent, node->shift, node->count, node->nr_values,
+ node->array, node->private_list.prev, node->private_list.next);
+ for (i = 0; i < XA_MAX_TAGS; i++)
+ for (j = 0; j < XA_TAG_LONGS; j++)
+ pr_cont(" %lx", node->tags[i][j]);
+ pr_cont("\n");
+}
+
+void xa_dump_index(unsigned long index, unsigned int shift)
+{
+ if (!shift)
+ pr_info("%lu: ", index);
+ else if (shift >= BITS_PER_LONG)
+ pr_info("0-%lu: ", ~0UL);
+ else
+ pr_info("%lu-%lu: ", index, index | ((1UL << shift) - 1));
+}
+
+void xa_dump_entry(const void *entry, unsigned long index, unsigned long shift)
+{
+ if (!entry)
+ return;
+
+ xa_dump_index(index, shift);
+
+ if (xa_is_node(entry)) {
+ unsigned long i;
+ struct xa_node *node = xa_to_node(entry);
+ xa_dump_node(node);
+ for (i = 0; i < XA_CHUNK_SIZE; i++)
+ xa_dump_entry(node->slots[i],
+ index + (i << node->shift), node->shift);
+ } else if (xa_is_value(entry))
+ pr_cont("value %ld (0x%lx)\n", xa_to_value(entry),
+ xa_to_value(entry));
+ else if (!xa_is_internal(entry))
+ pr_cont("%px\n", entry);
+ else if (xa_is_retry(entry))
+ pr_cont("retry (%ld)\n", xa_to_internal(entry));
+ else if (xa_is_sibling(entry))
+ pr_cont("sibling (slot %ld)\n", xa_to_sibling(entry));
+ else
+ pr_cont("UNKNOWN ENTRY (%px)\n", entry);
+}
+
+void xa_dump(const struct xarray *xa)
+{
+ void *entry = xa->xa_head;
+ unsigned int shift = 0;
+
+ pr_info("xarray: %px head %px flags %x tags %d %d %d\n", xa, entry,
+ xa->xa_flags, xa_tagged(xa, XA_TAG_0),
+ xa_tagged(xa, XA_TAG_1), xa_tagged(xa, XA_TAG_2));
+ if (xa_is_node(entry))
+ shift = xa_to_node(entry)->shift + XA_CHUNK_SHIFT;
+ xa_dump_entry(entry, 0, shift);
+}
+#endif
diff --git a/tools/testing/radix-tree/.gitignore b/tools/testing/radix-tree/.gitignore
index 8d4df7a72a8e..833136896b91 100644
--- a/tools/testing/radix-tree/.gitignore
+++ b/tools/testing/radix-tree/.gitignore
@@ -5,3 +5,4 @@ main
multiorder
radix-tree.c
xarray.c
+xarray-test
diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
index 3868bc189199..951a8fbf15bd 100644
--- a/tools/testing/radix-tree/Makefile
+++ b/tools/testing/radix-tree/Makefile
@@ -3,10 +3,11 @@
CFLAGS += -I. -I../../include -g -O2 -Wall -D_LGPL_SOURCE -fsanitize=address
LDFLAGS += -fsanitize=address
LDLIBS+= -lpthread -lurcu
-TARGETS = main idr-test multiorder
+TARGETS = main idr-test multiorder xarray-test
CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o
OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \
- tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o
+ tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o \
+ xarray-test.o

ifndef SHIFT
SHIFT=3
@@ -23,6 +24,8 @@ main: $(OFILES)

idr-test: idr-test.o $(CORE_OFILES)

+xarray-test: $(CORE_OFILES)
+
multiorder: multiorder.o $(CORE_OFILES)

clean:
diff --git a/tools/testing/radix-tree/linux/kernel.h b/tools/testing/radix-tree/linux/kernel.h
index 426f32f28547..5d06ac75a14d 100644
--- a/tools/testing/radix-tree/linux/kernel.h
+++ b/tools/testing/radix-tree/linux/kernel.h
@@ -14,6 +14,7 @@
#include "../../../include/linux/kconfig.h"

#define printk printf
+#define pr_info printk
#define pr_debug printk
#define pr_cont printk

diff --git a/tools/testing/radix-tree/linux/radix-tree.h b/tools/testing/radix-tree/linux/radix-tree.h
index 40c9671ee365..36fb716d5557 100644
--- a/tools/testing/radix-tree/linux/radix-tree.h
+++ b/tools/testing/radix-tree/linux/radix-tree.h
@@ -5,7 +5,6 @@
#include "generated/map-shift.h"
#include "linux/bug.h"
#include "../../../../include/linux/radix-tree.h"
-#include <linux/xarray.h>

extern int kmalloc_verbose;
extern int test_verbose;
diff --git a/tools/testing/radix-tree/linux/rcupdate.h b/tools/testing/radix-tree/linux/rcupdate.h
index 73ed33658203..25010bf86c1d 100644
--- a/tools/testing/radix-tree/linux/rcupdate.h
+++ b/tools/testing/radix-tree/linux/rcupdate.h
@@ -6,5 +6,6 @@

#define rcu_dereference_raw(p) rcu_dereference(p)
#define rcu_dereference_protected(p, cond) rcu_dereference(p)
+#define rcu_dereference_check(p, cond) rcu_dereference(p)

#endif
diff --git a/tools/testing/radix-tree/linux/xarray.h b/tools/testing/radix-tree/linux/xarray.h
index df3812cda376..3eaf9596c2a6 100644
--- a/tools/testing/radix-tree/linux/xarray.h
+++ b/tools/testing/radix-tree/linux/xarray.h
@@ -1,2 +1,3 @@
#include "generated/map-shift.h"
+#define XA_DEBUG
#include "../../../../include/linux/xarray.h"
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
new file mode 100644
index 000000000000..3f8f19cb3739
--- /dev/null
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -0,0 +1,56 @@
+/*
+ * xarray-test.c: Test the XArray API
+ * Copyright (c) 2017 Microsoft Corporation <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/bitmap.h>
+#include <linux/xarray.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+
+#include "test.h"
+
+void check_xa_load(struct xarray *xa)
+{
+ unsigned long i, j;
+
+ for (i = 0; i < 1024; i++) {
+ for (j = 0; j < 1024; j++) {
+ void *entry = xa_load(xa, j);
+ if (j < i)
+ assert(xa_to_value(entry) == j);
+ else
+ assert(!entry);
+ }
+ radix_tree_insert(xa, i, xa_mk_value(i));
+ }
+}
+
+void xarray_checks(void)
+{
+ RADIX_TREE(array, GFP_KERNEL);
+
+ check_xa_load(&array);
+
+ item_kill_tree(&array);
+}
+
+int __weak main(void)
+{
+ radix_tree_init();
+ xarray_checks();
+ radix_tree_cpu_dead(1);
+ rcu_barrier();
+ if (nr_allocated)
+ printf("nr_allocated = %d\n", nr_allocated);
+ return 0;
+}
--
2.15.1


2018-01-17 21:16:07

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 10/99] xarray: Add xa_store

From: Matthew Wilcox <[email protected]>

xa_store() differs from radix_tree_insert() in that it will overwrite an
existing element in the array rather than returning an error. This is
the behaviour which most users want, and those that want more complex
behaviour generally want to use the xas family of routines anyway.

For memory allocation, xa_store() will first attempt to request memory
from the slab allocator; if memory is not immediately available, it will
drop the xa_lock and allocate memory, keeping a pointer in the xa_state.
It does not use the per-CPU cache, although those will continue to exist
until all radix tree users are converted to the xarray.
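
The retry loop that results from this scheme is the one xa_store()
itself uses; as a rough sketch, with xa, index, entry and gfp standing
in for the caller's values:

	XA_STATE(xas, xa, index);
	void *curr;

	do {
		xas_lock(&xas);
		curr = xas_store(&xas, entry);	/* may flag -ENOMEM in xas */
		xas_unlock(&xas);
	} while (xas_nomem(&xas, gfp));	/* allocates with the lock dropped */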

This patch also includes xa_erase() and __xa_erase() for a streamlined
way to store NULL. Since there is no need to allocate memory in order
to store a NULL in the XArray, we do not need to trouble the user with
deciding what memory allocation flags to use.
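
A rough usage sketch, with index and ptr standing in for the caller's
values:

	void *old = xa_store(xa, index, ptr, GFP_KERNEL);

	if (xa_err(old))		/* -ENOMEM if allocation failed */
		return xa_err(old);
	/* ... later, remove the entry again ... */
	old = xa_erase(xa, index);	/* no GFP flags needed */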

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 107 +++++
lib/radix-tree.c | 4 +-
lib/xarray.c | 642 ++++++++++++++++++++++++++++++
tools/include/linux/spinlock.h | 2 +
tools/testing/radix-tree/linux/kernel.h | 4 +
tools/testing/radix-tree/linux/lockdep.h | 11 +
tools/testing/radix-tree/linux/rcupdate.h | 1 +
tools/testing/radix-tree/test.c | 32 ++
tools/testing/radix-tree/test.h | 5 +
tools/testing/radix-tree/xarray-test.c | 113 +++++-
10 files changed, 917 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/radix-tree/linux/lockdep.h

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index ddeb49b8bfc1..139b1c1fd022 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -149,10 +149,17 @@ typedef unsigned __bitwise xa_tag_t;
#define XA_PRESENT ((__force xa_tag_t)8U)
#define XA_TAG_MAX XA_TAG_2

+enum xa_lock_type {
+ XA_LOCK_IRQ = 1,
+ XA_LOCK_BH = 2,
+};
+
/*
* Values for xa_flags. The radix tree stores its GFP flags in the xa_flags,
* and we remain compatible with that.
*/
+#define XA_FLAGS_LOCK_IRQ ((__force gfp_t)XA_LOCK_IRQ)
+#define XA_FLAGS_LOCK_BH ((__force gfp_t)XA_LOCK_BH)
#define XA_FLAGS_TAG(tag) ((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
(__force unsigned)(tag)))

@@ -202,6 +209,7 @@ struct xarray {

void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
+void *xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
@@ -217,6 +225,33 @@ static inline void xa_init(struct xarray *xa)
xa_init_flags(xa, 0);
}

+/**
+ * xa_erase() - Erase this entry from the XArray.
+ * @xa: XArray.
+ * @index: Index of entry.
+ *
+ * This function is the equivalent of calling xa_store() with %NULL as
+ * the third argument. The XArray does not need to allocate memory, so
+ * the user does not need to provide GFP flags.
+ *
+ * Return: The entry which used to be at this index.
+ */
+static inline void *xa_erase(struct xarray *xa, unsigned long index)
+{
+ return xa_store(xa, index, NULL, 0);
+}
+
+/**
+ * xa_empty() - Determine if an array has any present entries.
+ * @xa: XArray.
+ *
+ * Return: %true if the array contains only NULL pointers.
+ */
+static inline bool xa_empty(const struct xarray *xa)
+{
+ return xa->xa_head == NULL;
+}
+
/**
* xa_tagged() - Inquire whether any entry in this array has a tag set
* @xa: Array
@@ -243,7 +278,11 @@ static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)

/*
* Versions of the normal API which require the caller to hold the xa_lock.
+ * If the GFP flags allow it, these will drop the lock in order to allocate
+ * memory, then reacquire it afterwards.
*/
+void *__xa_erase(struct xarray *, unsigned long index);
+void *__xa_store(struct xarray *, unsigned long index, void *entry, gfp_t);
void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);

@@ -339,6 +378,12 @@ static inline void *xa_entry_locked(struct xarray *xa,
lockdep_is_held(&xa->xa_lock));
}

+/* Private */
+static inline void *xa_mk_node(const struct xa_node *node)
+{
+ return (void *)((unsigned long)node | 2);
+}
+
/* Private */
static inline struct xa_node *xa_to_node(const void *entry)
{
@@ -519,6 +564,12 @@ static inline bool xas_valid(const struct xa_state *xas)
return !xas_invalid(xas);
}

+/* True if the node represents head-of-tree, RESTART or BOUNDS */
+static inline bool xas_top(struct xa_node *node)
+{
+ return node <= XAS_RESTART;
+}
+
/**
* xas_retry() - Handle a retry entry.
* @xas: XArray operation state.
@@ -539,10 +590,15 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)
}

void *xas_load(struct xa_state *);
+void *xas_store(struct xa_state *, void *entry);
+void *xas_create(struct xa_state *);

bool xas_get_tag(const struct xa_state *, xa_tag_t);
void xas_set_tag(const struct xa_state *, xa_tag_t);
void xas_clear_tag(const struct xa_state *, xa_tag_t);
+void xas_init_tags(const struct xa_state *);
+
+bool xas_nomem(struct xa_state *, gfp_t);

/**
* xas_reload() - Refetch an entry from the xarray.
@@ -567,4 +623,55 @@ static inline void *xas_reload(struct xa_state *xas)
return xa_head(xas->xa);
}

+/**
+ * xas_set() - Set up XArray operation state for a different index.
+ * @xas: XArray operation state.
+ * @index: New index into the XArray.
+ *
+ * Move the operation state to refer to a different index. This will
+ * have the effect of starting a walk from the top; see xas_next()
+ * to move to an adjacent index.
+ */
+static inline void xas_set(struct xa_state *xas, unsigned long index)
+{
+ xas->xa_index = index;
+ xas->xa_node = XAS_RESTART;
+}
+
+/**
+ * xas_set_order() - Set up XArray operation state for a multislot entry.
+ * @xas: XArray operation state.
+ * @index: Target of the operation.
+ * @order: Entry occupies 2^@order indices.
+ */
+static inline void xas_set_order(struct xa_state *xas, unsigned long index,
+ unsigned int order)
+{
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ xas->xa_index = (index >> order) << order;
+ xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
+ xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+ xas->xa_node = XAS_RESTART;
+#else
+ BUG_ON(order > 0);
+ xas_set(xas, index);
+#endif
+}
+
+/**
+ * xas_set_update() - Set up XArray operation state for a callback.
+ * @xas: XArray operation state.
+ * @update: Function to call when updating a node.
+ *
+ * The XArray can notify a caller after it has updated an xa_node.
+ * This is advanced functionality and is only needed by the page cache.
+ */
+static inline void xas_set_update(struct xa_state *xas, xa_update_node_t update)
+{
+ xas->xa_update = update;
+}
+
+/* Internal functions, mostly shared between radix-tree.c, xarray.c and idr.c */
+void xas_destroy(struct xa_state *);
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index be9ace9f9145..a0fdea68ce9c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -46,7 +46,7 @@ static unsigned long height_to_maxnodes[RADIX_TREE_MAX_PATH + 1] __read_mostly;
/*
* Radix tree node cache.
*/
-static struct kmem_cache *radix_tree_node_cachep;
+struct kmem_cache *radix_tree_node_cachep;

/*
* The radix tree is variable-height, so an insert operation not only has
@@ -364,7 +364,7 @@ radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
return ret;
}

-static void radix_tree_node_rcu_free(struct rcu_head *head)
+void radix_tree_node_rcu_free(struct rcu_head *head)
{
struct radix_tree_node *node =
container_of(head, struct radix_tree_node, rcu_head);
diff --git a/lib/xarray.c b/lib/xarray.c
index 59b57e6f80de..45b70e622bf1 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -7,6 +7,8 @@

#include <linux/bitmap.h>
#include <linux/export.h>
+#include <linux/list.h>
+#include <linux/slab.h>
#include <linux/xarray.h>

/*
@@ -39,6 +41,11 @@ static inline struct xa_node *xa_parent_locked(struct xarray *xa,
lockdep_is_held(&xa->xa_lock));
}

+static inline unsigned int xa_lock_type(const struct xarray *xa)
+{
+ return (__force unsigned int)xa->xa_flags & 3;
+}
+
static inline void xa_tag_set(struct xarray *xa, xa_tag_t tag)
{
if (!(xa->xa_flags & XA_FLAGS_TAG(tag)))
@@ -74,6 +81,10 @@ static inline bool node_any_tag(struct xa_node *node, xa_tag_t tag)
return !bitmap_empty(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
}

+#define tag_inc(tag) do { \
+ tag = (__force xa_tag_t)((__force unsigned)(tag) + 1); \
+} while (0)
+
/* extracts the offset within this node from the index */
static unsigned int get_offset(unsigned long index, struct xa_node *node)
{
@@ -168,6 +179,515 @@ void *xas_load(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_load);

+/* Move the radix tree node cache here */
+extern struct kmem_cache *radix_tree_node_cachep;
+extern void radix_tree_node_rcu_free(struct rcu_head *head);
+
+#define XA_RCU_FREE ((struct xarray *)1)
+
+static void xa_node_free(struct xa_node *node)
+{
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ node->array = XA_RCU_FREE;
+ call_rcu(&node->rcu_head, radix_tree_node_rcu_free);
+}
+
+/*
+ * xas_destroy() - Free any resources allocated during the XArray operation.
+ * @xas: XArray operation state.
+ *
+ * This function is now internal-only (and will be made static once
+ * idr_preload() is removed).
+ */
+void xas_destroy(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_alloc;
+
+ if (!node)
+ return;
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ kmem_cache_free(radix_tree_node_cachep, node);
+ xas->xa_alloc = NULL;
+}
+
+/**
+ * xas_nomem() - Allocate memory if needed.
+ * @xas: XArray operation state.
+ * @gfp: Memory allocation flags.
+ *
+ * If we need to add new nodes to the XArray, we try to allocate memory
+ * with GFP_NOWAIT while holding the lock, which will usually succeed.
+ * If it fails, @xas is flagged as needing memory to continue. The caller
+ * should drop the lock and call xas_nomem(). If xas_nomem() succeeds,
+ * the caller should retry the operation.
+ *
+ * Forward progress is guaranteed as one node is allocated here and
+ * stored in the xa_state where it will be found by xas_alloc(). More
+ * nodes will likely be found in the slab allocator, but we do not tie
+ * them up here.
+ *
+ * Return: true if memory was needed, and was successfully allocated.
+ */
+bool xas_nomem(struct xa_state *xas, gfp_t gfp)
+{
+ if (xas->xa_node != XA_ERROR(-ENOMEM)) {
+ xas_destroy(xas);
+ return false;
+ }
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ if (!xas->xa_alloc)
+ return false;
+ XA_NODE_BUG_ON(xas->xa_alloc, !list_empty(&xas->xa_alloc->private_list));
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+EXPORT_SYMBOL_GPL(xas_nomem);
+
+/*
+ * __xas_nomem() - Drop locks and allocate memory if needed.
+ * @xas: XArray operation state.
+ * @gfp: Memory allocation flags.
+ *
+ * Internal variant of xas_nomem().
+ *
+ * Return: true if memory was needed, and was successfully allocated.
+ */
+static bool __xas_nomem(struct xa_state *xas, gfp_t gfp)
+ __must_hold(xas->xa->xa_lock)
+{
+ unsigned int lock_type = xa_lock_type(xas->xa);
+
+ if (xas->xa_node != XA_ERROR(-ENOMEM)) {
+ xas_destroy(xas);
+ return false;
+ }
+ if (gfpflags_allow_blocking(gfp)) {
+ if (lock_type == XA_LOCK_IRQ)
+ xas_unlock_irq(xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_unlock_bh(xas);
+ else
+ xas_unlock(xas);
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ if (lock_type == XA_LOCK_IRQ)
+ xas_lock_irq(xas);
+ else if (lock_type == XA_LOCK_BH)
+ xas_lock_bh(xas);
+ else
+ xas_lock(xas);
+ } else {
+ xas->xa_alloc = kmem_cache_alloc(radix_tree_node_cachep, gfp);
+ }
+ if (!xas->xa_alloc)
+ return false;
+ XA_NODE_BUG_ON(xas->xa_alloc, !list_empty(&xas->xa_alloc->private_list));
+ xas->xa_node = XAS_RESTART;
+ return true;
+}
+
+static void xas_update(struct xa_state *xas, struct xa_node *node)
+{
+ if (xas->xa_update)
+ xas->xa_update(node);
+ else
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+}
+
+static void *xas_alloc(struct xa_state *xas, unsigned int shift)
+{
+ struct xa_node *parent = xas->xa_node;
+ struct xa_node *node = xas->xa_alloc;
+
+ if (xas_invalid(xas))
+ return NULL;
+
+ if (node) {
+ xas->xa_alloc = NULL;
+ } else {
+ node = kmem_cache_alloc(radix_tree_node_cachep,
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (!node) {
+ xas_set_err(xas, -ENOMEM);
+ return NULL;
+ }
+ }
+
+ if (parent) {
+ node->offset = xas->xa_offset;
+ parent->count++;
+ XA_NODE_BUG_ON(node, parent->count > XA_CHUNK_SIZE);
+ xas_update(xas, parent);
+ }
+ XA_NODE_BUG_ON(node, shift > BITS_PER_LONG);
+ XA_NODE_BUG_ON(node, !list_empty(&node->private_list));
+ node->shift = shift;
+ node->count = 0;
+ node->nr_values = 0;
+ RCU_INIT_POINTER(node->parent, xas->xa_node);
+ node->array = xas->xa;
+
+ return node;
+}
+
+/*
+ * Use this to calculate the maximum index that will need to be created
+ * in order to add the entry described by @xas. Because we cannot store a
+ * multiple-index entry at index 0, the calculation is a little more complex
+ * than you might expect.
+ */
+static unsigned long xas_max(struct xa_state *xas)
+{
+ unsigned long max = xas->xa_index;
+
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ if (xas->xa_shift || xas->xa_sibs) {
+ unsigned long mask;
+ mask = (((xas->xa_sibs + 1UL) << xas->xa_shift) - 1);
+ max |= mask;
+ if (mask == max)
+ max++;
+ }
+#endif
+
+ return max;
+}
+
+/* The maximum index that can be contained in the array without expanding it */
+static unsigned long max_index(void *entry)
+{
+ if (!xa_is_node(entry))
+ return 0;
+ return (XA_CHUNK_SIZE << xa_to_node(entry)->shift) - 1;
+}
+
+static void xas_shrink(struct xa_state *xas)
+{
+ struct xarray *xa = xas->xa;
+ struct xa_node *node = xas->xa_node;
+
+ for (;;) {
+ void *entry;
+
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ if (node->count != 1)
+ break;
+ entry = xa_entry_locked(xa, node, 0);
+ if (!entry)
+ break;
+ if (!xa_is_node(entry) && node->shift)
+ break;
+ xas->xa_node = XAS_BOUNDS;
+
+ RCU_INIT_POINTER(xa->xa_head, entry);
+
+ node->count = 0;
+ node->nr_values = 0;
+ if (!xa_is_node(entry))
+ RCU_INIT_POINTER(node->slots[0], XA_RETRY_ENTRY);
+ xas_update(xas, node);
+ xa_node_free(node);
+ if (!xa_is_node(entry))
+ break;
+ node = xa_to_node(entry);
+ node->parent = NULL;
+ }
+}
+
+/*
+ * xas_delete_node() - Attempt to delete an xa_node
+ * @xas: Array operation state.
+ *
+ * Attempts to delete the @xas->xa_node. This will fail if the node has
+ * a non-zero reference count.
+ */
+static void xas_delete_node(struct xa_state *xas)
+{
+ struct xa_node *node = xas->xa_node;
+
+ for (;;) {
+ struct xa_node *parent;
+
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ if (node->count)
+ break;
+
+ parent = xa_parent_locked(xas->xa, node);
+ xas->xa_node = parent;
+ xas->xa_offset = node->offset;
+ xa_node_free(node);
+
+ if (!parent) {
+ xas->xa->xa_head = NULL;
+ xas->xa_node = XAS_BOUNDS;
+ return;
+ }
+
+ parent->slots[xas->xa_offset] = NULL;
+ parent->count--;
+ XA_NODE_BUG_ON(parent, parent->count > XA_CHUNK_SIZE);
+ node = parent;
+ xas_update(xas, node);
+ }
+
+ if (!node->parent)
+ xas_shrink(xas);
+}
+
+/**
+ * xas_free_nodes() - Free this node and all nodes that it references
+ * @xas: Array operation state.
+ * @top: Node to free
+ *
+ * This node has been removed from the tree. We must now free it and all
+ * of its subnodes. There may be RCU walkers with references into the tree,
+ * so we must replace all entries with retry markers.
+ */
+static void xas_free_nodes(struct xa_state *xas, struct xa_node *top)
+{
+ unsigned int offset = 0;
+ struct xa_node *node = top;
+
+ for (;;) {
+ void *entry = xa_entry_locked(xas->xa, node, offset);
+
+ if (xa_is_node(entry)) {
+ node = xa_to_node(entry);
+ offset = 0;
+ continue;
+ }
+ if (entry)
+ RCU_INIT_POINTER(node->slots[offset], XA_RETRY_ENTRY);
+ offset++;
+ while (offset == XA_CHUNK_SIZE) {
+ struct xa_node *parent;
+
+ parent = xa_parent_locked(xas->xa, node);
+ offset = node->offset + 1;
+ node->count = 0;
+ node->nr_values = 0;
+ xas_update(xas, node);
+ xa_node_free(node);
+ if (node == top)
+ return;
+ node = parent;
+ }
+ }
+}
+
+/*
+ * xas_expand() adds nodes to the head of the tree until it has reached
+ * sufficient height to contain @xas->xa_index.
+ */
+static int xas_expand(struct xa_state *xas, void *head)
+{
+ struct xarray *xa = xas->xa;
+ struct xa_node *node = NULL;
+ unsigned int shift = 0;
+ unsigned long max = xas_max(xas);
+
+ if (!head) {
+ if (max == 0)
+ return 0;
+ while ((max >> shift) >= XA_CHUNK_SIZE)
+ shift += XA_CHUNK_SHIFT;
+ return shift + XA_CHUNK_SHIFT;
+ } else if (xa_is_node(head)) {
+ node = xa_to_node(head);
+ shift = node->shift + XA_CHUNK_SHIFT;
+ }
+ xas->xa_node = NULL;
+
+ while (max > max_index(head)) {
+ xa_tag_t tag = 0;
+
+ XA_NODE_BUG_ON(node, shift > BITS_PER_LONG);
+ node = xas_alloc(xas, shift);
+ if (!node)
+ return -ENOMEM;
+
+ node->count = 1;
+ if (xa_is_value(head))
+ node->nr_values = 1;
+ RCU_INIT_POINTER(node->slots[0], head);
+
+ /* Propagate the aggregated tag info to the new child */
+ for (;;) {
+ if (xa_tagged(xa, tag))
+ node_set_tag(node, 0, tag);
+ if (tag == XA_TAG_MAX)
+ break;
+ tag_inc(tag);
+ }
+
+ /*
+ * Now that the new node is fully initialised, we can add
+ * it to the tree
+ */
+ if (xa_is_node(head)) {
+ xa_to_node(head)->offset = 0;
+ rcu_assign_pointer(xa_to_node(head)->parent, node);
+ }
+ head = xa_mk_node(node);
+ rcu_assign_pointer(xa->xa_head, head);
+ xas_update(xas, node);
+
+ shift += XA_CHUNK_SHIFT;
+ }
+
+ xas->xa_node = node;
+ return shift;
+}
+
+/**
+ * xas_create() - Create a slot to store an entry in.
+ * @xas: XArray operation state.
+ *
+ * Most users will not need to call this function directly, as it is called
+ * by xas_store(). It is useful for doing conditional store operations
+ * (see the xa_cmpxchg() implementation for an example).
+ *
+ * Return: If the slot already existed, returns the contents of this slot.
+ * If the slot was newly created, returns NULL. If it failed to create the
+ * slot, returns NULL and indicates the error in @xas.
+ */
+void *xas_create(struct xa_state *xas)
+{
+ struct xarray *xa = xas->xa;
+ void *entry;
+ void __rcu **slot;
+ struct xa_node *node = xas->xa_node;
+ int shift;
+ unsigned int order = xas->xa_shift;
+
+ if (xas_top(node)) {
+ entry = xa_head_locked(xa);
+ xas->xa_node = NULL;
+ shift = xas_expand(xas, entry);
+ if (shift < 0)
+ return NULL;
+ entry = xa_head_locked(xa);
+ slot = &xa->xa_head;
+ } else if (xas_error(xas)) {
+ return NULL;
+ } else if (node) {
+ unsigned int offset = xas->xa_offset;
+
+ shift = node->shift;
+ entry = xa_entry_locked(xa, node, offset);
+ slot = &node->slots[offset];
+ } else {
+ shift = 0;
+ entry = xa_head_locked(xa);
+ slot = &xa->xa_head;
+ }
+
+ while (shift > order) {
+ shift -= XA_CHUNK_SHIFT;
+ if (!entry) {
+ node = xas_alloc(xas, shift);
+ if (!node)
+ break;
+ rcu_assign_pointer(*slot, xa_mk_node(node));
+ } else if (xa_is_node(entry)) {
+ node = xa_to_node(entry);
+ } else {
+ break;
+ }
+ entry = xas_descend(xas, node);
+ slot = &node->slots[xas->xa_offset];
+ }
+
+ return entry;
+}
+EXPORT_SYMBOL_GPL(xas_create);
+
+static void store_siblings(struct xa_state *xas, void *entry, void *curr,
+ int *countp, int *valuesp)
+{
+#ifdef CONFIG_RADIX_TREE_MULTIORDER
+ struct xa_node *node = xas->xa_node;
+ unsigned int sibs, offset = xas->xa_offset;
+ void *sibling = entry ? xa_mk_sibling(offset) : NULL;
+
+ if (!entry)
+ sibs = XA_CHUNK_MASK - offset;
+ else if (xas->xa_shift < node->shift)
+ sibs = 0;
+ else
+ sibs = xas->xa_sibs;
+
+ while (sibs--) {
+ void *next = xa_entry(xas->xa, node, ++offset);
+
+ if (!xa_is_sibling(next)) {
+ if (!entry)
+ break;
+ curr = next;
+ }
+ RCU_INIT_POINTER(node->slots[offset], sibling);
+ if (xa_is_node(next))
+ xas_free_nodes(xas, xa_to_node(next));
+ *countp += !next - !entry;
+ *valuesp += !xa_is_value(curr) - !xa_is_value(entry);
+ }
+#endif
+}
+
+/**
+ * xas_store() - Store this entry in the XArray.
+ * @xas: XArray operation state.
+ * @entry: New entry.
+ *
+ * Return: The old entry at this index.
+ */
+void *xas_store(struct xa_state *xas, void *entry)
+{
+ struct xa_node *node;
+ int count, values;
+ void *curr;
+
+ if (entry)
+ curr = xas_create(xas);
+ else
+ curr = xas_load(xas);
+ if (xas_invalid(xas))
+ return curr;
+ if ((curr == entry) && !xas->xa_sibs)
+ return curr;
+
+ node = xas->xa_node;
+ if (!entry)
+ xas_init_tags(xas);
+ /*
+ * Must clear the tags before setting the entry to NULL, otherwise
+ * xas_for_each_tag may find a NULL entry and stop early.
+ */
+ if (node)
+ rcu_assign_pointer(node->slots[xas->xa_offset], entry);
+ else
+ rcu_assign_pointer(xas->xa->xa_head, entry);
+
+ values = !xa_is_value(curr) - !xa_is_value(entry);
+ count = !curr - !entry;
+ if (xa_is_node(curr))
+ xas_free_nodes(xas, xa_to_node(curr));
+
+ if (node) {
+ store_siblings(xas, entry, curr, &count, &values);
+ node->count += count;
+ XA_NODE_BUG_ON(node, node->count > XA_CHUNK_SIZE);
+ node->nr_values += values;
+ XA_NODE_BUG_ON(node, node->nr_values > XA_CHUNK_SIZE);
+ if (count || values)
+ xas_update(xas, node);
+ if (count < 0)
+ xas_delete_node(xas);
+ }
+
+ return curr;
+}
+EXPORT_SYMBOL_GPL(xas_store);
+
/**
* xas_get_tag() - Returns the state of this tag.
* @xas: XArray operation state.
@@ -247,6 +767,30 @@ void xas_clear_tag(const struct xa_state *xas, xa_tag_t tag)
}
EXPORT_SYMBOL_GPL(xas_clear_tag);

+/**
+ * xas_init_tags() - Initialise all tags for the entry
+ * @xas: Array operations state.
+ *
+ * Initialise all tags for the entry specified by @xas. If we're tracking
+ * free entries with a tag, we need to set it on all entries. All other
+ * tags are cleared.
+ *
+ * This implementation is not as efficient as it could be; we may walk
+ * up the tree multiple times.
+ */
+void xas_init_tags(const struct xa_state *xas)
+{
+ xa_tag_t tag = 0;
+
+ for (;;) {
+ xas_clear_tag(xas, tag);
+ if (tag == XA_TAG_MAX)
+ break;
+ tag_inc(tag);
+ }
+}
+EXPORT_SYMBOL_GPL(xas_init_tags);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -258,9 +802,19 @@ EXPORT_SYMBOL_GPL(xas_clear_tag);
*/
void xa_init_flags(struct xarray *xa, gfp_t flags)
{
+ unsigned int lock_type;
+ static struct lock_class_key xa_lock_irq;
+ static struct lock_class_key xa_lock_bh;
+
spin_lock_init(&xa->xa_lock);
xa->xa_flags = flags;
xa->xa_head = NULL;
+
+ lock_type = xa_lock_type(xa);
+ if (lock_type == XA_LOCK_IRQ)
+ lockdep_set_class(&xa->xa_lock, &xa_lock_irq);
+ else if (lock_type == XA_LOCK_BH)
+ lockdep_set_class(&xa->xa_lock, &xa_lock_bh);
}
EXPORT_SYMBOL(xa_init_flags);

@@ -286,6 +840,94 @@ void *xa_load(struct xarray *xa, unsigned long index)
}
EXPORT_SYMBOL(xa_load);

+static void *xas_result(struct xa_state *xas, void *curr)
+{
+ XA_NODE_BUG_ON(xas->xa_node, xa_is_internal(curr));
+ if (xas_error(xas))
+ curr = xas->xa_node;
+ return curr;
+}
+
+/**
+ * __xa_erase() - Erase this entry from the XArray while locked.
+ * @xa: XArray.
+ * @index: Index into array.
+ *
+ * If the entry at this index is a multi-index entry then all indices will
+ * be erased, and the entry will no longer be a multi-index entry.
+ * This function expects the xa_lock to be held on entry.
+ *
+ * Return: The old entry at this index.
+ */
+void *__xa_erase(struct xarray *xa, unsigned long index)
+{
+ XA_STATE(xas, xa, index);
+ return xas_result(&xas, xas_store(&xas, NULL));
+}
+EXPORT_SYMBOL_GPL(__xa_erase);
+
+/**
+ * xa_store() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * Stores almost always succeed. The notable exceptions:
+ * - Attempted to store a reserved pointer entry (-EINVAL)
+ * - Ran out of memory trying to allocate new nodes (-ENOMEM)
+ *
+ * Storing into an existing multislot entry updates the entry at every index.
+ *
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ xas_lock(&xas);
+ curr = xas_store(&xas, entry);
+ xas_unlock(&xas);
+ } while (xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(xa_store);
+
+/**
+ * __xa_store() - Store this entry in the XArray.
+ * @xa: XArray.
+ * @index: Index into array.
+ * @entry: New entry.
+ * @gfp: Memory allocation flags.
+ *
+ * You must already be holding the xa_lock when calling this function.
+ * It will drop the lock if needed to allocate memory, and then reacquire
+ * it afterwards.
+ *
+ * Return: The old entry at this index or xa_err() if an error happened.
+ */
+void *__xa_store(struct xarray *xa, unsigned long index, void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, index);
+ void *curr;
+
+ if (WARN_ON_ONCE(xa_is_internal(entry)))
+ return XA_ERROR(-EINVAL);
+
+ do {
+ curr = xas_store(&xas, entry);
+ } while (__xas_nomem(&xas, gfp));
+
+ return xas_result(&xas, curr);
+}
+EXPORT_SYMBOL(__xa_store);
+
/**
* __xa_set_tag() - Set this tag on this entry while locked.
* @xa: XArray.
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 85a009001109..f900a2b7509d 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -37,4 +37,6 @@ static inline bool arch_spin_is_locked(arch_spinlock_t *mutex)
return true;
}

+#include <linux/lockdep.h>
+
#endif
diff --git a/tools/testing/radix-tree/linux/kernel.h b/tools/testing/radix-tree/linux/kernel.h
index 5d06ac75a14d..4568248222ae 100644
--- a/tools/testing/radix-tree/linux/kernel.h
+++ b/tools/testing/radix-tree/linux/kernel.h
@@ -18,4 +18,8 @@
#define pr_debug printk
#define pr_cont printk

+#define __acquires(x)
+#define __releases(x)
+#define __must_hold(x)
+
#endif /* _KERNEL_H */
diff --git a/tools/testing/radix-tree/linux/lockdep.h b/tools/testing/radix-tree/linux/lockdep.h
new file mode 100644
index 000000000000..565fccdfe6e9
--- /dev/null
+++ b/tools/testing/radix-tree/linux/lockdep.h
@@ -0,0 +1,11 @@
+#ifndef _LINUX_LOCKDEP_H
+#define _LINUX_LOCKDEP_H
+struct lock_class_key {
+ unsigned int a;
+};
+
+static inline void lockdep_set_class(spinlock_t *lock,
+ struct lock_class_key *key)
+{
+}
+#endif /* _LINUX_LOCKDEP_H */
diff --git a/tools/testing/radix-tree/linux/rcupdate.h b/tools/testing/radix-tree/linux/rcupdate.h
index 25010bf86c1d..fd280b070fdb 100644
--- a/tools/testing/radix-tree/linux/rcupdate.h
+++ b/tools/testing/radix-tree/linux/rcupdate.h
@@ -7,5 +7,6 @@
#define rcu_dereference_raw(p) rcu_dereference(p)
#define rcu_dereference_protected(p, cond) rcu_dereference(p)
#define rcu_dereference_check(p, cond) rcu_dereference(p)
+#define RCU_INIT_POINTER(p, v) (p) = (v)

#endif
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 6e1cc2040817..f151588d04a0 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -8,6 +8,38 @@

#include "test.h"

+void *xa_store_order(struct xarray *xa, unsigned long index, unsigned order,
+ void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, 0);
+ void *curr;
+
+ xas_set_order(&xas, index, order);
+ do {
+ curr = xas_store(&xas, entry);
+ } while (xas_nomem(&xas, gfp));
+
+ return curr;
+}
+
+int xa_insert_order(struct xarray *xa, unsigned long index, unsigned order,
+ void *entry, gfp_t gfp)
+{
+ XA_STATE(xas, xa, 0);
+ void *curr;
+
+ xas_set_order(&xas, index, order);
+ do {
+ curr = xas_create(&xas);
+ if (!curr)
+ xas_store(&xas, entry);
+ } while (xas_nomem(&xas, gfp));
+
+ if (xas_error(&xas))
+ return xas_error(&xas);
+ return curr ? -EEXIST : 0;
+}
+
struct item *
item_tag_set(struct radix_tree_root *root, unsigned long index, int tag)
{
diff --git a/tools/testing/radix-tree/test.h b/tools/testing/radix-tree/test.h
index d9c031dbeb1a..ffd162645c11 100644
--- a/tools/testing/radix-tree/test.h
+++ b/tools/testing/radix-tree/test.h
@@ -4,6 +4,11 @@
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

+void *xa_store_order(struct xarray *, unsigned long index, unsigned order,
+ void *entry, gfp_t);
+int xa_insert_order(struct xarray *, unsigned long index, unsigned order,
+ void *entry, gfp_t);
+
struct item {
unsigned long index;
unsigned int order;
diff --git a/tools/testing/radix-tree/xarray-test.c b/tools/testing/radix-tree/xarray-test.c
index 3f8f19cb3739..5defd0b9f85c 100644
--- a/tools/testing/radix-tree/xarray-test.c
+++ b/tools/testing/radix-tree/xarray-test.c
@@ -19,6 +19,36 @@

#include "test.h"

+void check_xa_err(struct xarray *xa)
+{
+ assert(xa_err(xa_store(xa, 0, xa_mk_value(0), GFP_NOWAIT)) == 0);
+ assert(xa_err(xa_store(xa, 0, NULL, 0)) == 0);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_NOWAIT)) == -ENOMEM);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_NOWAIT)) == -ENOMEM);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL)) == 0);
+ assert(xa_err(xa_store(xa, 1, xa_mk_value(0), GFP_KERNEL)) == 0);
+ assert(xa_err(xa_store(xa, 1, NULL, 0)) == 0);
+// kills the test-suite :-(
+// assert(xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) == -EINVAL);
+}
+
+void check_xa_tag(struct xarray *xa)
+{
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ assert(xa_store(xa, 0, xa, GFP_KERNEL) == NULL);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == true);
+ assert(xa_get_tag(xa, 1, XA_TAG_0) == false);
+ assert(xa_store(xa, 0, NULL, GFP_KERNEL) == xa);
+ assert(xa_empty(xa));
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+ xa_set_tag(xa, 0, XA_TAG_0);
+ assert(xa_get_tag(xa, 0, XA_TAG_0) == false);
+}
+
void check_xa_load(struct xarray *xa)
{
unsigned long i, j;
@@ -31,16 +61,95 @@ void check_xa_load(struct xarray *xa)
else
assert(!entry);
}
- radix_tree_insert(xa, i, xa_mk_value(i));
+ xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
+ }
+}
+
+void check_xa_shrink(struct xarray *xa)
+{
+ XA_STATE(xas, xa, 1);
+ struct xa_node *node;
+
+ xa_store(xa, 0, xa_mk_value(0), GFP_KERNEL);
+ xa_store(xa, 1, xa_mk_value(1), GFP_KERNEL);
+
+ assert(xas_load(&xas) == xa_mk_value(1));
+ node = xas.xa_node;
+ assert(node->slots[0] == xa_mk_value(0));
+ rcu_read_lock();
+ xas_store(&xas, NULL);
+ assert(xas.xa_node == XAS_BOUNDS);
+ assert(node->slots[0] == XA_RETRY_ENTRY);
+ rcu_read_unlock();
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+}
+
+void check_multi_store(struct xarray *xa)
+{
+ unsigned long i, j, k;
+
+ xa_store_order(xa, 0, 1, xa_mk_value(0), GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+ assert(xa_load(xa, 1) == xa_mk_value(0));
+ assert(xa_load(xa, 2) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 2);
+ assert(xa_to_node(xa_head(xa))->nr_values == 2);
+
+ xa_store(xa, 3, xa, GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(0));
+ assert(xa_load(xa, 1) == xa_mk_value(0));
+ assert(xa_load(xa, 2) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 3);
+ assert(xa_to_node(xa_head(xa))->nr_values == 2);
+
+ xa_store_order(xa, 0, 2, xa_mk_value(1), GFP_KERNEL);
+ assert(xa_load(xa, 0) == xa_mk_value(1));
+ assert(xa_load(xa, 1) == xa_mk_value(1));
+ assert(xa_load(xa, 2) == xa_mk_value(1));
+ assert(xa_load(xa, 3) == xa_mk_value(1));
+ assert(xa_load(xa, 4) == NULL);
+ assert(xa_to_node(xa_head(xa))->count == 4);
+ assert(xa_to_node(xa_head(xa))->nr_values == 4);
+
+ xa_store_order(xa, 0, 64, NULL, GFP_KERNEL);
+ assert(xa_empty(xa));
+
+ for (i = 0; i < 60; i++) {
+ for (j = 0; j < 60; j++) {
+ xa_store_order(xa, 0, i, xa_mk_value(i), GFP_KERNEL);
+ xa_store_order(xa, 0, j, xa_mk_value(j), GFP_KERNEL);
+
+ for (k = 0; k < 60; k++) {
+ void *entry = xa_load(xa, (1UL << k) - 1);
+ if ((i < k) && (j < k))
+ assert(entry == NULL);
+ else
+ assert(entry == xa_mk_value(j));
+ }
+
+ xa_erase(xa, 0);
+ assert(xa_empty(xa));
+ }
}
}

void xarray_checks(void)
{
- RADIX_TREE(array, GFP_KERNEL);
+ DEFINE_XARRAY(array);
+
+ check_xa_err(&array);
+ item_kill_tree(&array);
+
+ check_xa_tag(&array);
+ item_kill_tree(&array);

check_xa_load(&array);
+ item_kill_tree(&array);
+
+ check_xa_shrink(&array);
+ item_kill_tree(&array);

+ check_multi_store(&array);
item_kill_tree(&array);
}

--
2.15.1


2018-01-17 21:16:17

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 09/99] xarray: Add xa_get_tag, xa_set_tag and xa_clear_tag

From: Matthew Wilcox <[email protected]>

XArray tags are slightly more strongly typed than the radix tree tags,
but occupy the same bits. This commit also adds the xas_ family of tag
operations, for cases where the caller is already holding the lock, and
xa_tagged() to ask whether any array member has a particular tag set.
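
A minimal sketch, assuming xa and index come from the caller; the
locked form is what __xa_set_tag() does internally:

	XA_STATE(xas, xa, index);

	xa_lock(xa);
	if (xas_load(&xas))	/* tags only stick to present entries */
		xas_set_tag(&xas, XA_TAG_0);
	xa_unlock(xa);

	/* The unlocked form takes the xa_lock itself: */
	xa_clear_tag(xa, index, XA_TAG_0);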

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 40 +++++++
lib/xarray.c | 229 +++++++++++++++++++++++++++++++++++++++++
tools/include/linux/spinlock.h | 6 ++
3 files changed, 275 insertions(+)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 54c694e5c33f..ddeb49b8bfc1 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -11,6 +11,7 @@

#include <linux/bug.h>
#include <linux/compiler.h>
+#include <linux/gfp.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
@@ -141,6 +142,20 @@ static inline int xa_err(void *entry)
return 0;
}

+typedef unsigned __bitwise xa_tag_t;
+#define XA_TAG_0 ((__force xa_tag_t)0U)
+#define XA_TAG_1 ((__force xa_tag_t)1U)
+#define XA_TAG_2 ((__force xa_tag_t)2U)
+#define XA_PRESENT ((__force xa_tag_t)8U)
+#define XA_TAG_MAX XA_TAG_2
+
+/*
+ * Values for xa_flags. The radix tree stores its GFP flags in the xa_flags,
+ * and we remain compatible with that.
+ */
+#define XA_FLAGS_TAG(tag) ((__force gfp_t)((1U << __GFP_BITS_SHIFT) << \
+ (__force unsigned)(tag)))
+
/**
* struct xarray - The anchor of the XArray.
* @xa_lock: Lock that protects the contents of the XArray.
@@ -187,6 +202,9 @@ struct xarray {

void xa_init_flags(struct xarray *, gfp_t flags);
void *xa_load(struct xarray *, unsigned long index);
+bool xa_get_tag(struct xarray *, unsigned long index, xa_tag_t);
+void xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
+void xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);

/**
* xa_init() - Initialise an empty XArray.
@@ -199,6 +217,18 @@ static inline void xa_init(struct xarray *xa)
xa_init_flags(xa, 0);
}

+/**
+ * xa_tagged() - Inquire whether any entry in this array has a tag set
+ * @xa: Array
+ * @tag: Tag value
+ *
+ * Return: %true if any entry has this tag set.
+ */
+static inline bool xa_tagged(const struct xarray *xa, xa_tag_t tag)
+{
+ return xa->xa_flags & XA_FLAGS_TAG(tag);
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -211,6 +241,12 @@ static inline void xa_init(struct xarray *xa)
#define xa_unlock_irqrestore(xa, flags) \
spin_unlock_irqrestore(&(xa)->xa_lock, flags)

+/*
+ * Versions of the normal API which require the caller to hold the xa_lock.
+ */
+void __xa_set_tag(struct xarray *, unsigned long index, xa_tag_t);
+void __xa_clear_tag(struct xarray *, unsigned long index, xa_tag_t);
+
/* Everything below here is the Advanced API. Proceed with caution. */

/*
@@ -504,6 +540,10 @@ static inline bool xas_retry(struct xa_state *xas, const void *entry)

void *xas_load(struct xa_state *);

+bool xas_get_tag(const struct xa_state *, xa_tag_t);
+void xas_set_tag(const struct xa_state *, xa_tag_t);
+void xas_clear_tag(const struct xa_state *, xa_tag_t);
+
/**
* xas_reload() - Refetch an entry from the xarray.
* @xas: XArray operation state.
diff --git a/lib/xarray.c b/lib/xarray.c
index 83b9c25de415..59b57e6f80de 100644
--- a/lib/xarray.c
+++ b/lib/xarray.c
@@ -5,6 +5,7 @@
* Author: Matthew Wilcox <[email protected]>
*/

+#include <linux/bitmap.h>
#include <linux/export.h>
#include <linux/xarray.h>

@@ -24,6 +25,55 @@
* @entry refers to something stored in a slot in the xarray
*/

+static inline struct xa_node *xa_parent(struct xarray *xa,
+ const struct xa_node *node)
+{
+ return rcu_dereference_check(node->parent,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+static inline struct xa_node *xa_parent_locked(struct xarray *xa,
+ const struct xa_node *node)
+{
+ return rcu_dereference_protected(node->parent,
+ lockdep_is_held(&xa->xa_lock));
+}
+
+static inline void xa_tag_set(struct xarray *xa, xa_tag_t tag)
+{
+ if (!(xa->xa_flags & XA_FLAGS_TAG(tag)))
+ xa->xa_flags |= XA_FLAGS_TAG(tag);
+}
+
+static inline void xa_tag_clear(struct xarray *xa, xa_tag_t tag)
+{
+ if (xa->xa_flags & XA_FLAGS_TAG(tag))
+ xa->xa_flags &= ~(XA_FLAGS_TAG(tag));
+}
+
+static inline bool node_get_tag(const struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ return test_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline void node_set_tag(struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ __set_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline void node_clear_tag(struct xa_node *node, unsigned int offset,
+ xa_tag_t tag)
+{
+ __clear_bit(offset, node->tags[(__force unsigned)tag]);
+}
+
+static inline bool node_any_tag(struct xa_node *node, xa_tag_t tag)
+{
+ return !bitmap_empty(node->tags[(__force unsigned)tag], XA_CHUNK_SIZE);
+}
+
/* extracts the offset within this node from the index */
static unsigned int get_offset(unsigned long index, struct xa_node *node)
{
@@ -118,6 +168,85 @@ void *xas_load(struct xa_state *xas)
}
EXPORT_SYMBOL_GPL(xas_load);

+/**
+ * xas_get_tag() - Returns the state of this tag.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Return: true if the tag is set, false if the tag is clear or @xas
+ * is in an error state.
+ */
+bool xas_get_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ if (xas_invalid(xas))
+ return false;
+ if (!xas->xa_node)
+ return xa_tagged(xas->xa, tag);
+ return node_get_tag(xas->xa_node, xas->xa_offset, tag);
+}
+EXPORT_SYMBOL_GPL(xas_get_tag);
+
+/**
+ * xas_set_tag() - Sets the tag on this entry and its parents.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Sets the specified tag on this entry, and walks up the tree setting it
+ * on all the ancestor entries. Does nothing if @xas has not been walked to
+ * an entry, or is in an error state.
+ */
+void xas_set_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset = xas->xa_offset;
+
+ if (xas_invalid(xas))
+ return;
+
+ while (node) {
+ if (node_get_tag(node, offset, tag))
+ return;
+ node_set_tag(node, offset, tag);
+ offset = node->offset;
+ node = xa_parent_locked(xas->xa, node);
+ }
+
+ if (!xa_tagged(xas->xa, tag))
+ xa_tag_set(xas->xa, tag);
+}
+EXPORT_SYMBOL_GPL(xas_set_tag);
+
+/**
+ * xas_clear_tag() - Clears the tag on this entry and its parents.
+ * @xas: XArray operation state.
+ * @tag: Tag number.
+ *
+ * Clears the specified tag on this entry, and walks back to the head
+ * attempting to clear it on all the ancestor entries. Does nothing if
+ * @xas has not been walked to an entry, or is in an error state.
+ */
+void xas_clear_tag(const struct xa_state *xas, xa_tag_t tag)
+{
+ struct xa_node *node = xas->xa_node;
+ unsigned int offset = xas->xa_offset;
+
+ if (xas_invalid(xas))
+ return;
+
+ while (node) {
+ node_clear_tag(node, offset, tag);
+ if (node_any_tag(node, tag))
+ return;
+
+ offset = node->offset;
+ node = xa_parent_locked(xas->xa, node);
+ }
+
+ if (xa_tagged(xas->xa, tag))
+ xa_tag_clear(xas->xa, tag);
+}
+EXPORT_SYMBOL_GPL(xas_clear_tag);
+
/**
* xa_init_flags() - Initialise an empty XArray with flags.
* @xa: XArray.
@@ -157,6 +286,106 @@ void *xa_load(struct xarray *xa, unsigned long index)
}
EXPORT_SYMBOL(xa_load);

+/**
+ * __xa_set_tag() - Set this tag on this entry while locked.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Attempting to set a tag on a NULL entry does not succeed.
+ * This function expects the xa_lock to be held on entry.
+ */
+void __xa_set_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry = xas_load(&xas);
+
+ if (entry)
+ xas_set_tag(&xas, tag);
+}
+EXPORT_SYMBOL_GPL(__xa_set_tag);
+
+/**
+ * __xa_clear_tag() - Clear this tag on this entry while locked.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * This function expects the xa_lock to be held on entry.
+ */
+void __xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry = xas_load(&xas);
+
+ if (entry)
+ xas_clear_tag(&xas, tag);
+}
+EXPORT_SYMBOL_GPL(__xa_clear_tag);
+
+/**
+ * xa_get_tag() - Inquire whether this tag is set on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * This function uses the RCU read lock, so the result may be out of date
+ * by the time it returns. If you need the result to be stable, use a lock.
+ *
+ * Return: True if the entry at @index has this tag set, false if it doesn't.
+ */
+bool xa_get_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ XA_STATE(xas, xa, index);
+ void *entry;
+
+ rcu_read_lock();
+ entry = xas_start(&xas);
+ while (xas_get_tag(&xas, tag)) {
+ if (!xa_is_node(entry))
+ goto found;
+ entry = xas_descend(&xas, xa_to_node(entry));
+ }
+ rcu_read_unlock();
+ return false;
+ found:
+ rcu_read_unlock();
+ return true;
+}
+EXPORT_SYMBOL(xa_get_tag);
+
+/**
+ * xa_set_tag() - Set this tag on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Attempting to set a tag on a NULL entry does not succeed.
+ */
+void xa_set_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ xa_lock(xa);
+ __xa_set_tag(xa, index, tag);
+ xa_unlock(xa);
+}
+EXPORT_SYMBOL(xa_set_tag);
+
+/**
+ * xa_clear_tag() - Clear this tag on this entry.
+ * @xa: XArray.
+ * @index: Index of entry.
+ * @tag: Tag number.
+ *
+ * Clearing a tag always succeeds.
+ */
+void xa_clear_tag(struct xarray *xa, unsigned long index, xa_tag_t tag)
+{
+ xa_lock(xa);
+ __xa_clear_tag(xa, index, tag);
+ xa_unlock(xa);
+}
+EXPORT_SYMBOL(xa_clear_tag);
+
#ifdef XA_DEBUG
void xa_dump_node(const struct xa_node *node)
{
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index 34fed5c38da2..85a009001109 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -10,6 +10,12 @@
#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
#define spin_lock_init(x) pthread_mutex_init(x, NULL);

+#define spin_lock(x) pthread_mutex_lock(x)
+#define spin_unlock(x) pthread_mutex_unlock(x)
+#define spin_lock_bh(x) pthread_mutex_lock(x)
+#define spin_unlock_bh(x) pthread_mutex_unlock(x)
+#define spin_lock_irq(x) pthread_mutex_lock(x)
+#define spin_unlock_irq(x) pthread_mutex_unlock(x)
#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)

--
2.15.1


2018-01-17 21:17:02

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 06/99] xarray: Define struct xa_node

From: Matthew Wilcox <[email protected]>

This is a direct replacement for struct radix_tree_node. A couple of
struct members have changed name, so convert those. Use a #define so
that radix tree users continue to work without change.

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/radix-tree.h | 29 +++------------------
include/linux/xarray.h | 24 ++++++++++++++++++
lib/radix-tree.c | 48 +++++++++++++++++------------------
mm/workingset.c | 16 ++++++------
tools/testing/radix-tree/multiorder.c | 30 +++++++++++-----------
5 files changed, 74 insertions(+), 73 deletions(-)

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index c8a33e9e9a3c..f64beb9ba175 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -32,6 +32,7 @@

/* Keep unconverted code working */
#define radix_tree_root xarray
+#define radix_tree_node xa_node

/*
* The bottom two bits of the slot determine how the remaining bits in the
@@ -60,41 +61,17 @@ static inline bool radix_tree_is_internal_node(void *ptr)

/*** radix-tree API starts here ***/

-#define RADIX_TREE_MAX_TAGS 3
-
#define RADIX_TREE_MAP_SHIFT XA_CHUNK_SHIFT
#define RADIX_TREE_MAP_SIZE (1UL << RADIX_TREE_MAP_SHIFT)
#define RADIX_TREE_MAP_MASK (RADIX_TREE_MAP_SIZE-1)

-#define RADIX_TREE_TAG_LONGS \
- ((RADIX_TREE_MAP_SIZE + BITS_PER_LONG - 1) / BITS_PER_LONG)
+#define RADIX_TREE_MAX_TAGS XA_MAX_TAGS
+#define RADIX_TREE_TAG_LONGS XA_TAG_LONGS

#define RADIX_TREE_INDEX_BITS (8 /* CHAR_BIT */ * sizeof(unsigned long))
#define RADIX_TREE_MAX_PATH (DIV_ROUND_UP(RADIX_TREE_INDEX_BITS, \
RADIX_TREE_MAP_SHIFT))

-/*
- * @count is the count of every non-NULL element in the ->slots array
- * whether that is a data entry, a retry entry, a user pointer,
- * a sibling entry or a pointer to the next level of the tree.
- * @exceptional is the count of every element in ->slots which is
- * either a data entry or a sibling entry for data.
- */
-struct radix_tree_node {
- unsigned char shift; /* Bits remaining in each slot */
- unsigned char offset; /* Slot offset in parent */
- unsigned char count; /* Total entry count */
- unsigned char exceptional; /* Exceptional entry count */
- struct radix_tree_node *parent; /* Used when ascending tree */
- struct radix_tree_root *root; /* The tree we belong to */
- union {
- struct list_head private_list; /* For tree user */
- struct rcu_head rcu_head; /* Used when freeing node */
- };
- void __rcu *slots[RADIX_TREE_MAP_SIZE];
- unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
-};
-
/* The IDR tag is stored in the low bits of xa_flags */
#define ROOT_IS_IDR ((__force gfp_t)4)
/* The top bits of xa_flags are used to store the root tags */
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 3d2f1fafb7ec..3d5f7804ef45 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -187,6 +187,30 @@ static inline void xa_init(struct xarray *xa)
#endif
#define XA_CHUNK_SIZE (1UL << XA_CHUNK_SHIFT)
#define XA_CHUNK_MASK (XA_CHUNK_SIZE - 1)
+#define XA_MAX_TAGS 3
+#define XA_TAG_LONGS DIV_ROUND_UP(XA_CHUNK_SIZE, BITS_PER_LONG)
+
+/*
+ * @count is the count of every non-NULL element in the ->slots array
+ * whether that is a value entry, a retry entry, a user pointer,
+ * a sibling entry or a pointer to the next level of the tree.
+ * @nr_values is the count of every element in ->slots which is
+ * either a value entry or a sibling entry to a value entry.
+ */
+struct xa_node {
+ unsigned char shift; /* Bits remaining in each slot */
+ unsigned char offset; /* Slot offset in parent */
+ unsigned char count; /* Total entry count */
+ unsigned char nr_values; /* Value entry count */
+ struct xa_node __rcu *parent; /* NULL at top of tree */
+ struct xarray *array; /* The array we belong to */
+ union {
+ struct list_head private_list; /* For tree user */
+ struct rcu_head rcu_head; /* Used when freeing node */
+ };
+ void __rcu *slots[XA_CHUNK_SIZE];
+ unsigned long tags[XA_MAX_TAGS][XA_TAG_LONGS];
+};

/* Private */
static inline bool xa_is_node(const void *entry)
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 126eeb06cfef..74a6ddd1d6ad 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -259,11 +259,11 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
{
unsigned long i;

- pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d exceptional %d\n",
+ pr_debug("radix node: %p offset %d indices %lu-%lu parent %p tags %lx %lx %lx shift %d count %d nr_values %d\n",
node, node->offset, index, index | node_maxindex(node),
node->parent,
node->tags[0][0], node->tags[1][0], node->tags[2][0],
- node->shift, node->count, node->exceptional);
+ node->shift, node->count, node->nr_values);

for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
unsigned long first = index | (i << node->shift);
@@ -353,7 +353,7 @@ static struct radix_tree_node *
radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
struct radix_tree_root *root,
unsigned int shift, unsigned int offset,
- unsigned int count, unsigned int exceptional)
+ unsigned int count, unsigned int nr_values)
{
struct radix_tree_node *ret = NULL;

@@ -400,9 +400,9 @@ radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent,
ret->shift = shift;
ret->offset = offset;
ret->count = count;
- ret->exceptional = exceptional;
+ ret->nr_values = nr_values;
ret->parent = parent;
- ret->root = root;
+ ret->array = root;
}
return ret;
}
@@ -632,8 +632,8 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
} else if (xa_is_value(entry)) {
- /* Moving an exceptional root->xa_head to a node */
- node->exceptional = 1;
+ /* Moving a value entry root->xa_head to a node */
+ node->nr_values = 1;
}
/*
* entry was already in the radix tree, so we do not need
@@ -919,12 +919,12 @@ static inline int insert_entries(struct radix_tree_node *node,
if (xa_is_node(old))
radix_tree_free_nodes(old);
if (xa_is_value(old))
- node->exceptional--;
+ node->nr_values--;
}
if (node) {
node->count += n;
if (xa_is_value(item))
- node->exceptional += n;
+ node->nr_values += n;
}
return n;
}
@@ -938,7 +938,7 @@ static inline int insert_entries(struct radix_tree_node *node,
if (node) {
node->count++;
if (xa_is_value(item))
- node->exceptional++;
+ node->nr_values++;
}
return 1;
}
@@ -1072,7 +1072,7 @@ void *radix_tree_lookup(const struct radix_tree_root *root, unsigned long index)
EXPORT_SYMBOL(radix_tree_lookup);

static inline void replace_sibling_entries(struct radix_tree_node *node,
- void __rcu **slot, int count, int exceptional)
+ void __rcu **slot, int count, int values)
{
#ifdef CONFIG_RADIX_TREE_MULTIORDER
unsigned offset = get_slot_offset(node, slot);
@@ -1085,21 +1085,21 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
node->slots[offset] = NULL;
node->count--;
}
- node->exceptional += exceptional;
+ node->nr_values += values;
}
#endif
}

static void replace_slot(void __rcu **slot, void *item,
- struct radix_tree_node *node, int count, int exceptional)
+ struct radix_tree_node *node, int count, int values)
{
if (WARN_ON_ONCE(radix_tree_is_internal_node(item)))
return;

- if (node && (count || exceptional)) {
+ if (node && (count || values)) {
node->count += count;
- node->exceptional += exceptional;
- replace_sibling_entries(node, slot, count, exceptional);
+ node->nr_values += values;
+ replace_sibling_entries(node, slot, count, values);
}

rcu_assign_pointer(*slot, item);
@@ -1153,17 +1153,17 @@ void __radix_tree_replace(struct radix_tree_root *root,
radix_tree_update_node_t update_node)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = !!xa_is_value(item) - !!xa_is_value(old);
+ int values = !!xa_is_value(item) - !!xa_is_value(old);
int count = calculate_count(root, node, slot, item, old);

/*
- * This function supports replacing exceptional entries and
+ * This function supports replacing value entries and
* deleting entries, but that needs accounting against the
* node unless the slot is root->xa_head.
*/
WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->xa_head) &&
- (count || exceptional));
- replace_slot(slot, item, node, count, exceptional);
+ (count || values));
+ replace_slot(slot, item, node, count, values);

if (!node)
return;
@@ -1185,7 +1185,7 @@ void __radix_tree_replace(struct radix_tree_root *root,
* across slot lookup and replacement.
*
* NOTE: This cannot be used to switch between non-entries (empty slots),
- * regular entries, and exceptional entries, as that requires accounting
+ * regular entries, and value entries, as that requires accounting
* inside the radix tree node. When switching from one type of entry or
* deleting, use __radix_tree_lookup() and __radix_tree_replace() or
* radix_tree_iter_replace().
@@ -1293,7 +1293,7 @@ int radix_tree_split(struct radix_tree_root *root, unsigned long index,
rcu_assign_pointer(parent->slots[end], RADIX_TREE_RETRY);
}
rcu_assign_pointer(parent->slots[offset], RADIX_TREE_RETRY);
- parent->exceptional -= (end - offset);
+ parent->nr_values -= (end - offset);

if (order == parent->shift)
return 0;
@@ -1953,7 +1953,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
struct radix_tree_node *node, void __rcu **slot)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = xa_is_value(old) ? -1 : 0;
+ int values = xa_is_value(old) ? -1 : 0;
unsigned offset = get_slot_offset(node, slot);
int tag;

@@ -1963,7 +1963,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
node_tag_clear(root, node, tag, offset);

- replace_slot(slot, NULL, node, -1, exceptional);
+ replace_slot(slot, NULL, node, -1, values);
return node && delete_node(root, node, NULL);
}

diff --git a/mm/workingset.c b/mm/workingset.c
index 3afeb84720f4..91b6e16ad4c1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -348,7 +348,7 @@ void workingset_update_node(struct radix_tree_node *node)
* already where they should be. The list_empty() test is safe
* as node->private_list is protected by mapping->pages.xa_lock.
*/
- if (node->count && node->count == node->exceptional) {
+ if (node->count && node->count == node->nr_values) {
if (list_empty(&node->private_list))
list_lru_add(&shadow_nodes, &node->private_list);
} else {
@@ -427,8 +427,8 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* to reclaim, take the node off-LRU, and drop the lru_lock.
*/

- node = container_of(item, struct radix_tree_node, private_list);
- mapping = container_of(node->root, struct address_space, pages);
+ node = container_of(item, struct xa_node, private_list);
+ mapping = container_of(node->array, struct address_space, pages);

/* Coming from the list, invert the lock order */
if (!xa_trylock(&mapping->pages)) {
@@ -445,25 +445,25 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
* no pages, so we expect to be able to remove them all and
* delete and free the empty node afterwards.
*/
- if (WARN_ON_ONCE(!node->exceptional))
+ if (WARN_ON_ONCE(!node->nr_values))
goto out_invalid;
- if (WARN_ON_ONCE(node->count != node->exceptional))
+ if (WARN_ON_ONCE(node->count != node->nr_values))
goto out_invalid;
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
if (node->slots[i]) {
if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
goto out_invalid;
- if (WARN_ON_ONCE(!node->exceptional))
+ if (WARN_ON_ONCE(!node->nr_values))
goto out_invalid;
if (WARN_ON_ONCE(!mapping->nrexceptional))
goto out_invalid;
node->slots[i] = NULL;
- node->exceptional--;
+ node->nr_values--;
node->count--;
mapping->nrexceptional--;
}
}
- if (WARN_ON_ONCE(node->exceptional))
+ if (WARN_ON_ONCE(node->nr_values))
goto out_invalid;
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
__radix_tree_delete_node(&mapping->pages, node,
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 24293a2fd82d..ed51edc008fd 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -392,7 +392,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
radix_tree_insert(&tree, 1 << order2, xa_mk_value(5));
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
assert(item2 == xa_mk_value(5));
- assert(node->exceptional == 1);
+ assert(node->nr_values == 1);

item2 = radix_tree_lookup(&tree, 0);
free(item2);
@@ -400,7 +400,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
radix_tree_join(&tree, 0, order1, item1);
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
assert(item2 == item1);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
item_kill_tree(&tree);
}

@@ -408,7 +408,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
* This test revealed an accounting bug for inline data entries at one point.
* Nodes were being freed back into the pool with an elevated exception count
* by radix_tree_join() and then radix_tree_split() was failing to zero the
- * count of exceptional entries.
+ * count of value entries.
*/
static void multiorder_join3(unsigned int order)
{
@@ -432,7 +432,7 @@ static void multiorder_join3(unsigned int order)
}

__radix_tree_lookup(&tree, 0, &node, NULL);
- assert(node->exceptional == node->count);
+ assert(node->nr_values == node->count);

item_kill_tree(&tree);
}
@@ -519,7 +519,7 @@ static void __multiorder_split2(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);

radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -529,7 +529,7 @@ static void __multiorder_split2(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item != xa_mk_value(5));
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);

item_kill_tree(&tree);
}
@@ -546,7 +546,7 @@ static void __multiorder_split3(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);

radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -555,7 +555,7 @@ static void __multiorder_split3(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(7));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);

item_kill_tree(&tree);

@@ -563,7 +563,7 @@ static void __multiorder_split3(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 0, &node, NULL);
assert(item == xa_mk_value(5));
- assert(node->exceptional > 0);
+ assert(node->nr_values > 0);

radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
@@ -576,13 +576,13 @@ static void __multiorder_split3(int old_order, int new_order)

item = __radix_tree_lookup(&tree, 1 << new_order, &node, NULL);
assert(item == xa_mk_value(7));
- assert(node->count == node->exceptional);
+ assert(node->count == node->nr_values);
do {
node = node->parent;
if (!node)
break;
assert(node->count == 1);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);
} while (1);

item_kill_tree(&tree);
@@ -610,15 +610,15 @@ static void multiorder_account(void)

__radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 0, &node, NULL);
- assert(node->count == node->exceptional * 2);
+ assert(node->count == node->nr_values * 2);
radix_tree_delete(&tree, 1 << 5);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);

__radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 1 << 5, &node, &slot);
- assert(node->count == node->exceptional * 2);
+ assert(node->count == node->nr_values * 2);
__radix_tree_replace(&tree, node, slot, NULL, NULL);
- assert(node->exceptional == 0);
+ assert(node->nr_values == 0);

item_kill_tree(&tree);
}
--
2.15.1


2018-01-17 21:17:24

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 07/99] xarray: Add documentation

From: Matthew Wilcox <[email protected]>

This is documentation on how to use the XArray, not details about its
internal implementation.

Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/core-api/index.rst | 1 +
Documentation/core-api/xarray.rst | 361 ++++++++++++++++++++++++++++++++++++++
2 files changed, 362 insertions(+)
create mode 100644 Documentation/core-api/xarray.rst

diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index d5bbe035316d..eb16ba30aeb6 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -18,6 +18,7 @@ Core utilities
local_ops
workqueue
genericirq
+ xarray
flexible-arrays
librs
genalloc
diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
new file mode 100644
index 000000000000..914999c0bf3f
--- /dev/null
+++ b/Documentation/core-api/xarray.rst
@@ -0,0 +1,361 @@
+.. SPDX-License-Identifier: CC-BY-SA-4.0
+
+======
+XArray
+======
+
+:Author: Matthew Wilcox
+
+Overview
+========
+
+The XArray is an abstract data type which behaves like a very large array
+of pointers. It meets many of the same needs as a hash or a conventional
+resizable array. Unlike a hash, it allows you to sensibly go to the
+next or previous entry in a cache-efficient manner. In contrast to
+a resizable array, there is no need for copying data or changing MMU
+mappings in order to grow the array. It is more memory-efficient,
+parallelisable and cache friendly than a doubly-linked list. It takes
+advantage of RCU to perform lookups without locking.
+
+The XArray implementation is efficient when the indices used are densely
+clustered; hashing the object and using the hash as the index will not
+perform well. The XArray is optimised for small indices, but still has
+good performance with large indices. If your index can be larger than
+``ULONG_MAX`` then the XArray is not the data type for you. The most
+important user of the XArray is the page cache.
+
+A freshly-initialised XArray contains a ``NULL`` pointer at every index.
+Each non-``NULL`` entry in the array has three bits associated with it
+called tags. Each tag may be set or cleared independently of the others.
+You can iterate over entries which are tagged.
+
+Normal pointers may be stored in the XArray directly. They must be 4-byte
+aligned, which is true for any pointer returned from :c:func:`kmalloc` and
+:c:func:`alloc_page`. It isn't true for arbitrary user-space pointers,
+nor for function pointers. You can store pointers to statically allocated
+objects, as long as those objects have an alignment of at least 4.
+
+You can also store integers between 0 and ``LONG_MAX`` in the XArray.
+You must first convert them into entries using :c:func:`xa_mk_value`.
+When you retrieve an entry from the XArray, you can check whether it is
+a value entry by calling :c:func:`xa_is_value`, and convert it back to
+an integer by calling :c:func:`xa_to_value`.
+
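+For example, a short sketch of round-tripping a small integer, assuming
+``xa`` is an initialised XArray and ``index`` is arbitrary::
+
+    xa_store(&xa, index, xa_mk_value(42), GFP_KERNEL);
+
+    entry = xa_load(&xa, index);
+    if (xa_is_value(entry))
+        value = xa_to_value(entry);    /* value is now 42 */
+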
+The XArray does not support storing :c:func:`IS_ERR` pointers as some
+conflict with value entries or internal entries.
+
+An unusual feature of the XArray is the ability to create entries which
+occupy a range of indices. Once stored to, looking up any index in
+the range will return the same entry as looking up any other index in
+the range. Setting a tag on one index will set it on all of them.
+Storing to any index will store to all of them. Multi-index entries can
+be explicitly split into smaller entries; alternatively, storing ``NULL``
+at any index will cause the XArray to forget about the range.
+
+Normal API
+==========
+
+Start by initialising an XArray, either with :c:func:`DEFINE_XARRAY`
+for statically allocated XArrays or :c:func:`xa_init` for dynamically
+allocated ones.
+
+You can then set entries using :c:func:`xa_store` and get entries
+using :c:func:`xa_load`. :c:func:`xa_store` will overwrite any entry with the
+new entry and return the previous entry stored at that index. You can
+use :c:func:`xa_erase` instead of calling :c:func:`xa_store` with a
+``NULL`` entry. There is no difference between an entry that has never
+been stored to and one that has most recently had ``NULL`` stored to it.
+
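+A minimal sketch (``struct thing``, the function names and the id scheme
+are illustrative; error handling is covered in the Memory allocation
+section below)::
+
+    DEFINE_XARRAY(things);
+
+    void thing_add(struct thing *thing, unsigned long id)
+    {
+        xa_store(&things, id, thing, GFP_KERNEL);
+    }
+
+    struct thing *thing_get(unsigned long id)
+    {
+        return xa_load(&things, id);
+    }
+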
+You can conditionally replace an entry at an index by using
+:c:func:`xa_cmpxchg`. Like :c:func:`cmpxchg`, it will only succeed if
+the entry at that index has the 'old' value. It also returns the entry
+which was at that index; if it returns the same entry which was passed as
+'old', then :c:func:`xa_cmpxchg` succeeded.
+
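+A sketch of that success check, where ``expected`` and ``replacement``
+are illustrative::
+
+    void *prev = xa_cmpxchg(&xa, index, expected, replacement, GFP_KERNEL);
+
+    if (prev == expected) {
+        /* the exchange happened */
+    } else {
+        /* prev is whatever blocked it, or an error entry */
+    }
+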
+If you only want to store a new entry at an index when the current entry
+at that index is ``NULL``, you can use :c:func:`xa_insert`, which
+returns ``-EEXIST`` if the entry is not empty.
+
+Calling :c:func:`xa_reserve` ensures that there is enough memory allocated
+to store an entry at the specified index. This is not normally needed,
+but some users have a complicated locking scheme.
+
+You can enquire whether a tag is set on an entry by using
+:c:func:`xa_get_tag`. If the entry is not ``NULL``, you can set a tag
+on it by using :c:func:`xa_set_tag` and remove the tag from an entry by
+calling :c:func:`xa_clear_tag`. You can ask whether any entry in the
+XArray has a particular tag set by calling :c:func:`xa_tagged`.
+
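+A sketch, assuming the first tag is named ``XA_TAG_0`` in this series::
+
+    xa_set_tag(&xa, index, XA_TAG_0);    /* no-op if the entry is NULL */
+
+    if (xa_get_tag(&xa, index, XA_TAG_0)) {
+        /* the entry at index is present and tagged */
+    }
+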
+You can copy entries out of the XArray into a plain array by calling
+:c:func:`xa_extract`. Or you can iterate over the present entries in
+the XArray by calling :c:func:`xa_for_each`. You may prefer to use
+:c:func:`xa_find` or :c:func:`xa_find_after` to move to the next present
+entry in the XArray.
+
+Finally, you can remove all entries from an XArray by calling
+:c:func:`xa_destroy`. If the XArray entries are pointers, you may wish
+to free the entries first. You can do this by iterating over all present
+entries in the XArray using the :c:func:`xa_for_each` iterator.
+
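+A sketch of tearing down an array of :c:func:`kmalloc`-ed objects,
+assuming this series' five-argument :c:func:`xa_for_each` (array, entry,
+index, maximum and filter) and no concurrent users::
+
+    unsigned long index;
+    void *entry;
+
+    xa_for_each(&xa, entry, index, ULONG_MAX, XA_PRESENT)
+        kfree(entry);
+    xa_destroy(&xa);
+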
+Memory allocation
+-----------------
+
+The :c:func:`xa_store`, :c:func:`xa_cmpxchg`, :c:func:`xa_reserve`
+and :c:func:`xa_insert` functions take a gfp_t parameter in case
+the XArray needs to allocate memory to store this entry. If the entry
+being stored is ``NULL``, no memory allocation needs to be performed,
+and the GFP flags specified will be ignored.
+
+It is possible for no memory to be allocatable, particularly if you pass
+a restrictive set of GFP flags. In that case, the functions return a
+special value which can be turned into an errno using :c:func:`xa_err`.
+If you don't need to know exactly which error occurred, using
+:c:func:`xa_is_err` is slightly more efficient.
+
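+A sketch of turning a failed store into an errno::
+
+    void *old = xa_store(&xa, index, entry, GFP_KERNEL);
+
+    if (xa_is_err(old))
+        return xa_err(old);    /* most likely -ENOMEM */
+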
+Locking
+-------
+
+When using the Normal API, you do not have to worry about locking.
+The XArray uses RCU and an internal spinlock to synchronise access:
+
+No lock needed:
+ * :c:func:`xa_empty`
+ * :c:func:`xa_tagged`
+
+Takes RCU read lock:
+ * :c:func:`xa_load`
+ * :c:func:`xa_for_each`
+ * :c:func:`xa_find`
+ * :c:func:`xa_find_after`
+ * :c:func:`xa_extract`
+ * :c:func:`xa_get_tag`
+
+Takes xa_lock internally:
+ * :c:func:`xa_store`
+ * :c:func:`xa_insert`
+ * :c:func:`xa_erase`
+ * :c:func:`xa_cmpxchg`
+ * :c:func:`xa_reserve`
+ * :c:func:`xa_destroy`
+ * :c:func:`xa_set_tag`
+ * :c:func:`xa_clear_tag`
+
+Assumes xa_lock held on entry:
+ * :c:func:`__xa_store`
+ * :c:func:`__xa_insert`
+ * :c:func:`__xa_erase`
+ * :c:func:`__xa_cmpxchg`
+ * :c:func:`__xa_set_tag`
+ * :c:func:`__xa_clear_tag`
+
+If you want to take advantage of the lock to protect the data structures
+that you are storing in the XArray, you can call :c:func:`xa_lock`
+before calling :c:func:`xa_load`, then take a reference count on the
+object you have found before calling :c:func:`xa_unlock`. This will
+prevent stores from removing the object from the array between looking
+up the object and incrementing the refcount. You can also use RCU to
+avoid dereferencing freed memory, but an explanation of that is beyond
+the scope of this document.
+
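+A sketch of that pattern, where ``thing_get()`` stands in for whatever
+reference counting your objects use::
+
+    xa_lock(&xa);
+    thing = xa_load(&xa, index);
+    if (thing)
+        thing_get(thing);
+    xa_unlock(&xa);
+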
+The XArray does not disable interrupts or softirqs while modifying
+the array. It is safe to read the XArray from interrupt or softirq
+context as the RCU lock provides enough protection.
+
+If, for example, you want to store entries in the XArray in process
+context and then erase them in softirq context, you can do that this way::
+
+ foo_init(struct foo *foo)
+ {
+ xa_init_flags(&foo->array, XA_FLAGS_LOCK_BH);
+ }
+
+ foo_store(struct foo *foo, unsigned long index, void *entry)
+ {
+ xa_lock_bh(&foo->array);
+ __xa_store(&foo->array, index, entry, GFP_KERNEL);
+ foo->count++;
+ xa_unlock_bh(&foo->array);
+ }
+
+ /* foo_erase() is only called from softirq context */
+ foo_erase(struct foo *foo, unsigned long index)
+ {
+ xa_erase(&foo->array, index);
+ }
+
+If you are going to modify the XArray from interrupt or softirq context,
+you need to initialise the array using :c:func:`xa_init_flags`, passing
+``XA_FLAGS_LOCK_IRQ`` or ``XA_FLAGS_LOCK_BH``.
+
+The above example also shows a common pattern of wanting to extend the
+coverage of the xa_lock on the store side to protect some statistics
+associated with the array.
+
+Sharing the XArray with interrupt context is also possible, either
+using :c:func:`xa_lock_irqsave` in both the interrupt handler and process
+context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
+in the interrupt handler.
+
+Sometimes you need to protect access to the XArray with a mutex because
+that lock sits above another mutex in the locking hierarchy. That does
+not entitle you to use functions like :c:func:`__xa_erase` without taking
+the xa_lock; the xa_lock is used for lockdep validation and will be used
+for other purposes in the future.
+
+The :c:func:`__xa_set_tag` and :c:func:`__xa_clear_tag` functions are also
+available for situations where you look up an entry and want to atomically
+set or clear a tag. It may be more efficient to use the advanced API
+in this case, as it will save you from walking the tree twice.
+
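+A sketch of the lookup-and-tag case, where ``entry_needs_flush()`` is a
+stand-in predicate and ``XA_TAG_0`` is assumed from this series::
+
+    xa_lock(&xa);
+    entry = xa_load(&xa, index);
+    if (entry && entry_needs_flush(entry))
+        __xa_set_tag(&xa, index, XA_TAG_0);
+    xa_unlock(&xa);
+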
+Advanced API
+============
+
+The advanced API offers more flexibility and better performance at the
+cost of an interface which can be harder to use and has fewer safeguards.
+No locking is done for you by the advanced API, and you are required
+to use the xa_lock while modifying the array. You can choose whether
+to use the xa_lock or the RCU lock while doing read-only operations on
+the array. You can mix advanced and normal operations on the same array;
+indeed the normal API is implemented in terms of the advanced API. The
+advanced API is only available to modules with a GPL-compatible license.
+
+The advanced API is based around the xa_state. This is an opaque data
+structure which you declare on the stack using the :c:func:`XA_STATE`
+macro. This macro initialises the xa_state ready to start walking
+around the XArray. It is used as a cursor to maintain the position
+in the XArray and let you compose various operations together without
+having to restart from the top every time.
+
+The xa_state is also used to store errors. You can call
+:c:func:`xas_error` to retrieve the error. All operations check whether
+the xa_state is in an error state before proceeding, so there's no need
+for you to check for an error after each call; you can make multiple
+calls in succession and only check at a convenient point. The only
+errors currently generated by the XArray code itself are ``ENOMEM`` and
+``EINVAL``, but it supports arbitrary errors in case you want to call
+:c:func:`xas_set_err` yourself.
+
+If the xa_state is holding an ``ENOMEM`` error, calling :c:func:`xas_nomem`
+will attempt to allocate more memory using the specified gfp flags and
+cache it in the xa_state for the next attempt. The idea is that you take
+the xa_lock, attempt the operation and drop the lock. The operation
+attempts to allocate memory while holding the lock, but it is more
+likely to fail. Once you have dropped the lock, :c:func:`xas_nomem`
+can try harder to allocate more memory. It will return ``true`` if it
+is worth retrying the operation (i.e. that there was a memory error *and*
+more memory was allocated). If it has previously allocated memory and
+that memory wasn't used, and there is either no error or an error other
+than ``ENOMEM``, it will free the previously allocated memory.
+
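+A sketch of that pattern, which is essentially how the normal API
+implements :c:func:`xa_store`::
+
+    XA_STATE(xas, &xa, index);
+
+    do {
+        xa_lock(&xa);
+        xas_store(&xas, entry);
+        xa_unlock(&xa);
+    } while (xas_nomem(&xas, GFP_KERNEL));
+
+    return xas_error(&xas);
+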
+Internal Entries
+----------------
+
+The XArray reserves some entries for its own purposes. These are never
+exposed through the normal API, but when using the advanced API, it's
+possible to see them. Usually the best way to handle them is to pass them
+to :c:func:`xas_retry`, and retry the operation if it returns ``true``.
+
+.. flat-table::
+ :widths: 1 1 6
+
+ * - Name
+ - Test
+ - Usage
+
+ * - Node
+ - :c:func:`xa_is_node`
+ - An XArray node. Should never be visible; all functions should recurse
+ into an XArray node.
+
+ * - Sibling
+ - :c:func:`xa_is_sibling`
+ - A non-canonical entry for a multi-index entry. The value indicates
+ which slot in this node has the canonical entry.
+
+ * - Retry
+ - :c:func:`xa_is_retry`
+ - This entry is currently being modified by a thread which has the
+ xa_lock. The node containing this entry may be freed at the end of
+ this RCU period. You should restart the lookup from the head of the
+ array.
+
+Other internal entries may be added in the future. As far as possible, they
+will be handled by :c:func:`xas_retry`.
+
+Additional functionality
+------------------------
+
+The :c:func:`xas_create` function ensures that there is somewhere in the
+XArray to store an entry. It stores ``ENOMEM`` in the xa_state if it
+cannot allocate memory. You do not normally need to call this function
+yourself as it is called by :c:func:`xas_store`.
+
+You can use :c:func:`xas_init_tags` to reset the tags on an entry
+to their default state. This is usually all tags clear, unless the
+XArray is marked with ``XA_FLAGS_TRACK_FREE``, in which case tag 0 is set
+and all other tags are clear. Replacing one entry with another using
+:c:func:`xas_store` will not reset the tags on that entry; if you want
+the tags reset, you should do that explicitly.
+
+The :c:func:`xas_load` function will walk the xa_state as close to the entry
+as it can. If you know the xa_state has already been walked to the
+entry and need to check that the entry hasn't changed, you can use
+:c:func:`xas_reload` to save a function call.
+
+If you need to move to a different index in the XArray, call
+:c:func:`xas_set`. This reinitialises the cursor, which will generally
+have the effect of making the next operation walk the cursor to the
+desired spot in the tree. If you want to move to the next or previous
+index, call :c:func:`xas_next` or :c:func:`xas_prev`. Setting the index
+does not walk the cursor around the array, so it does not require a lock
+to be held; moving to the next or previous index does.
+
+You can create a multi-index entry by using :c:func:`xas_set_order`.
+If a load or find operation finds a multi-index entry, the index in the
+xa_state will be the one searched for, and not necessarily the
+lowest or highest index used by the entry.
+Currently the only multi-index entries supported are powers
+of two, but there are two potential users of arbitrary ranges, so that
+functionality may be added soon.
+
+You can search for the next present entry using :c:func:`xas_find`. This
+is the equivalent of both :c:func:`xa_find` and :c:func:`xa_find_after`;
+if the cursor has been walked to an entry, then it will find the next
+entry after the one currently referenced. If not, it will return the
+entry at the index of the xa_state. Using :c:func:`xas_next_entry` to
+move to the next present entry instead of :c:func:`xas_find` will save
+a function call in the majority of cases at the expense of emitting more
+inline code.
+
+The :c:func:`xas_find_tag` function is similar. If the xa_state has
+already been walked, it returns the first tagged entry after the one
+the xa_state references; if not, it returns the entry at the index of
+the xa_state, provided that entry is tagged. The :c:func:`xas_next_tag`
+function is the equivalent of :c:func:`xas_next_entry`.
+
+When iterating over a range of the XArray using :c:func:`xas_for_each`
+or :c:func:`xas_for_each_tag`, it may be necessary to temporarily stop
+the iteration. The :c:func:`xas_pause` function exists for this purpose.
+After you have done the necessary work and wish to resume, the xa_state
+is in an appropriate state to continue the iteration after the entry
+you last processed. If you have interrupts disabled while iterating,
+then it is good manners to pause the iteration and reenable interrupts
+every ``XA_CHECK_SCHED`` entries.
+
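+A sketch of a pause-friendly iteration, where ``process()`` is a
+stand-in; if ``XA_CHECK_SCHED`` is not visible to your code, substitute
+a batch size of your own::
+
+    XA_STATE(xas, &xa, 0);
+    unsigned int seen = 0;
+    void *entry;
+
+    rcu_read_lock();
+    xas_for_each(&xas, entry, ULONG_MAX) {
+        if (xas_retry(&xas, entry))
+            continue;
+        process(entry);
+        if (++seen % XA_CHECK_SCHED)
+            continue;
+        xas_pause(&xas);
+        rcu_read_unlock();
+        cond_resched();
+        rcu_read_lock();
+    }
+    rcu_read_unlock();
+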
+The :c:func:`xas_get_tag`, :c:func:`xas_set_tag` and
+:c:func:`xas_clear_tag` functions require the xa_state cursor to have
+been moved to the appropriate location in the xarray; they will do
+nothing if you have called :c:func:`xas_pause` or :c:func:`xas_set`
+immediately before.
+
+You can call :c:func:`xas_set_update` to have a callback function
+called each time the XArray updates a node. This is used by the page
+cache workingset code to maintain its list of nodes which contain only
+shadow entries.
+
+Functions and structures
+========================
+
+.. kernel-doc:: include/linux/xarray.h
+.. kernel-doc:: lib/xarray.c
--
2.15.1


2018-01-17 21:18:07

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 03/99] xarray: Replace exceptional entries

From: Matthew Wilcox <[email protected]>

Introduce xarray value entries to replace the radix tree exceptional
entry code. This is a slight change in encoding to allow the use of an
extra bit (we can now store BITS_PER_LONG - 1 bits in a value entry).
It is also a change in emphasis; exceptional entries are intimidating
and different. As the comment explains, you can choose to store values
or pointers in the xarray and they are both first-class citizens.

Signed-off-by: Matthew Wilcox <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 4 +-
arch/powerpc/include/asm/nohash/64/pgtable.h | 4 +-
drivers/gpu/drm/i915/i915_gem.c | 17 ++--
drivers/staging/lustre/lustre/mdc/mdc_request.c | 2 +-
fs/btrfs/compression.c | 2 +-
fs/btrfs/inode.c | 4 +-
fs/dax.c | 107 ++++++++++++------------
fs/proc/task_mmu.c | 2 +-
include/linux/fs.h | 48 +++++++----
include/linux/radix-tree.h | 36 ++------
include/linux/swapops.h | 19 ++---
include/linux/xarray.h | 51 +++++++++++
lib/idr.c | 63 ++++++--------
lib/radix-tree.c | 21 ++---
mm/filemap.c | 10 +--
mm/khugepaged.c | 2 +-
mm/madvise.c | 2 +-
mm/memcontrol.c | 2 +-
mm/mincore.c | 2 +-
mm/readahead.c | 2 +-
mm/shmem.c | 10 +--
mm/swap.c | 2 +-
mm/truncate.c | 12 +--
mm/workingset.c | 12 ++-
tools/testing/radix-tree/idr-test.c | 6 +-
tools/testing/radix-tree/linux/radix-tree.h | 1 +
tools/testing/radix-tree/multiorder.c | 47 +++++------
tools/testing/radix-tree/test.c | 2 +-
28 files changed, 256 insertions(+), 236 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 44697817ccc6..5025c26f1acd 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -649,9 +649,7 @@ static inline bool pte_user(pte_t pte)
BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY); \
} while (0)
-/*
- * on pte we don't need handle RADIX_TREE_EXCEPTIONAL_SHIFT;
- */
+
#define SWP_TYPE_BITS 5
#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
& ((1UL << SWP_TYPE_BITS) - 1))
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index abddf5830ad5..f711773568d7 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -329,9 +329,7 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
*/ \
BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
} while (0)
-/*
- * on pte we don't need handle RADIX_TREE_EXCEPTIONAL_SHIFT;
- */
+
#define SWP_TYPE_BITS 5
#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
& ((1UL << SWP_TYPE_BITS) - 1))
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5cfba89ed586..25ce7bcf9988 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5369,7 +5369,8 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
count = __sg_page_count(sg);

while (idx + count <= n) {
- unsigned long exception, i;
+ void *entry;
+ unsigned long i;
int ret;

/* If we cannot allocate and insert this entry, or the
@@ -5384,12 +5385,9 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
if (ret && ret != -EEXIST)
goto scan;

- exception =
- RADIX_TREE_EXCEPTIONAL_ENTRY |
- idx << RADIX_TREE_EXCEPTIONAL_SHIFT;
+ entry = xa_mk_value(idx);
for (i = 1; i < count; i++) {
- ret = radix_tree_insert(&iter->radix, idx + i,
- (void *)exception);
+ ret = radix_tree_insert(&iter->radix, idx + i, entry);
if (ret && ret != -EEXIST)
goto scan;
}
@@ -5427,15 +5425,14 @@ i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
GEM_BUG_ON(!sg);

/* If this index is in the middle of multi-page sg entry,
- * the radixtree will contain an exceptional entry that points
+ * the radix tree will contain a value entry that points
* to the start of that range. We will return the pointer to
* the base page and the offset of this page within the
* sg entry's range.
*/
*offset = 0;
- if (unlikely(radix_tree_exception(sg))) {
- unsigned long base =
- (unsigned long)sg >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ if (unlikely(xa_is_value(sg))) {
+ unsigned long base = xa_to_value(sg);

sg = radix_tree_lookup(&iter->radix, base);
GEM_BUG_ON(!sg);
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 45dcf9f958d4..2ec79a6b17da 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -940,7 +940,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
xa_lock_irq(&mapping->pages);
found = radix_tree_gang_lookup(&mapping->pages,
(void **)&page, offset, 1);
- if (found > 0 && !radix_tree_exceptional_entry(page)) {
+ if (found > 0 && !xa_is_value(page)) {
struct lu_dirpage *dp;

get_page(page);
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 280717b26224..e687d06cd97c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -452,7 +452,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
rcu_read_lock();
page = radix_tree_lookup(&mapping->pages, pg_index);
rcu_read_unlock();
- if (page && !radix_tree_exceptional_entry(page)) {
+ if (page && !xa_is_value(page)) {
misses++;
if (misses > 4)
break;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 712ae1dd572b..dbdb5bf6bca1 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7578,8 +7578,8 @@ bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
}
/*
* Otherwise, shmem/tmpfs must be storing a swap entry
- * here as an exceptional entry: so return it without
- * attempting to raise page count.
+ * here so return it without attempting to raise page
+ * count.
*/
page = NULL;
break; /* TODO: Is this relevant for this use case? */
diff --git a/fs/dax.c b/fs/dax.c
index 0f3844bcf881..5097a606da1a 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -58,57 +58,57 @@ static int __init init_dax_wait_table(void)
fs_initcall(init_dax_wait_table);

/*
- * We use lowest available bit in exceptional entry for locking, one bit for
- * the entry size (PMD) and two more to tell us if the entry is a zero page or
- * an empty entry that is just used for locking. In total four special bits.
+ * DAX pagecache entries use XArray value entries so they can't be mistaken
+ * for pages. We use one bit for locking, one bit for the entry size (PMD)
+ * and two more to tell us if the entry is a zero page or an empty entry that
+ * is just used for locking. In total four special bits.
*
* If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
* and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
* block allocation.
*/
-#define RADIX_DAX_SHIFT (RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
-#define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
-#define RADIX_DAX_PMD (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
-#define RADIX_DAX_ZERO_PAGE (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
-#define RADIX_DAX_EMPTY (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
+#define DAX_SHIFT (4)
+#define DAX_ENTRY_LOCK (1UL << 0)
+#define DAX_PMD (1UL << 1)
+#define DAX_ZERO_PAGE (1UL << 2)
+#define DAX_EMPTY (1UL << 3)

static unsigned long dax_radix_sector(void *entry)
{
- return (unsigned long)entry >> RADIX_DAX_SHIFT;
+ return xa_to_value(entry) >> DAX_SHIFT;
}

static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
{
- return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
- ((unsigned long)sector << RADIX_DAX_SHIFT) |
- RADIX_DAX_ENTRY_LOCK);
+ return xa_mk_value(flags | ((unsigned long)sector << DAX_SHIFT) |
+ DAX_ENTRY_LOCK);
}

static unsigned int dax_radix_order(void *entry)
{
- if ((unsigned long)entry & RADIX_DAX_PMD)
+ if (xa_to_value(entry) & DAX_PMD)
return PMD_SHIFT - PAGE_SHIFT;
return 0;
}

static int dax_is_pmd_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_PMD;
+ return xa_to_value(entry) & DAX_PMD;
}

static int dax_is_pte_entry(void *entry)
{
- return !((unsigned long)entry & RADIX_DAX_PMD);
+ return !(xa_to_value(entry) & DAX_PMD);
}

static int dax_is_zero_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_ZERO_PAGE;
+ return xa_to_value(entry) & DAX_ZERO_PAGE;
}

static int dax_is_empty_entry(void *entry)
{
- return (unsigned long)entry & RADIX_DAX_EMPTY;
+ return xa_to_value(entry) & DAX_EMPTY;
}

/*
@@ -185,9 +185,9 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
*/
static inline int slot_locked(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
- return entry & RADIX_DAX_ENTRY_LOCK;
+ unsigned long entry = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ return entry & DAX_ENTRY_LOCK;
}

/*
@@ -195,12 +195,11 @@ static inline int slot_locked(struct address_space *mapping, void **slot)
*/
static inline void *lock_slot(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
-
- entry |= RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
- return (void *)entry;
+ unsigned long v = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ void *entry = xa_mk_value(v | DAX_ENTRY_LOCK);
+ radix_tree_replace_slot(&mapping->pages, slot, entry);
+ return entry;
}

/*
@@ -208,17 +207,16 @@ static inline void *lock_slot(struct address_space *mapping, void **slot)
*/
static inline void *unlock_slot(struct address_space *mapping, void **slot)
{
- unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
-
- entry &= ~(unsigned long)RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
- return (void *)entry;
+ unsigned long v = xa_to_value(
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock));
+ void *entry = xa_mk_value(v & ~DAX_ENTRY_LOCK);
+ radix_tree_replace_slot(&mapping->pages, slot, entry);
+ return entry;
}

/*
* Lookup entry in radix tree, wait for it to become unlocked if it is
- * exceptional entry and return it. The caller must call
+ * a DAX entry and return it. The caller must call
* put_unlocked_mapping_entry() when he decided not to lock the entry or
* put_locked_mapping_entry() when he locked the entry and now wants to
* unlock it.
@@ -239,7 +237,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
entry = __radix_tree_lookup(&mapping->pages, index, NULL,
&slot);
if (!entry ||
- WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
+ WARN_ON_ONCE(!xa_is_value(entry)) ||
!slot_locked(mapping, slot)) {
if (slotp)
*slotp = slot;
@@ -263,7 +261,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,

xa_lock_irq(&mapping->pages);
entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
- if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
+ if (WARN_ON_ONCE(!entry || !xa_is_value(entry) ||
!slot_locked(mapping, slot))) {
xa_unlock_irq(&mapping->pages);
return;
@@ -294,12 +292,11 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
}

/*
- * Find radix tree entry at given index. If it points to an exceptional entry,
- * return it with the radix tree entry locked. If the radix tree doesn't
- * contain given index, create an empty exceptional entry for the index and
- * return with it locked.
+ * Find radix tree entry at given index. If it is a DAX entry, return it
+ * with the radix tree entry locked. If the radix tree doesn't contain the
+ * given index, create an empty entry for the index and return with it locked.
*
- * When requesting an entry with size RADIX_DAX_PMD, grab_mapping_entry() will
+ * When requesting an entry with size DAX_PMD, grab_mapping_entry() will
* either return that locked entry or will return an error. This error will
* happen if there are any 4k entries within the 2MiB range that we are
* requesting.
@@ -329,13 +326,13 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);

- if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
+ if (WARN_ON_ONCE(entry && !xa_is_value(entry))) {
entry = ERR_PTR(-EIO);
goto out_unlock;
}

if (entry) {
- if (size_flag & RADIX_DAX_PMD) {
+ if (size_flag & DAX_PMD) {
if (dax_is_pte_entry(entry)) {
put_unlocked_mapping_entry(mapping, index,
entry);
@@ -405,7 +402,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
true);
}

- entry = dax_radix_locked_entry(0, size_flag | RADIX_DAX_EMPTY);
+ entry = dax_radix_locked_entry(0, size_flag | DAX_EMPTY);

err = __radix_tree_insert(&mapping->pages, index,
dax_radix_order(entry), entry);
@@ -442,7 +439,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,

xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, NULL);
- if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
+ if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
goto out;
if (!trunc &&
(radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
@@ -457,8 +454,8 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
return ret;
}
/*
- * Delete exceptional DAX entry at @index from @mapping. Wait for radix tree
- * entry to get unlocked before deleting it.
+ * Delete DAX entry at @index from @mapping. Wait for it
+ * to be unlocked before deleting it.
*/
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
{
@@ -468,7 +465,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
* This gets called from truncate / punch_hole path. As such, the caller
* must hold locks protecting against concurrent modifications of the
* radix tree (usually fs-private i_mmap_sem for writing). Since the
- * caller has seen exceptional entry for this index, we better find it
+ * caller has seen a DAX entry for this index, we better find it
* at that index as well...
*/
WARN_ON_ONCE(!ret);
@@ -476,7 +473,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
}

/*
- * Invalidate exceptional DAX entry if it is clean.
+ * Invalidate DAX entry if it is clean.
*/
int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
pgoff_t index)
@@ -530,7 +527,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
if (dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);

- if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_ZERO_PAGE)) {
+ if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
/* we are replacing a zero page with block mapping */
if (dax_is_pmd_entry(entry))
unmap_mapping_range(mapping,
@@ -669,13 +666,13 @@ static int dax_writeback_one(struct block_device *bdev,
* A page got tagged dirty in DAX mapping? Something is seriously
* wrong.
*/
- if (WARN_ON(!radix_tree_exceptional_entry(entry)))
+ if (WARN_ON(!xa_is_value(entry)))
return -EIO;

xa_lock_irq(&mapping->pages);
entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
/* Entry got punched out / reallocated? */
- if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
+ if (!entry2 || WARN_ON_ONCE(!xa_is_value(entry2)))
goto put_unlocked;
/*
* Entry got reallocated elsewhere? No need to writeback. We have to
@@ -881,7 +878,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
}

entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
- RADIX_DAX_ZERO_PAGE, false);
+ DAX_ZERO_PAGE, false);
if (IS_ERR(entry2)) {
ret = VM_FAULT_SIGBUS;
goto out;
@@ -1287,7 +1284,7 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
goto fallback;

ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
- RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE, false);
+ DAX_PMD | DAX_ZERO_PAGE, false);
if (IS_ERR(ret))
goto fallback;

@@ -1372,7 +1369,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* is already in the tree, for instance), it will return -EEXIST and
* we just fall back to 4k entries.
*/
- entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD);
+ entry = grab_mapping_entry(mapping, pgoff, DAX_PMD);
if (IS_ERR(entry))
goto fallback;

@@ -1411,7 +1408,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,

entry = dax_insert_mapping_entry(mapping, vmf, entry,
dax_iomap_sector(&iomap, pos),
- RADIX_DAX_PMD, write && !sync);
+ DAX_PMD, write && !sync);
if (IS_ERR(entry))
goto finish_iomap;

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 339e4c1c044d..fadc6dbe17d6 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -553,7 +553,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
if (!page)
return;

- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
mss->swap += PAGE_SIZE;
else
put_page(page);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c07169cfb44a..e4345c13e237 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -389,23 +389,41 @@ int pagecache_write_end(struct file *, struct address_space *mapping,
loff_t pos, unsigned len, unsigned copied,
struct page *page, void *fsdata);

+/**
+ * struct address_space - Contents of a cacheable, mappable object
+ *
+ * @host: Owner, either the inode or the block_device
+ * @pages: Cached pages
+ * @gfp_mask: Memory allocation flags to use for allocating pages
+ * @i_mmap_writable: count VM_SHARED mappings
+ * @i_mmap: tree of private and shared mappings
+ * @i_mmap_rwsem: Protects @i_mmap and @i_mmap_writable
+ * @nrpages: Number of total pages, protected by pages.xa_lock
+ * @nrexceptional: Shadow or DAX entries, protected by pages.xa_lock
+ * @writeback_index: writeback starts here
+ * @a_ops: methods
+ * @flags: Error bits and flags (AS_*)
+ * @wb_err: The most recent error which has occurred
+ * @private_lock: For use by the owner of the address_space
+ * @private_list: For use by the owner of the address space
+ * @private_data: For use by the owner of the address space
+ */
struct address_space {
- struct inode *host; /* owner: inode, block_device */
- struct radix_tree_root pages; /* cached pages */
- gfp_t gfp_mask; /* for allocating pages */
- atomic_t i_mmap_writable;/* count VM_SHARED mappings */
- struct rb_root_cached i_mmap; /* tree of private and shared mappings */
- struct rw_semaphore i_mmap_rwsem; /* protect tree, count, list */
- /* Protected by pages.xa_lock */
- unsigned long nrpages; /* number of total pages */
- unsigned long nrexceptional; /* shadow or DAX entries */
- pgoff_t writeback_index;/* writeback starts here */
- const struct address_space_operations *a_ops; /* methods */
- unsigned long flags; /* error bits */
+ struct inode *host;
+ struct radix_tree_root pages;
+ gfp_t gfp_mask;
+ atomic_t i_mmap_writable;
+ struct rb_root_cached i_mmap;
+ struct rw_semaphore i_mmap_rwsem;
+ unsigned long nrpages;
+ unsigned long nrexceptional;
+ pgoff_t writeback_index;
+ const struct address_space_operations *a_ops;
+ unsigned long flags;
errseq_t wb_err;
- spinlock_t private_lock; /* for use by the address_space */
- struct list_head private_list; /* ditto */
- void *private_data; /* ditto */
+ spinlock_t private_lock;
+ struct list_head private_list;
+ void *private_data;
} __attribute__((aligned(sizeof(long)))) __randomize_layout;
/*
* On most architectures that alignment is already the case; but
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 34149e8b5f73..87f35fe00e55 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -28,34 +28,26 @@
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>
+#include <linux/xarray.h>

/*
* The bottom two bits of the slot determine how the remaining bits in the
* slot are interpreted:
*
* 00 - data pointer
- * 01 - internal entry
- * 10 - exceptional entry
- * 11 - this bit combination is currently unused/reserved
+ * 10 - internal entry
+ * x1 - value entry
*
* The internal entry may be a pointer to the next level in the tree, a
* sibling entry, or an indicator that the entry in this slot has been moved
* to another location in the tree and the lookup should be restarted. While
* NULL fits the 'data pointer' pattern, it means that there is no entry in
* the tree for this index (no matter what level of the tree it is found at).
- * This means that you cannot store NULL in the tree as a value for the index.
+ * This means that storing a NULL entry in the tree is the same as deleting
+ * the entry from the tree.
*/
#define RADIX_TREE_ENTRY_MASK 3UL
-#define RADIX_TREE_INTERNAL_NODE 1UL
-
-/*
- * Most users of the radix tree store pointers but shmem/tmpfs stores swap
- * entries in the same tree. They are marked as exceptional entries to
- * distinguish them from pointers to struct page.
- * EXCEPTIONAL_ENTRY tests the bit, EXCEPTIONAL_SHIFT shifts content past it.
- */
-#define RADIX_TREE_EXCEPTIONAL_ENTRY 2
-#define RADIX_TREE_EXCEPTIONAL_SHIFT 2
+#define RADIX_TREE_INTERNAL_NODE 2UL

static inline bool radix_tree_is_internal_node(void *ptr)
{
@@ -83,11 +75,10 @@ static inline bool radix_tree_is_internal_node(void *ptr)

/*
* @count is the count of every non-NULL element in the ->slots array
- * whether that is an exceptional entry, a retry entry, a user pointer,
+ * whether that is a data entry, a retry entry, a user pointer,
* a sibling entry or a pointer to the next level of the tree.
* @exceptional is the count of every element in ->slots which is
- * either radix_tree_exceptional_entry() or is a sibling entry for an
- * exceptional entry.
+ * either a data entry or a sibling entry for data.
*/
struct radix_tree_node {
unsigned char shift; /* Bits remaining in each slot */
@@ -268,17 +259,6 @@ static inline int radix_tree_deref_retry(void *arg)
return unlikely(radix_tree_is_internal_node(arg));
}

-/**
- * radix_tree_exceptional_entry - radix_tree_deref_slot gave exceptional entry?
- * @arg: value returned by radix_tree_deref_slot
- * Returns: 0 if well-aligned pointer, non-0 if exceptional entry.
- */
-static inline int radix_tree_exceptional_entry(void *arg)
-{
- /* Not unlikely because radix_tree_exception often tested first */
- return (unsigned long)arg & RADIX_TREE_EXCEPTIONAL_ENTRY;
-}
-
/**
* radix_tree_exception - radix_tree_deref_slot returned either exception?
* @arg: value returned by radix_tree_deref_slot
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 9c5a2628d6ce..5e93c7b500da 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -17,9 +17,8 @@
*
* swp_entry_t's are *never* stored anywhere in their arch-dependent format.
*/
-#define SWP_TYPE_SHIFT(e) ((sizeof(e.val) * 8) - \
- (MAX_SWAPFILES_SHIFT + RADIX_TREE_EXCEPTIONAL_SHIFT))
-#define SWP_OFFSET_MASK(e) ((1UL << SWP_TYPE_SHIFT(e)) - 1)
+#define SWP_TYPE_SHIFT (BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
+#define SWP_OFFSET_MASK ((1UL << SWP_TYPE_SHIFT) - 1)

/*
* Store a type+offset into a swp_entry_t in an arch-independent format
@@ -28,8 +27,7 @@ static inline swp_entry_t swp_entry(unsigned long type, pgoff_t offset)
{
swp_entry_t ret;

- ret.val = (type << SWP_TYPE_SHIFT(ret)) |
- (offset & SWP_OFFSET_MASK(ret));
+ ret.val = (type << SWP_TYPE_SHIFT) | (offset & SWP_OFFSET_MASK);
return ret;
}

@@ -39,7 +37,7 @@ static inline swp_entry_t swp_entry(unsigned long type, pgoff_t offset)
*/
static inline unsigned swp_type(swp_entry_t entry)
{
- return (entry.val >> SWP_TYPE_SHIFT(entry));
+ return (entry.val >> SWP_TYPE_SHIFT);
}

/*
@@ -48,7 +46,7 @@ static inline unsigned swp_type(swp_entry_t entry)
*/
static inline pgoff_t swp_offset(swp_entry_t entry)
{
- return entry.val & SWP_OFFSET_MASK(entry);
+ return entry.val & SWP_OFFSET_MASK;
}

#ifdef CONFIG_MMU
@@ -89,16 +87,13 @@ static inline swp_entry_t radix_to_swp_entry(void *arg)
{
swp_entry_t entry;

- entry.val = (unsigned long)arg >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ entry.val = xa_to_value(arg);
return entry;
}

static inline void *swp_to_radix_entry(swp_entry_t entry)
{
- unsigned long value;
-
- value = entry.val << RADIX_TREE_EXCEPTIONAL_SHIFT;
- return (void *)(value | RADIX_TREE_EXCEPTIONAL_ENTRY);
+ return xa_mk_value(entry.val);
}

#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 2dfc8006fe64..1aa4ff0c19b6 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -5,9 +5,60 @@
* eXtensible Arrays
* Copyright (c) 2017 Microsoft Corporation
* Author: Matthew Wilcox <[email protected]>
+ *
+ * See Documentation/core-api/xarray.rst for how to use the XArray.
*/

+#include <linux/bug.h>
#include <linux/spinlock.h>
+#include <linux/types.h>
+
+/*
+ * The bottom two bits of the entry determine how the XArray interprets
+ * the contents:
+ *
+ * 00: Pointer entry
+ * 10: Internal entry
+ * x1: Value entry
+ *
+ * Attempting to store internal entries in the XArray is a bug.
+ */
+
+#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
+
+/**
+ * xa_mk_value() - Create an XArray entry from an integer.
+ * @v: Value to store in XArray.
+ *
+ * Return: An entry suitable for storing in the XArray.
+ */
+static inline void *xa_mk_value(unsigned long v)
+{
+ WARN_ON((long)v < 0);
+ return (void *)((v << 1) | 1);
+}
+
+/**
+ * xa_to_value() - Get value stored in an XArray entry.
+ * @entry: XArray entry.
+ *
+ * Return: The value stored in the XArray entry.
+ */
+static inline unsigned long xa_to_value(const void *entry)
+{
+ return (unsigned long)entry >> 1;
+}
+
+/**
+ * xa_is_value() - Determine if an entry is a value.
+ * @entry: XArray entry.
+ *
+ * Return: True if the entry is a value, false if it is a pointer.
+ */
+static inline bool xa_is_value(const void *entry)
+{
+ return (unsigned long)entry & 1;
+}

#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
diff --git a/lib/idr.c b/lib/idr.c
index 2cd9429e11e3..48c53890adc0 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -3,6 +3,7 @@
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
+#include <linux/xarray.h>

DEFINE_PER_CPU(struct ida_bitmap *, ida_bitmap);
static DEFINE_SPINLOCK(simple_ida_lock);
@@ -273,11 +274,8 @@ EXPORT_SYMBOL(idr_replace);
* by the number of bits in the leaf bitmap before doing a radix tree lookup.
*
* As an optimisation, if there are only a few low bits set in any given
- * leaf, instead of allocating a 128-byte bitmap, we use the 'exceptional
- * entry' functionality of the radix tree to store BITS_PER_LONG - 2 bits
- * directly in the entry. By being really tricksy, we could store
- * BITS_PER_LONG - 1 bits, but there're diminishing returns after optimising
- * for 0-3 allocated IDs.
+ * leaf, instead of allocating a 128-byte bitmap, we store the bits
+ * directly in the entry.
*
* We allow the radix tree 'exceptional' count to get out of date. Nothing
* in the IDA nor the radix tree code checks it. If it becomes important
@@ -319,12 +317,11 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
struct radix_tree_iter iter;
struct ida_bitmap *bitmap;
unsigned long index;
- unsigned bit, ebit;
+ unsigned bit;
int new;

index = start / IDA_BITMAP_BITS;
bit = start % IDA_BITMAP_BITS;
- ebit = bit + RADIX_TREE_EXCEPTIONAL_SHIFT;

slot = radix_tree_iter_init(&iter, index);
for (;;) {
@@ -339,26 +336,25 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
return PTR_ERR(slot);
}
}
- if (iter.index > index) {
+ if (iter.index > index)
bit = 0;
- ebit = RADIX_TREE_EXCEPTIONAL_SHIFT;
- }
new = iter.index * IDA_BITMAP_BITS;
bitmap = rcu_dereference_raw(*slot);
- if (radix_tree_exception(bitmap)) {
- unsigned long tmp = (unsigned long)bitmap;
- ebit = find_next_zero_bit(&tmp, BITS_PER_LONG, ebit);
- if (ebit < BITS_PER_LONG) {
- tmp |= 1UL << ebit;
- rcu_assign_pointer(*slot, (void *)tmp);
- *id = new + ebit - RADIX_TREE_EXCEPTIONAL_SHIFT;
+ if (xa_is_value(bitmap)) {
+ unsigned long tmp = xa_to_value(bitmap);
+ int vbit = find_next_zero_bit(&tmp, BITS_PER_XA_VALUE,
+ bit);
+ if (vbit < BITS_PER_XA_VALUE) {
+ tmp |= 1UL << vbit;
+ rcu_assign_pointer(*slot, xa_mk_value(tmp));
+ *id = new + vbit;
return 0;
}
bitmap = this_cpu_xchg(ida_bitmap, NULL);
if (!bitmap)
return -EAGAIN;
memset(bitmap, 0, sizeof(*bitmap));
- bitmap->bitmap[0] = tmp >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ bitmap->bitmap[0] = tmp;
rcu_assign_pointer(*slot, bitmap);
}

@@ -379,19 +375,15 @@ int ida_get_new_above(struct ida *ida, int start, int *id)
new += bit;
if (new < 0)
return -ENOSPC;
- if (ebit < BITS_PER_LONG) {
- bitmap = (void *)((1UL << ebit) |
- RADIX_TREE_EXCEPTIONAL_ENTRY);
- radix_tree_iter_replace(root, &iter, slot,
- bitmap);
- *id = new;
- return 0;
+ if (bit < BITS_PER_XA_VALUE) {
+ bitmap = xa_mk_value(1UL << bit);
+ } else {
+ bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (!bitmap)
+ return -EAGAIN;
+ memset(bitmap, 0, sizeof(*bitmap));
+ __set_bit(bit, bitmap->bitmap);
}
- bitmap = this_cpu_xchg(ida_bitmap, NULL);
- if (!bitmap)
- return -EAGAIN;
- memset(bitmap, 0, sizeof(*bitmap));
- __set_bit(bit, bitmap->bitmap);
radix_tree_iter_replace(root, &iter, slot, bitmap);
}

@@ -422,9 +414,9 @@ void ida_remove(struct ida *ida, int id)
goto err;

bitmap = rcu_dereference_raw(*slot);
- if (radix_tree_exception(bitmap)) {
+ if (xa_is_value(bitmap)) {
btmp = (unsigned long *)slot;
- offset += RADIX_TREE_EXCEPTIONAL_SHIFT;
+ offset += 1; /* Intimate knowledge of the value entry encoding */
if (offset >= BITS_PER_LONG)
goto err;
} else {
@@ -435,9 +427,8 @@ void ida_remove(struct ida *ida, int id)

__clear_bit(offset, btmp);
radix_tree_iter_tag_set(&ida->ida_rt, &iter, IDR_FREE);
- if (radix_tree_exception(bitmap)) {
- if (rcu_dereference_raw(*slot) ==
- (void *)RADIX_TREE_EXCEPTIONAL_ENTRY)
+ if (xa_is_value(bitmap)) {
+ if (xa_to_value(rcu_dereference_raw(*slot)) == 0)
radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
} else if (bitmap_empty(btmp, IDA_BITMAP_BITS)) {
kfree(bitmap);
@@ -465,7 +456,7 @@ void ida_destroy(struct ida *ida)

radix_tree_for_each_slot(slot, &ida->ida_rt, &iter, 0) {
struct ida_bitmap *bitmap = rcu_dereference_raw(*slot);
- if (!radix_tree_exception(bitmap))
+ if (!xa_is_value(bitmap))
kfree(bitmap);
radix_tree_iter_delete(&ida->ida_rt, &iter, slot);
}
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 8428cc8af3fc..012e4869f99b 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -339,14 +339,12 @@ static void dump_ida_node(void *entry, unsigned long index)
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++)
dump_ida_node(node->slots[i],
index | (i << node->shift));
- } else if (radix_tree_exceptional_entry(entry)) {
+ } else if (xa_is_value(entry)) {
pr_debug("ida excp: %p offset %d indices %lu-%lu data %lx\n",
entry, (int)(index & RADIX_TREE_MAP_MASK),
index * IDA_BITMAP_BITS,
- index * IDA_BITMAP_BITS + BITS_PER_LONG -
- RADIX_TREE_EXCEPTIONAL_SHIFT,
- (unsigned long)entry >>
- RADIX_TREE_EXCEPTIONAL_SHIFT);
+ index * IDA_BITMAP_BITS + BITS_PER_XA_VALUE,
+ xa_to_value(entry));
} else {
struct ida_bitmap *bitmap = entry;

@@ -655,7 +653,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
BUG_ON(shift > BITS_PER_LONG);
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
- } else if (radix_tree_exceptional_entry(entry)) {
+ } else if (xa_is_value(entry)) {
/* Moving an exceptional root->rnode to a node */
node->exceptional = 1;
}
@@ -946,12 +944,12 @@ static inline int insert_entries(struct radix_tree_node *node,
!is_sibling_entry(node, old) &&
(old != RADIX_TREE_RETRY))
radix_tree_free_nodes(old);
- if (radix_tree_exceptional_entry(old))
+ if (xa_is_value(old))
node->exceptional--;
}
if (node) {
node->count += n;
- if (radix_tree_exceptional_entry(item))
+ if (xa_is_value(item))
node->exceptional += n;
}
return n;
@@ -965,7 +963,7 @@ static inline int insert_entries(struct radix_tree_node *node,
rcu_assign_pointer(*slot, item);
if (node) {
node->count++;
- if (radix_tree_exceptional_entry(item))
+ if (xa_is_value(item))
node->exceptional++;
}
return 1;
@@ -1182,8 +1180,7 @@ void __radix_tree_replace(struct radix_tree_root *root,
radix_tree_update_node_t update_node)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = !!radix_tree_exceptional_entry(item) -
- !!radix_tree_exceptional_entry(old);
+ int exceptional = !!xa_is_value(item) - !!xa_is_value(old);
int count = calculate_count(root, node, slot, item, old);

/*
@@ -1986,7 +1983,7 @@ static bool __radix_tree_delete(struct radix_tree_root *root,
struct radix_tree_node *node, void __rcu **slot)
{
void *old = rcu_dereference_raw(*slot);
- int exceptional = radix_tree_exceptional_entry(old) ? -1 : 0;
+ int exceptional = xa_is_value(old) ? -1 : 0;
unsigned offset = get_slot_offset(node, slot);
int tag;

diff --git a/mm/filemap.c b/mm/filemap.c
index e9172cefeb36..309be963140c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -128,7 +128,7 @@ static int page_cache_tree_insert(struct address_space *mapping,

p = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (!radix_tree_exceptional_entry(p))
+ if (!xa_is_value(p))
return -EEXIST;

mapping->nrexceptional--;
@@ -337,7 +337,7 @@ page_cache_tree_delete_batch(struct address_space *mapping,
break;
page = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;
if (!tail_pages) {
/*
@@ -1356,7 +1356,7 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
struct page *page;

page = radix_tree_lookup(&mapping->pages, index);
- if (!page || radix_tree_exceptional_entry(page))
+ if (!page || xa_is_value(page))
break;
index++;
if (index == 0)
@@ -1397,7 +1397,7 @@ pgoff_t page_cache_prev_hole(struct address_space *mapping,
struct page *page;

page = radix_tree_lookup(&mapping->pages, index);
- if (!page || radix_tree_exceptional_entry(page))
+ if (!page || xa_is_value(page))
break;
index--;
if (index == ULONG_MAX)
@@ -1540,7 +1540,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,

repeat:
page = find_get_entry(mapping, offset);
- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
page = NULL;
if (!page)
goto no_page;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cb4d199bf328..55ade70c33bb 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1363,7 +1363,7 @@ static void collapse_shmem(struct mm_struct *mm,

page = radix_tree_deref_slot_protected(slot,
&mapping->pages.xa_lock);
- if (radix_tree_exceptional_entry(page) || !PageUptodate(page)) {
+ if (xa_is_value(page) || !PageUptodate(page)) {
xa_unlock_irq(&mapping->pages);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
diff --git a/mm/madvise.c b/mm/madvise.c
index 751e97aa2210..83f8a1a8e6b5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -251,7 +251,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

page = find_get_entry(mapping, index);
- if (!radix_tree_exceptional_entry(page)) {
+ if (!xa_is_value(page)) {
if (page)
put_page(page);
continue;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d4acb7f33e9..f25ef5367fe7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4526,7 +4526,7 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
/* shmem/tmpfs may report page out on swap: account for that too. */
if (shmem_mapping(mapping)) {
page = find_get_entry(mapping, pgoff);
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swp_entry_t swp = radix_to_swp_entry(page);
if (do_memsw_account())
*entry = swp;
diff --git a/mm/mincore.c b/mm/mincore.c
index fc37afe226e6..4985965aa20a 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -66,7 +66,7 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
* shmem/tmpfs may return swap: account for swapcache
* page too.
*/
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swp_entry_t swp = radix_to_swp_entry(page);
page = find_get_page(swap_address_space(swp),
swp_offset(swp));
diff --git a/mm/readahead.c b/mm/readahead.c
index 514188fd2489..4851f002605f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -177,7 +177,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
rcu_read_lock();
page = radix_tree_lookup(&mapping->pages, page_offset);
rcu_read_unlock();
- if (page && !radix_tree_exceptional_entry(page))
+ if (page && !xa_is_value(page))
continue;

page = __page_cache_alloc(gfp_mask);
diff --git a/mm/shmem.c b/mm/shmem.c
index 9b1766e7c8cf..c5731bb954a1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -690,7 +690,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
continue;
}

- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
swapped++;

if (need_resched()) {
@@ -805,7 +805,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (index >= end)
break;

- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (unfalloc)
continue;
nr_swaps_freed += !shmem_free_swap(mapping,
@@ -902,7 +902,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (index >= end)
break;

- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (unfalloc)
continue;
if (shmem_free_swap(mapping, index, page)) {
@@ -1614,7 +1614,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
repeat:
swap.val = 0;
page = find_lock_entry(mapping, index);
- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
swap = radix_to_swp_entry(page);
page = NULL;
}
@@ -2547,7 +2547,7 @@ static pgoff_t shmem_seek_hole_data(struct address_space *mapping,
index = indices[i];
}
page = pvec.pages[i];
- if (page && !radix_tree_exceptional_entry(page)) {
+ if (page && !xa_is_value(page)) {
if (!PageUptodate(page))
page = NULL;
}
diff --git a/mm/swap.c b/mm/swap.c
index 38e1b6374a97..8d7773cb2c3f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -953,7 +953,7 @@ void pagevec_remove_exceptionals(struct pagevec *pvec)

for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
struct page *page = pvec->pages[i];
- if (!radix_tree_exceptional_entry(page))
+ if (!xa_is_value(page))
pvec->pages[j++] = page;
}
pvec->nr = j;
diff --git a/mm/truncate.c b/mm/truncate.c
index 094158f2e447..69bb743dd7e5 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -70,7 +70,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
return;

for (j = 0; j < pagevec_count(pvec); j++)
- if (radix_tree_exceptional_entry(pvec->pages[j]))
+ if (xa_is_value(pvec->pages[j]))
break;

if (j == pagevec_count(pvec))
@@ -85,7 +85,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
struct page *page = pvec->pages[i];
pgoff_t index = indices[i];

- if (!radix_tree_exceptional_entry(page)) {
+ if (!xa_is_value(page)) {
pvec->pages[j++] = page;
continue;
}
@@ -351,7 +351,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
if (index >= end)
break;

- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;

if (!trylock_page(page))
@@ -446,7 +446,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
break;
}

- if (radix_tree_exceptional_entry(page))
+ if (xa_is_value(page))
continue;

lock_page(page);
@@ -565,7 +565,7 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
if (index > end)
break;

- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
invalidate_exceptional_entry(mapping, index,
page);
continue;
@@ -696,7 +696,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
if (index > end)
break;

- if (radix_tree_exceptional_entry(page)) {
+ if (xa_is_value(page)) {
if (!invalidate_exceptional_entry2(mapping,
index, page))
ret = -EBUSY;
diff --git a/mm/workingset.c b/mm/workingset.c
index 3cb3586181e6..3afeb84720f4 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -155,8 +155,8 @@
* refault distance will immediately activate the refaulting page.
*/

-#define EVICTION_SHIFT (RADIX_TREE_EXCEPTIONAL_ENTRY + \
- NODES_SHIFT + \
+#define EVICTION_SHIFT ((BITS_PER_LONG - BITS_PER_XA_VALUE) + \
+ NODES_SHIFT + \
MEM_CGROUP_ID_SHIFT)
#define EVICTION_MASK (~0UL >> EVICTION_SHIFT)

@@ -175,18 +175,16 @@ static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction)
eviction >>= bucket_order;
eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
- eviction = (eviction << RADIX_TREE_EXCEPTIONAL_SHIFT);

- return (void *)(eviction | RADIX_TREE_EXCEPTIONAL_ENTRY);
+ return xa_mk_value(eviction);
}

static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
unsigned long *evictionp)
{
- unsigned long entry = (unsigned long)shadow;
+ unsigned long entry = xa_to_value(shadow);
int memcgid, nid;

- entry >>= RADIX_TREE_EXCEPTIONAL_SHIFT;
nid = entry & ((1UL << NODES_SHIFT) - 1);
entry >>= NODES_SHIFT;
memcgid = entry & ((1UL << MEM_CGROUP_ID_SHIFT) - 1);
@@ -453,7 +451,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
goto out_invalid;
for (i = 0; i < RADIX_TREE_MAP_SIZE; i++) {
if (node->slots[i]) {
- if (WARN_ON_ONCE(!radix_tree_exceptional_entry(node->slots[i])))
+ if (WARN_ON_ONCE(!xa_is_value(node->slots[i])))
goto out_invalid;
if (WARN_ON_ONCE(!node->exceptional))
goto out_invalid;
diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c
index 193450b29bf0..7499319e85f8 100644
--- a/tools/testing/radix-tree/idr-test.c
+++ b/tools/testing/radix-tree/idr-test.c
@@ -19,7 +19,7 @@

#include "test.h"

-#define DUMMY_PTR ((void *)0x12)
+#define DUMMY_PTR ((void *)0x10)

int item_idr_free(int id, void *p, void *data)
{
@@ -320,11 +320,11 @@ void ida_check_conv(void)
for (i = 0; i < 1000000; i++) {
int err = ida_get_new(&ida, &id);
if (err == -EAGAIN) {
- assert((i % IDA_BITMAP_BITS) == (BITS_PER_LONG - 2));
+ assert((i % IDA_BITMAP_BITS) == (BITS_PER_LONG - 1));
assert(ida_pre_get(&ida, GFP_KERNEL));
err = ida_get_new(&ida, &id);
} else {
- assert((i % IDA_BITMAP_BITS) != (BITS_PER_LONG - 2));
+ assert((i % IDA_BITMAP_BITS) != (BITS_PER_LONG - 1));
}
assert(!err);
assert(id == i);
diff --git a/tools/testing/radix-tree/linux/radix-tree.h b/tools/testing/radix-tree/linux/radix-tree.h
index 36fb716d5557..40c9671ee365 100644
--- a/tools/testing/radix-tree/linux/radix-tree.h
+++ b/tools/testing/radix-tree/linux/radix-tree.h
@@ -5,6 +5,7 @@
#include "generated/map-shift.h"
#include "linux/bug.h"
#include "../../../../include/linux/radix-tree.h"
+#include <linux/xarray.h>

extern int kmalloc_verbose;
extern int test_verbose;
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 59245b3d587c..684e76f79f4a 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -38,12 +38,11 @@ static void __multiorder_tag_test(int index, int order)

/*
* Verify we get collisions for covered indices. We try and fail to
- * insert an exceptional entry so we don't leak memory via
+ * insert a data entry so we don't leak memory via
* item_insert_order().
*/
for_each_index(i, base, order) {
- err = __radix_tree_insert(&tree, i, order,
- (void *)(0xA0 | RADIX_TREE_EXCEPTIONAL_ENTRY));
+ err = __radix_tree_insert(&tree, i, order, xa_mk_value(0xA0));
assert(err == -EEXIST);
}

@@ -379,8 +378,8 @@ static void multiorder_join1(unsigned long index,
}

/*
- * Check that the accounting of exceptional entries is handled correctly
- * by joining an exceptional entry to a normal pointer.
+ * Check that the accounting of inline data entries is handled correctly
+ * by joining a data entry to a normal pointer.
*/
static void multiorder_join2(unsigned order1, unsigned order2)
{
@@ -390,9 +389,9 @@ static void multiorder_join2(unsigned order1, unsigned order2)
void *item2;

item_insert_order(&tree, 0, order2);
- radix_tree_insert(&tree, 1 << order2, (void *)0x12UL);
+ radix_tree_insert(&tree, 1 << order2, xa_mk_value(5));
item2 = __radix_tree_lookup(&tree, 1 << order2, &node, NULL);
- assert(item2 == (void *)0x12UL);
+ assert(item2 == xa_mk_value(5));
assert(node->exceptional == 1);

item2 = radix_tree_lookup(&tree, 0);
@@ -406,7 +405,7 @@ static void multiorder_join2(unsigned order1, unsigned order2)
}

/*
- * This test revealed an accounting bug for exceptional entries at one point.
+ * This test revealed an accounting bug for inline data entries at one point.
* Nodes were being freed back into the pool with an elevated exception count
* by radix_tree_join() and then radix_tree_split() was failing to zero the
* count of exceptional entries.
@@ -420,16 +419,16 @@ static void multiorder_join3(unsigned int order)
unsigned long i;

for (i = 0; i < (1 << order); i++) {
- radix_tree_insert(&tree, i, (void *)0x12UL);
+ radix_tree_insert(&tree, i, xa_mk_value(5));
}

- radix_tree_join(&tree, 0, order, (void *)0x16UL);
+ radix_tree_join(&tree, 0, order, xa_mk_value(7));
rcu_barrier();

radix_tree_split(&tree, 0, 0);

radix_tree_for_each_slot(slot, &tree, &iter, 0) {
- radix_tree_iter_replace(&tree, &iter, slot, (void *)0x12UL);
+ radix_tree_iter_replace(&tree, &iter, slot, xa_mk_value(5));
}

__radix_tree_lookup(&tree, 0, &node, NULL);
@@ -516,10 +515,10 @@ static void __multiorder_split2(int old_order, int new_order)
struct radix_tree_node *node;
void *item;

- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));

item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);

radix_tree_split(&tree, 0, new_order);
@@ -529,7 +528,7 @@ static void __multiorder_split2(int old_order, int new_order)
}

item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item != (void *)0x12);
+ assert(item != xa_mk_value(5));
assert(node->exceptional == 0);

item_kill_tree(&tree);
@@ -543,40 +542,40 @@ static void __multiorder_split3(int old_order, int new_order)
struct radix_tree_node *node;
void *item;

- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));

item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);

radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
- radix_tree_iter_replace(&tree, &iter, slot, (void *)0x16);
+ radix_tree_iter_replace(&tree, &iter, slot, xa_mk_value(7));
}

item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x16);
+ assert(item == xa_mk_value(7));
assert(node->exceptional > 0);

item_kill_tree(&tree);

- __radix_tree_insert(&tree, 0, old_order, (void *)0x12);
+ __radix_tree_insert(&tree, 0, old_order, xa_mk_value(5));

item = __radix_tree_lookup(&tree, 0, &node, NULL);
- assert(item == (void *)0x12);
+ assert(item == xa_mk_value(5));
assert(node->exceptional > 0);

radix_tree_split(&tree, 0, new_order);
radix_tree_for_each_slot(slot, &tree, &iter, 0) {
if (iter.index == (1 << new_order))
radix_tree_iter_replace(&tree, &iter, slot,
- (void *)0x16);
+ xa_mk_value(7));
else
radix_tree_iter_replace(&tree, &iter, slot, NULL);
}

item = __radix_tree_lookup(&tree, 1 << new_order, &node, NULL);
- assert(item == (void *)0x16);
+ assert(item == xa_mk_value(7));
assert(node->count == node->exceptional);
do {
node = node->parent;
@@ -609,13 +608,13 @@ static void multiorder_account(void)

item_insert_order(&tree, 0, 5);

- __radix_tree_insert(&tree, 1 << 5, 5, (void *)0x12);
+ __radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 0, &node, NULL);
assert(node->count == node->exceptional * 2);
radix_tree_delete(&tree, 1 << 5);
assert(node->exceptional == 0);

- __radix_tree_insert(&tree, 1 << 5, 5, (void *)0x12);
+ __radix_tree_insert(&tree, 1 << 5, 5, xa_mk_value(5));
__radix_tree_lookup(&tree, 1 << 5, &node, &slot);
assert(node->count == node->exceptional * 2);
__radix_tree_replace(&tree, node, slot, NULL, NULL);
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 5978ab1f403d..0d69c49177c6 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -276,7 +276,7 @@ void item_kill_tree(struct radix_tree_root *root)
int nfound;

radix_tree_for_each_slot(slot, root, &iter, 0) {
- if (radix_tree_exceptional_entry(*slot))
+ if (xa_is_value(*slot))
radix_tree_delete(root, iter.index);
}

--
2.15.1
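
As a standalone illustration of the value entry encoding the patch above
introduces, the helpers can be exercised from ordinary userspace C. This
is a sketch that mirrors the new functions, not kernel code:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Mirrors include/linux/xarray.h: a value entry is (v << 1) | 1,
 * leaving BITS_PER_LONG - 1 bits of payload. */
static void *xa_mk_value(unsigned long v)
{
	assert((long)v >= 0);	/* payload must fit in BITS_PER_LONG - 1 bits */
	return (void *)((v << 1) | 1);
}

static unsigned long xa_to_value(const void *entry)
{
	return (unsigned long)entry >> 1;
}

static bool xa_is_value(const void *entry)
{
	return (unsigned long)entry & 1;
}

int main(void)
{
	static long object;			/* any word-aligned object */
	void *entry = xa_mk_value(42);

	assert(xa_is_value(entry));		/* bit 0 set: a value */
	assert(xa_to_value(entry) == 42);	/* payload round-trips */
	assert(!xa_is_value(&object));		/* pointers have bit 0 clear */
	printf("%#lx encodes %lu\n", (unsigned long)entry, xa_to_value(entry));
	return 0;
}

This round trip is why swp_to_radix_entry() collapses to
xa_mk_value(entry.val) and pack_shadow() no longer has to shift and OR
in the exceptional bit by hand.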


2018-01-17 21:18:09

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 04/99] xarray: Change definition of sibling entries

From: Matthew Wilcox <[email protected]>

Instead of storing a pointer to the slot containing the canonical entry,
store the offset of the slot. This produces slightly more efficient code
(~300 bytes) and simplifies the implementation.
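
A minimal userspace sketch of the encoding (illustrative only; it
assumes the usual XA_CHUNK_SHIFT of 6 and omits the
CONFIG_RADIX_TREE_MULTIORDER test):

#include <assert.h>
#include <stdbool.h>

#define XA_CHUNK_SHIFT	6			/* 64 slots per node */
#define XA_CHUNK_SIZE	(1UL << XA_CHUNK_SHIFT)

static void *xa_mk_internal(unsigned long v)
{
	return (void *)((v << 2) | 2);
}

static unsigned long xa_to_internal(const void *entry)
{
	return (unsigned long)entry >> 2;
}

static bool xa_is_internal(const void *entry)
{
	return ((unsigned long)entry & 3) == 2;
}

/* A sibling slot stores the canonical slot's offset, so decoding is
 * a shift rather than pointer arithmetic against the parent node. */
static void *xa_mk_sibling(unsigned int offset)
{
	return xa_mk_internal(offset);
}

static unsigned long xa_to_sibling(const void *entry)
{
	return xa_to_internal(entry);
}

static bool xa_is_sibling(const void *entry)
{
	return xa_is_internal(entry) && (unsigned long)entry <
			(unsigned long)xa_mk_sibling(XA_CHUNK_SIZE - 1);
}

int main(void)
{
	void *sib = xa_mk_sibling(5);

	assert(xa_is_sibling(sib));
	assert(xa_to_sibling(sib) == 5);	/* canonical entry at slot 5 */
	return 0;
}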

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/xarray.h | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++
lib/radix-tree.c       | 66 +++++++++++-------------------
2 files changed, 109 insertions(+), 47 deletions(-)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 1aa4ff0c19b6..c308152fde7f 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -22,6 +22,12 @@
* x1: Value entry
*
* Attempting to store internal entries in the XArray is a bug.
+ *
+ * Most internal entries are pointers to the next node in the tree.
+ * The following internal entries have a special meaning:
+ *
+ * 0-62: Sibling entries
+ * 256: Retry entry
*/

#define BITS_PER_XA_VALUE (BITS_PER_LONG - 1)
@@ -60,6 +66,39 @@ static inline bool xa_is_value(const void *entry)
return (unsigned long)entry & 1;
}

+/*
+ * xa_mk_internal() - Create an internal entry.
+ * @v: Value to turn into an internal entry.
+ *
+ * Return: An XArray internal entry corresponding to this value.
+ */
+static inline void *xa_mk_internal(unsigned long v)
+{
+ return (void *)((v << 2) | 2);
+}
+
+/*
+ * xa_to_internal() - Extract the value from an internal entry.
+ * @entry: XArray entry.
+ *
+ * Return: The value which was stored in the internal entry.
+ */
+static inline unsigned long xa_to_internal(const void *entry)
+{
+ return (unsigned long)entry >> 2;
+}
+
+/*
+ * xa_is_internal() - Is the entry an internal entry?
+ * @entry: XArray entry.
+ *
+ * Return: %true if the entry is an internal entry.
+ */
+static inline bool xa_is_internal(const void *entry)
+{
+ return ((unsigned long)entry & 3) == 2;
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
@@ -72,4 +111,55 @@ static inline bool xa_is_value(const void *entry)
#define xa_unlock_irqrestore(xa, flags) \
spin_unlock_irqrestore(&(xa)->xa_lock, flags)

+/* Everything below here is the Advanced API. Proceed with caution. */
+
+/*
+ * The xarray is constructed out of a set of 'chunks' of pointers. Choosing
+ * the best chunk size requires some tradeoffs. A power of two recommends
+ * itself so that we can walk the tree based purely on shifts and masks.
+ * Generally, the larger the better; as the number of slots per level of the
+ * tree increases, the less tall the tree needs to be. But that needs to be
+ * balanced against the memory consumption of each node. On a 64-bit system,
+ * xa_node is currently 576 bytes, and we get 7 of them per 4kB page. If we
+ * doubled the number of slots per node, we'd get only 3 nodes per 4kB page.
+ */
+#ifndef XA_CHUNK_SHIFT
+#define XA_CHUNK_SHIFT (CONFIG_BASE_SMALL ? 4 : 6)
+#endif
+#define XA_CHUNK_SIZE (1UL << XA_CHUNK_SHIFT)
+#define XA_CHUNK_MASK (XA_CHUNK_SIZE - 1)
+
+/* Private */
+static inline bool xa_is_node(const void *entry)
+{
+ return xa_is_internal(entry) && (unsigned long)entry > 4096;
+}
+
+/* Private */
+static inline void *xa_mk_sibling(unsigned int offset)
+{
+ return xa_mk_internal(offset);
+}
+
+/* Private */
+static inline unsigned long xa_to_sibling(const void *entry)
+{
+ return xa_to_internal(entry);
+}
+
+/**
+ * xa_is_sibling() - Is the entry a sibling entry?
+ * @entry: Entry retrieved from the XArray
+ *
+ * Return: %true if the entry is a sibling entry.
+ */
+static inline bool xa_is_sibling(const void *entry)
+{
+ return IS_ENABLED(CONFIG_RADIX_TREE_MULTIORDER) &&
+ xa_is_internal(entry) &&
+ (entry < xa_mk_sibling(XA_CHUNK_SIZE - 1));
+}
+
+#define XA_RETRY_ENTRY xa_mk_internal(256)
+
#endif /* _LINUX_XARRAY_H */
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 012e4869f99b..f16f63d15edc 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -37,6 +37,7 @@
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>
+#include <linux/xarray.h>


/* Number of nodes in fully populated tree of given height */
@@ -97,24 +98,7 @@ static inline void *node_to_entry(void *ptr)
return (void *)((unsigned long)ptr | RADIX_TREE_INTERNAL_NODE);
}

-#define RADIX_TREE_RETRY node_to_entry(NULL)
-
-#ifdef CONFIG_RADIX_TREE_MULTIORDER
-/* Sibling slots point directly to another slot in the same node */
-static inline
-bool is_sibling_entry(const struct radix_tree_node *parent, void *node)
-{
- void __rcu **ptr = node;
- return (parent->slots <= ptr) &&
- (ptr < parent->slots + RADIX_TREE_MAP_SIZE);
-}
-#else
-static inline
-bool is_sibling_entry(const struct radix_tree_node *parent, void *node)
-{
- return false;
-}
-#endif
+#define RADIX_TREE_RETRY XA_RETRY_ENTRY

static inline unsigned long
get_slot_offset(const struct radix_tree_node *parent, void __rcu **slot)
@@ -128,16 +112,10 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,
unsigned int offset = (index >> parent->shift) & RADIX_TREE_MAP_MASK;
void __rcu **entry = rcu_dereference_raw(parent->slots[offset]);

-#ifdef CONFIG_RADIX_TREE_MULTIORDER
- if (radix_tree_is_internal_node(entry)) {
- if (is_sibling_entry(parent, entry)) {
- void __rcu **sibentry;
- sibentry = (void __rcu **) entry_to_node(entry);
- offset = get_slot_offset(parent, sibentry);
- entry = rcu_dereference_raw(*sibentry);
- }
+ if (xa_is_sibling(entry)) {
+ offset = xa_to_sibling(entry);
+ entry = rcu_dereference_raw(parent->slots[offset]);
}
-#endif

*nodep = (void *)entry;
return offset;
@@ -299,10 +277,10 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
} else if (!radix_tree_is_internal_node(entry)) {
pr_debug("radix entry %p offset %ld indices %lu-%lu parent %p\n",
entry, i, first, last, node);
- } else if (is_sibling_entry(node, entry)) {
+ } else if (xa_is_sibling(entry)) {
pr_debug("radix sblng %p offset %ld indices %lu-%lu parent %p val %p\n",
entry, i, first, last, node,
- *(void **)entry_to_node(entry));
+ node->slots[xa_to_sibling(entry)]);
} else {
dump_node(entry_to_node(entry), first);
}
@@ -872,8 +850,7 @@ static void radix_tree_free_nodes(struct radix_tree_node *node)

for (;;) {
void *entry = rcu_dereference_raw(child->slots[offset]);
- if (radix_tree_is_internal_node(entry) &&
- !is_sibling_entry(child, entry)) {
+ if (xa_is_node(entry)) {
child = entry_to_node(entry);
offset = 0;
continue;
@@ -895,7 +872,7 @@ static void radix_tree_free_nodes(struct radix_tree_node *node)
static inline int insert_entries(struct radix_tree_node *node,
void __rcu **slot, void *item, unsigned order, bool replace)
{
- struct radix_tree_node *child;
+ void *sibling;
unsigned i, n, tag, offset, tags = 0;

if (node) {
@@ -913,7 +890,7 @@ static inline int insert_entries(struct radix_tree_node *node,
offset = offset & ~(n - 1);
slot = &node->slots[offset];
}
- child = node_to_entry(slot);
+ sibling = xa_mk_sibling(offset);

for (i = 0; i < n; i++) {
if (slot[i]) {
@@ -930,7 +907,7 @@ static inline int insert_entries(struct radix_tree_node *node,
for (i = 0; i < n; i++) {
struct radix_tree_node *old = rcu_dereference_raw(slot[i]);
if (i) {
- rcu_assign_pointer(slot[i], child);
+ rcu_assign_pointer(slot[i], sibling);
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
if (tags & (1 << tag))
tag_clear(node, tag, offset + i);
@@ -940,9 +917,7 @@ static inline int insert_entries(struct radix_tree_node *node,
if (tags & (1 << tag))
tag_set(node, tag, offset);
}
- if (radix_tree_is_internal_node(old) &&
- !is_sibling_entry(node, old) &&
- (old != RADIX_TREE_RETRY))
+ if (xa_is_node(old))
radix_tree_free_nodes(old);
if (xa_is_value(old))
node->exceptional--;
@@ -1101,10 +1076,10 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
void __rcu **slot, int count, int exceptional)
{
#ifdef CONFIG_RADIX_TREE_MULTIORDER
- void *ptr = node_to_entry(slot);
- unsigned offset = get_slot_offset(node, slot) + 1;
+ unsigned offset = get_slot_offset(node, slot);
+ void *ptr = xa_mk_sibling(offset);

- while (offset < RADIX_TREE_MAP_SIZE) {
+ while (++offset < RADIX_TREE_MAP_SIZE) {
if (rcu_dereference_raw(node->slots[offset]) != ptr)
break;
if (count < 0) {
@@ -1112,7 +1087,6 @@ static inline void replace_sibling_entries(struct radix_tree_node *node,
node->count--;
}
node->exceptional += exceptional;
- offset++;
}
#endif
}
@@ -1311,8 +1285,7 @@ int radix_tree_split(struct radix_tree_root *root, unsigned long index,
tags |= 1 << tag;

for (end = offset + 1; end < RADIX_TREE_MAP_SIZE; end++) {
- if (!is_sibling_entry(parent,
- rcu_dereference_raw(parent->slots[end])))
+ if (!xa_is_sibling(rcu_dereference_raw(parent->slots[end])))
break;
for (tag = 0; tag < RADIX_TREE_MAX_TAGS; tag++)
if (tags & (1 << tag))
@@ -1608,11 +1581,9 @@ static void set_iter_tags(struct radix_tree_iter *iter,
static void __rcu **skip_siblings(struct radix_tree_node **nodep,
void __rcu **slot, struct radix_tree_iter *iter)
{
- void *sib = node_to_entry(slot - 1);
-
while (iter->index < iter->next_index) {
*nodep = rcu_dereference_raw(*slot);
- if (*nodep && *nodep != sib)
+ if (*nodep && !xa_is_sibling(*nodep))
return slot;
slot++;
iter->index = __radix_tree_iter_add(iter, 1);
@@ -1763,7 +1734,7 @@ void __rcu **radix_tree_next_chunk(const struct radix_tree_root *root,
while (++offset < RADIX_TREE_MAP_SIZE) {
void *slot = rcu_dereference_raw(
node->slots[offset]);
- if (is_sibling_entry(node, slot))
+ if (xa_is_sibling(slot))
continue;
if (slot)
break;
@@ -2282,6 +2253,7 @@ void __init radix_tree_init(void)

BUILD_BUG_ON(RADIX_TREE_MAX_TAGS + __GFP_BITS_SHIFT > 32);
BUILD_BUG_ON(GFP_ZONEMASK != (__force gfp_t)15);
+ BUILD_BUG_ON(XA_CHUNK_SIZE > 255);
radix_tree_node_cachep = kmem_cache_create("radix_tree_node",
sizeof(struct radix_tree_node), 0,
SLAB_PANIC | SLAB_RECLAIM_ACCOUNT,
--
2.15.1
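
The chunk-size comment in this patch claims an xa_node weighs 576 bytes,
seven to a 4kB page. A quick userspace check of that arithmetic follows;
the struct below only approximates the 64-bit layout of struct
radix_tree_node and is not the kernel definition:

#include <stdio.h>

struct fake_node {
	unsigned char	shift;
	unsigned char	offset;
	unsigned char	count;
	unsigned char	exceptional;
	void		*parent;
	void		*root;
	void		*peer[2];	/* stands in for union { list_head; rcu_head } */
	void		*slots[64];	/* XA_CHUNK_SIZE slots */
	unsigned long	tags[3];	/* 3 tags, one 64-bit bitmap each */
};

int main(void)
{
	/* 576 bytes on LP64, so 4096 / 576 == 7 nodes per page */
	printf("node = %zu bytes, %zu per 4kB page\n",
	       sizeof(struct fake_node),
	       (size_t)4096 / sizeof(struct fake_node));
	return 0;
}

Doubling XA_CHUNK_SHIFT to 7 grows the slots array to 1024 bytes and the
tag bitmaps to 48, pushing the node past 1kB and down to the three nodes
per page the comment warns about.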


2018-01-17 21:18:30

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 02/99] page cache: Use xa_lock

From: Matthew Wilcox <[email protected]>

Remove the address_space ->tree_lock and use the xa_lock newly added to
the radix_tree_root. Rename the address_space ->page_tree to ->pages,
since we don't really care that it's a tree. Take the opportunity to
rearrange the elements of address_space to pack them better on 64-bit,
and make the comments more useful.
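
The locking conversion itself is mechanical. Condensed from the
fs/buffer.c hunk below, each call site changes shape like this (a
sketch, not a complete function):

	/* Before: the lock lived in the address_space */
	spin_lock_irqsave(&mapping->tree_lock, flags);
	radix_tree_tag_set(&mapping->page_tree, page_index(page),
			   PAGECACHE_TAG_DIRTY);
	spin_unlock_irqrestore(&mapping->tree_lock, flags);

	/* After: the lock lives in the radix_tree_root */
	xa_lock_irqsave(&mapping->pages, flags);
	radix_tree_tag_set(&mapping->pages, page_index(page),
			   PAGECACHE_TAG_DIRTY);
	xa_unlock_irqrestore(&mapping->pages, flags);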

Signed-off-by: Matthew Wilcox <[email protected]>
---
Documentation/cgroup-v1/memory.txt              |   2 +-
Documentation/vm/page_migration                 |  14 +--
arch/arm/include/asm/cacheflush.h               |   6 +-
arch/nios2/include/asm/cacheflush.h             |   6 +-
arch/parisc/include/asm/cacheflush.h            |   6 +-
drivers/staging/lustre/lustre/llite/glimpse.c   |   2 +-
drivers/staging/lustre/lustre/mdc/mdc_request.c |   8 +-
fs/afs/write.c                                  |   9 +-
fs/btrfs/compression.c                          |   2 +-
fs/btrfs/extent_io.c                            |  16 +--
fs/btrfs/inode.c                                |   2 +-
fs/buffer.c                                     |  13 ++-
fs/cifs/file.c                                  |   9 +-
fs/dax.c                                        | 123 ++++++++++++------------
fs/f2fs/data.c                                  |   6 +-
fs/f2fs/dir.c                                   |   6 +-
fs/f2fs/inline.c                                |   6 +-
fs/f2fs/node.c                                  |   8 +-
fs/fs-writeback.c                               |  20 ++--
fs/inode.c                                      |  11 +--
fs/nilfs2/btnode.c                              |  20 ++--
fs/nilfs2/page.c                                |  22 ++---
include/linux/backing-dev.h                     |  12 +--
include/linux/fs.h                              |  17 ++--
include/linux/mm.h                              |   2 +-
include/linux/pagemap.h                         |   4 +-
mm/filemap.c                                    |  84 ++++++++--------
mm/huge_memory.c                                |  10 +-
mm/khugepaged.c                                 |  49 +++++-----
mm/memcontrol.c                                 |   4 +-
mm/migrate.c                                    |  32 +++---
mm/page-writeback.c                             |  42 ++++----
mm/readahead.c                                  |   2 +-
mm/rmap.c                                       |   4 +-
mm/shmem.c                                      |  60 ++++++------
mm/swap_state.c                                 |  17 ++--
mm/truncate.c                                   |  22 ++---
mm/vmscan.c                                     |  12 +--
mm/workingset.c                                 |  22 ++---
39 files changed, 344 insertions(+), 368 deletions(-)

diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
index cefb63639070..f0ba3fc6f2d8 100644
--- a/Documentation/cgroup-v1/memory.txt
+++ b/Documentation/cgroup-v1/memory.txt
@@ -262,7 +262,7 @@ When oom event notifier is registered, event will be delivered.
2.6 Locking

lock_page_cgroup()/unlock_page_cgroup() should not be called under
- mapping->tree_lock.
+ the mapping's xa_lock.

Other lock order is following:
PG_locked.
diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration
index 0478ae2ad44a..faf849596a85 100644
--- a/Documentation/vm/page_migration
+++ b/Documentation/vm/page_migration
@@ -90,7 +90,7 @@ Steps:

1. Lock the page to be migrated

-2. Insure that writeback is complete.
+2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
this (not yet uptodate) page immediately lock while the move is in progress.
@@ -100,8 +100,8 @@ Steps:
mapcount is not zero then we do not migrate the page. All user space
processes that attempt to access the page will now wait on the page lock.

-5. The radix tree lock is taken. This will cause all processes trying
- to access the page via the mapping to block on the radix tree spinlock.
+5. The address space xa_lock is taken. This will cause all processes trying
+ to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain
otherwise we know that we are the only one referencing this page.
@@ -114,12 +114,12 @@ Steps:

9. The radix tree is changed to point to the new page.

-10. The reference count of the old page is dropped because the radix tree
+10. The reference count of the old page is dropped because the address space
reference is gone. A reference to the new page is established because
- the new page is referenced to by the radix tree.
+ the new page is referenced by the address space.

-11. The radix tree lock is dropped. With that lookups in the mapping
- become possible again. Processes will move from spinning on the tree_lock
+11. The address space xa_lock is dropped. With that lookups in the mapping
+ become possible again. Processes will move from spinning on the xa_lock
to sleeping on the locked new page.

12. The page contents are copied to the new page.
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 74504b154256..f4ead9a74b7d 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -318,10 +318,8 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);

-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)

#define flush_icache_user_range(vma,page,addr,len) \
flush_dcache_page(page)
diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 55e383c173f7..7a6eda381964 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -46,9 +46,7 @@ extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
extern void flush_dcache_range(unsigned long start, unsigned long end);
extern void invalidate_dcache_range(unsigned long start, unsigned long end);

-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)

#endif /* _ASM_NIOS2_CACHEFLUSH_H */
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 3742508cc534..b772dd320118 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -54,10 +54,8 @@ void invalidate_kernel_vmap_range(void *vaddr, int size);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *page);

-#define flush_dcache_mmap_lock(mapping) \
- spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
- spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->pages)
+#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->pages)

#define flush_icache_page(vma,page) do { \
flush_kernel_dcache_page(page); \
diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index c43ac574274c..5f2843da911c 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -69,7 +69,7 @@ blkcnt_t dirty_cnt(struct inode *inode)
void *results[1];

if (inode->i_mapping)
- cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->page_tree,
+ cnt += radix_tree_gang_lookup_tag(&inode->i_mapping->pages,
results, 0, 1,
PAGECACHE_TAG_DIRTY);
if (cnt == 0 && atomic_read(&vob->vob_mmap_cnt) > 0)
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 03e55bca4ada..45dcf9f958d4 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -937,14 +937,14 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
struct page *page;
int found;

- spin_lock_irq(&mapping->tree_lock);
- found = radix_tree_gang_lookup(&mapping->page_tree,
+ xa_lock_irq(&mapping->pages);
+ found = radix_tree_gang_lookup(&mapping->pages,
(void **)&page, offset, 1);
if (found > 0 && !radix_tree_exceptional_entry(page)) {
struct lu_dirpage *dp;

get_page(page);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* In contrast to find_lock_page() we are sure that directory
* page cannot be truncated (while DLM lock is held) and,
@@ -992,7 +992,7 @@ static struct page *mdc_page_locate(struct address_space *mapping, __u64 *hash,
page = ERR_PTR(-EIO);
}
} else {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
page = NULL;
}
return page;
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 9370e2feb999..603d2ce48dbb 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -570,10 +570,11 @@ static int afs_writepages_region(struct address_space *mapping,

_debug("wback %lx", page->index);

- /* at this point we hold neither mapping->tree_lock nor lock on
- * the page itself: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled back from
- * swapper_space to tmpfs file mapping
+ /*
+ * at this point we hold neither the xa_lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
*/
ret = lock_page_killable(page);
if (ret < 0) {
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 5982c8a71f02..280717b26224 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -450,7 +450,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
break;

rcu_read_lock();
- page = radix_tree_lookup(&mapping->page_tree, pg_index);
+ page = radix_tree_lookup(&mapping->pages, pg_index);
rcu_read_unlock();
if (page && !radix_tree_exceptional_entry(page)) {
misses++;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 012d63870b99..22948f4febe7 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3966,11 +3966,11 @@ static int extent_write_cache_pages(struct address_space *mapping,

done_index = page->index;
/*
- * At this point we hold neither mapping->tree_lock nor
- * lock on the page itself: the page may be truncated or
- * invalidated (changing page->mapping to NULL), or even
- * swizzled back from swapper_space to tmpfs file
- * mapping
+ * At this point we hold neither the xa_lock nor
+ * the page lock: the page may be truncated or
+ * invalidated (changing page->mapping to NULL),
+ * or even swizzled back from swapper_space to
+ * tmpfs file mapping
*/
if (!trylock_page(page)) {
flush_fn(data);
@@ -5196,13 +5196,13 @@ void clear_extent_buffer_dirty(struct extent_buffer *eb)
WARN_ON(!PagePrivate(page));

clear_page_dirty_for_io(page);
- spin_lock_irq(&page->mapping->tree_lock);
+ xa_lock_irq(&page->mapping->pages);
if (!PageDirty(page)) {
- radix_tree_tag_clear(&page->mapping->page_tree,
+ radix_tree_tag_clear(&page->mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&page->mapping->tree_lock);
+ xa_unlock_irq(&page->mapping->pages);
ClearPageError(page);
unlock_page(page);
}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e1a7f3cb5be9..712ae1dd572b 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7543,7 +7543,7 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,

bool btrfs_page_exists_in_range(struct inode *inode, loff_t start, loff_t end)
{
- struct radix_tree_root *root = &inode->i_mapping->page_tree;
+ struct radix_tree_root *root = &inode->i_mapping->pages;
bool found = false;
void **pagep = NULL;
struct page *page = NULL;
diff --git a/fs/buffer.c b/fs/buffer.c
index fd894c2ae284..1a6ae530156b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -192,10 +192,9 @@ EXPORT_SYMBOL(end_buffer_write_sync);
* we get exclusion from try_to_free_buffers with the blockdev mapping's
* private_lock.
*
- * Hack idea: for the blockdev mapping, i_bufferlist_lock contention
+ * Hack idea: for the blockdev mapping, private_lock contention
* may be quite high. This code could TryLock the page, and if that
- * succeeds, there is no need to take private_lock. (But if
- * private_lock is contended then so is mapping->tree_lock).
+ * succeeds, there is no need to take private_lock.
*/
static struct buffer_head *
__find_get_block_slow(struct block_device *bdev, sector_t block)
@@ -606,14 +605,14 @@ void __set_page_dirty(struct page *page, struct address_space *mapping,
{
unsigned long flags;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
if (page->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
+ radix_tree_tag_set(&mapping->pages,
page_index(page), PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
}
EXPORT_SYMBOL_GPL(__set_page_dirty);

@@ -1102,7 +1101,7 @@ __getblk_slow(struct block_device *bdev, sector_t block,
* inode list.
*
* mark_buffer_dirty() is atomic. It takes bh->b_page->mapping->private_lock,
- * mapping->tree_lock and mapping->host->i_lock.
+ * the mapping's xa_lock and mapping->host->i_lock.
*/
void mark_buffer_dirty(struct buffer_head *bh)
{
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index df9f682708c6..ca724217cc38 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1987,11 +1987,10 @@ wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
for (i = 0; i < found_pages; i++) {
page = wdata->pages[i];
/*
- * At this point we hold neither mapping->tree_lock nor
- * lock on the page itself: the page may be truncated or
- * invalidated (changing page->mapping to NULL), or even
- * swizzled back from swapper_space to tmpfs file
- * mapping
+ * At this point we hold neither the xa_lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
*/

if (nr_pages == 0)
diff --git a/fs/dax.c b/fs/dax.c
index 95981591977a..0f3844bcf881 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -158,11 +158,9 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
}

/*
- * We do not necessarily hold the mapping->tree_lock when we call this
- * function so it is possible that 'entry' is no longer a valid item in the
- * radix tree. This is okay because all we really need to do is to find the
- * correct waitqueue where tasks might be waiting for that old 'entry' and
- * wake them.
+ * @entry may no longer be the entry at the index in the mapping.
+ * The important information it's conveying is whether the entry at
+ * this index used to be a PMD entry.
*/
static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
pgoff_t index, void *entry, bool wake_all)
@@ -174,7 +172,7 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,

/*
* Checking for locked entry and prepare_to_wait_exclusive() happens
- * under mapping->tree_lock, ditto for entry handling in our callers.
+ * under xa_lock, ditto for entry handling in our callers.
* So at this point all tasks that could have seen our entry locked
* must be in the waitqueue and the following check will see them.
*/
@@ -183,41 +181,38 @@ static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
}

/*
- * Check whether the given slot is locked. The function must be called with
- * mapping->tree_lock held
+ * Check whether the given slot is locked. Must be called with xa_lock held.
*/
static inline int slot_locked(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);
return entry & RADIX_DAX_ENTRY_LOCK;
}

/*
- * Mark the given slot is locked. The function must be called with
- * mapping->tree_lock held
+ * Mark the given slot as locked. Must be called with xa_lock held.
*/
static inline void *lock_slot(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);

entry |= RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
+ radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
return (void *)entry;
}

/*
- * Mark the given slot is unlocked. The function must be called with
- * mapping->tree_lock held
+ * Mark the given slot as unlocked. Must be called with xa_lock held.
*/
static inline void *unlock_slot(struct address_space *mapping, void **slot)
{
unsigned long entry = (unsigned long)
- radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ radix_tree_deref_slot_protected(slot, &mapping->pages.xa_lock);

entry &= ~(unsigned long)RADIX_DAX_ENTRY_LOCK;
- radix_tree_replace_slot(&mapping->page_tree, slot, (void *)entry);
+ radix_tree_replace_slot(&mapping->pages, slot, (void *)entry);
return (void *)entry;
}

@@ -228,7 +223,7 @@ static inline void *unlock_slot(struct address_space *mapping, void **slot)
* put_locked_mapping_entry() when he locked the entry and now wants to
* unlock it.
*
- * The function must be called with mapping->tree_lock held.
+ * Must be called with xa_lock held.
*/
static void *get_unlocked_mapping_entry(struct address_space *mapping,
pgoff_t index, void ***slotp)
@@ -241,7 +236,7 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
ewait.wait.func = wake_exceptional_entry_func;

for (;;) {
- entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
+ entry = __radix_tree_lookup(&mapping->pages, index, NULL,
&slot);
if (!entry ||
WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
@@ -254,10 +249,10 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
prepare_to_wait_exclusive(wq, &ewait.wait,
TASK_UNINTERRUPTIBLE);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
schedule();
finish_wait(wq, &ewait.wait);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
}

@@ -266,15 +261,15 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
{
void *entry, **slot;

- spin_lock_irq(&mapping->tree_lock);
- entry = __radix_tree_lookup(&mapping->page_tree, index, NULL, &slot);
+ xa_lock_irq(&mapping->pages);
+ entry = __radix_tree_lookup(&mapping->pages, index, NULL, &slot);
if (WARN_ON_ONCE(!entry || !radix_tree_exceptional_entry(entry) ||
!slot_locked(mapping, slot))) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return;
}
unlock_slot(mapping, slot);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
dax_wake_mapping_entry_waiter(mapping, index, entry, false);
}

@@ -331,7 +326,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
void *entry, **slot;

restart:
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);

if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
@@ -363,12 +358,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
if (pmd_downgrade) {
/*
* Make sure 'entry' remains valid while we drop
- * mapping->tree_lock.
+ * xa_lock.
*/
entry = lock_slot(mapping, slot);
}

- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* Besides huge zero pages the only other thing that gets
* downgraded are empty entries which don't need to be
@@ -385,26 +380,26 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
put_locked_mapping_entry(mapping, index);
return ERR_PTR(err);
}
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

if (!entry) {
/*
- * We needed to drop the page_tree lock while calling
+ * We needed to drop the xa_lock while calling
* radix_tree_preload() and we didn't have an entry to
* lock. See if another thread inserted an entry at
* our index during this time.
*/
- entry = __radix_tree_lookup(&mapping->page_tree, index,
+ entry = __radix_tree_lookup(&mapping->pages, index,
NULL, &slot);
if (entry) {
radix_tree_preload_end();
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
goto restart;
}
}

if (pmd_downgrade) {
- radix_tree_delete(&mapping->page_tree, index);
+ radix_tree_delete(&mapping->pages, index);
mapping->nrexceptional--;
dax_wake_mapping_entry_waiter(mapping, index, entry,
true);
@@ -412,11 +407,11 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,

entry = dax_radix_locked_entry(0, size_flag | RADIX_DAX_EMPTY);

- err = __radix_tree_insert(&mapping->page_tree, index,
+ err = __radix_tree_insert(&mapping->pages, index,
dax_radix_order(entry), entry);
radix_tree_preload_end();
if (err) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/*
* Our insertion of a DAX entry failed, most likely
* because we were inserting a PMD entry and it
@@ -429,12 +424,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
}
/* Good, we have inserted empty locked entry into the tree. */
mapping->nrexceptional++;
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}
entry = lock_slot(mapping, slot);
out_unlock:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}

@@ -443,22 +438,22 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
{
int ret = 0;
void *entry;
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, NULL);
if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
goto out;
if (!trunc &&
- (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
- radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE)))
+ (radix_tree_tag_get(pages, index, PAGECACHE_TAG_DIRTY) ||
+ radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE)))
goto out;
- radix_tree_delete(page_tree, index);
+ radix_tree_delete(pages, index);
mapping->nrexceptional--;
ret = 1;
out:
put_unlocked_mapping_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return ret;
}
/*
@@ -528,7 +523,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
void *entry, sector_t sector,
unsigned long flags, bool dirty)
{
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;
void *new_entry;
pgoff_t index = vmf->pgoff;

@@ -546,7 +541,7 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
PAGE_SIZE, 0);
}

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
new_entry = dax_radix_locked_entry(sector, flags);

if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
@@ -562,17 +557,17 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
void **slot;
void *ret;

- ret = __radix_tree_lookup(page_tree, index, &node, &slot);
+ ret = __radix_tree_lookup(pages, index, &node, &slot);
WARN_ON_ONCE(ret != entry);
- __radix_tree_replace(page_tree, node, slot,
+ __radix_tree_replace(pages, node, slot,
new_entry, NULL);
entry = new_entry;
}

if (dirty)
- radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);
+ radix_tree_tag_set(pages, index, PAGECACHE_TAG_DIRTY);

- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return entry;
}

@@ -662,7 +657,7 @@ static int dax_writeback_one(struct block_device *bdev,
struct dax_device *dax_dev, struct address_space *mapping,
pgoff_t index, void *entry)
{
- struct radix_tree_root *page_tree = &mapping->page_tree;
+ struct radix_tree_root *pages = &mapping->pages;
void *entry2, **slot, *kaddr;
long ret = 0, id;
sector_t sector;
@@ -677,7 +672,7 @@ static int dax_writeback_one(struct block_device *bdev,
if (WARN_ON(!radix_tree_exceptional_entry(entry)))
return -EIO;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
/* Entry got punched out / reallocated? */
if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
@@ -696,7 +691,7 @@ static int dax_writeback_one(struct block_device *bdev,
}

/* Another fsync thread may have already written back this entry */
- if (!radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE))
+ if (!radix_tree_tag_get(pages, index, PAGECACHE_TAG_TOWRITE))
goto put_unlocked;
/* Lock the entry to serialize with page faults */
entry = lock_slot(mapping, slot);
@@ -704,11 +699,11 @@ static int dax_writeback_one(struct block_device *bdev,
* We can clear the tag now but we have to be careful so that concurrent
* dax_writeback_one() calls for the same index cannot finish before we
* actually flush the caches. This is achieved as the calls will look
- * at the entry only under tree_lock and once they do that they will
+ * at the entry only under xa_lock and once they do that they will
* see the entry locked and wait for it to unlock.
*/
- radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_TOWRITE);
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_tag_clear(pages, index, PAGECACHE_TAG_TOWRITE);
+ xa_unlock_irq(&mapping->pages);

/*
* Even if dax_writeback_mapping_range() was given a wbc->range_start
@@ -726,7 +721,7 @@ static int dax_writeback_one(struct block_device *bdev,
goto dax_unlock;

/*
- * dax_direct_access() may sleep, so cannot hold tree_lock over
+ * dax_direct_access() may sleep, so cannot hold xa_lock over
* its invocation.
*/
ret = dax_direct_access(dax_dev, pgoff, size / PAGE_SIZE, &kaddr, &pfn);
@@ -746,9 +741,9 @@ static int dax_writeback_one(struct block_device *bdev,
* the pfn mappings are writeprotected and fault waits for mapping
* entry lock.
*/
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_clear(page_tree, index, PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_clear(pages, index, PAGECACHE_TAG_DIRTY);
+ xa_unlock_irq(&mapping->pages);
trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
dax_unlock:
dax_read_unlock(id);
@@ -757,7 +752,7 @@ static int dax_writeback_one(struct block_device *bdev,

put_unlocked:
put_unlocked_mapping_entry(mapping, index, entry2);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return ret;
}

@@ -1528,21 +1523,21 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
pgoff_t index = vmf->pgoff;
int vmf_ret, error;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);
/* Did we race with someone splitting entry or so? */
if (!entry ||
(pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
(pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
put_unlocked_mapping_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
VM_FAULT_NOPAGE);
return VM_FAULT_NOPAGE;
}
- radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
+ radix_tree_tag_set(&mapping->pages, index, PAGECACHE_TAG_DIRTY);
entry = lock_slot(mapping, slot);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
switch (pe_size) {
case PE_SIZE_PTE:
error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
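
The fs/dax.c hunks above all follow one shape: look the entry up and pin
it under xa_lock_irq(), drop the lock before doing anything that might
sleep, then retake it to unlock the entry and wake waiters.  A condensed
sketch of that shape, using only helpers visible in this diff (the real
code re-looks-up the slot after retaking the lock, and handles errors):

	void *entry, **slot;

	xa_lock_irq(&mapping->pages);
	entry = get_unlocked_mapping_entry(mapping, index, &slot);
	entry = lock_slot(mapping, slot);	/* pin against other threads */
	xa_unlock_irq(&mapping->pages);		/* drop before sleeping */

	/* ... dax_direct_access(), cache flushing; may sleep ... */

	xa_lock_irq(&mapping->pages);
	unlock_slot(mapping, slot);		/* slot re-lookup elided */
	xa_unlock_irq(&mapping->pages);
	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
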
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 516fa0d3ff9c..8f51ac47b77f 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2172,12 +2172,12 @@ void f2fs_set_page_dirty_nobuffers(struct page *page)
SetPageDirty(page);
spin_unlock(&mapping->private_lock);

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
WARN_ON_ONCE(!PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
+ radix_tree_tag_set(&mapping->pages,
page_index(page), PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);

__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 2d98d877c09d..b5515ea6bb2f 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -739,10 +739,10 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,

if (bit_pos == NR_DENTRY_IN_BLOCK &&
!truncate_hole(dir, page->index, page->index + 1)) {
- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree, page_index(page),
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

clear_page_dirty_for_io(page);
ClearPagePrivate(page);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 90e38d8ea688..7858b8e15f33 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -226,10 +226,10 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
kunmap_atomic(src_addr);
set_page_dirty(dn.inode_page);

- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree, page_index(page),
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

set_inode_flag(inode, FI_APPEND_WRITE);
set_inode_flag(inode, FI_DATA_EXIST);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index d3322752426f..6b64a3009d55 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -91,11 +91,11 @@ static void clear_node_page_dirty(struct page *page)
unsigned int long flags;

if (PageDirty(page)) {
- spin_lock_irqsave(&mapping->tree_lock, flags);
- radix_tree_tag_clear(&mapping->page_tree,
+ xa_lock_irqsave(&mapping->pages, flags);
+ radix_tree_tag_clear(&mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

clear_page_dirty_for_io(page);
dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
@@ -1143,7 +1143,7 @@ void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
f2fs_bug_on(sbi, check_nid_range(sbi, nid));

rcu_read_lock();
- apage = radix_tree_lookup(&NODE_MAPPING(sbi)->page_tree, nid);
+ apage = radix_tree_lookup(&NODE_MAPPING(sbi)->pages, nid);
rcu_read_unlock();
if (apage)
return;
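
Every f2fs call site above is the same mechanical substitution; reduced
to a before/after pair (flags, mapping and page exactly as in the hunks),
the entire conversion is:

	unsigned long flags;

	/* before */
	spin_lock_irqsave(&mapping->tree_lock, flags);
	radix_tree_tag_clear(&mapping->page_tree, page_index(page),
				PAGECACHE_TAG_DIRTY);
	spin_unlock_irqrestore(&mapping->tree_lock, flags);

	/* after */
	xa_lock_irqsave(&mapping->pages, flags);
	radix_tree_tag_clear(&mapping->pages, page_index(page),
				PAGECACHE_TAG_DIRTY);
	xa_unlock_irqrestore(&mapping->pages, flags);
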
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index cea4836385b7..e2c1ca667d9a 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -347,9 +347,9 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
* By the time control reaches here, RCU grace period has passed
* since I_WB_SWITCH assertion and all wb stat update transactions
* between unlocked_inode_to_wb_begin/end() are guaranteed to be
- * synchronizing against mapping->tree_lock.
+ * synchronizing against xa_lock.
*
- * Grabbing old_wb->list_lock, inode->i_lock and mapping->tree_lock
+ * Grabbing old_wb->list_lock, inode->i_lock and xa_lock
* gives us exclusion against all wb related operations on @inode
* including IO list manipulations and stat updates.
*/
@@ -361,7 +361,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
}
spin_lock(&inode->i_lock);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

/*
* Once I_FREEING is visible under i_lock, the eviction path owns
@@ -373,22 +373,22 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
/*
* Count and transfer stats. Note that PAGECACHE_TAG_DIRTY points
* to possibly dirty pages while PAGECACHE_TAG_WRITEBACK points to
- * pages actually under underwriteback.
+ * pages actually under writeback.
*/
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
PAGECACHE_TAG_DIRTY) {
struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (likely(page) && PageDirty(page)) {
dec_wb_stat(old_wb, WB_RECLAIMABLE);
inc_wb_stat(new_wb, WB_RECLAIMABLE);
}
}

- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, 0,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, 0,
PAGECACHE_TAG_WRITEBACK) {
struct page *page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (likely(page)) {
WARN_ON_ONCE(!PageWriteback(page));
dec_wb_stat(old_wb, WB_WRITEBACK);
@@ -430,7 +430,7 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
*/
smp_store_release(&inode->i_state, inode->i_state & ~I_WB_SWITCH);

- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
spin_unlock(&inode->i_lock);
spin_unlock(&new_wb->list_lock);
spin_unlock(&old_wb->list_lock);
@@ -507,7 +507,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
/*
* In addition to synchronizing among switchers, I_WB_SWITCH tells
* the RCU protected stat update paths to grab the mapping's
- * tree_lock so that stat transfer can synchronize against them.
+ * xa_lock so that stat transfer can synchronize against them.
* Let's continue after I_WB_SWITCH is guaranteed to be visible.
*/
call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
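
For inode_switch_wbs_work_fn() the point worth noting is the lock
ordering, which this patch preserves with xa_lock innermost.  In outline
(the two list_locks are actually taken in address order to avoid ABBA
deadlock, as the hunk shows):

	spin_lock(&old_wb->list_lock);
	spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);
	spin_lock(&inode->i_lock);
	xa_lock_irq(&mapping->pages);	/* innermost: covers the tag walks */

	/* ... transfer WB_RECLAIMABLE/WB_WRITEBACK stats, switch i_wb ... */

	xa_unlock_irq(&mapping->pages);
	spin_unlock(&inode->i_lock);
	spin_unlock(&new_wb->list_lock);
	spin_unlock(&old_wb->list_lock);
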
diff --git a/fs/inode.c b/fs/inode.c
index 03102d6ef044..c7b00573c10d 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -348,8 +348,7 @@ EXPORT_SYMBOL(inc_nlink);
void address_space_init_once(struct address_space *mapping)
{
memset(mapping, 0, sizeof(*mapping));
- INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC | __GFP_ACCOUNT);
- spin_lock_init(&mapping->tree_lock);
+ INIT_RADIX_TREE(&mapping->pages, GFP_ATOMIC | __GFP_ACCOUNT);
init_rwsem(&mapping->i_mmap_rwsem);
INIT_LIST_HEAD(&mapping->private_list);
spin_lock_init(&mapping->private_lock);
@@ -499,14 +498,14 @@ void clear_inode(struct inode *inode)
{
might_sleep();
/*
- * We have to cycle tree_lock here because reclaim can be still in the
+ * We have to cycle the xa_lock here because reclaim can be in the
* process of removing the last page (in __delete_from_page_cache())
- * and we must not free mapping under it.
+ * and we must not free the mapping under it.
*/
- spin_lock_irq(&inode->i_data.tree_lock);
+ xa_lock_irq(&inode->i_data.pages);
BUG_ON(inode->i_data.nrpages);
BUG_ON(inode->i_data.nrexceptional);
- spin_unlock_irq(&inode->i_data.tree_lock);
+ xa_unlock_irq(&inode->i_data.pages);
BUG_ON(!list_empty(&inode->i_data.private_list));
BUG_ON(!(inode->i_state & I_FREEING));
BUG_ON(inode->i_state & I_CLEAR);
diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index c21e0b4454a6..9e2a00207436 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -193,9 +193,9 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
(unsigned long long)oldkey,
(unsigned long long)newkey);

- spin_lock_irq(&btnc->tree_lock);
- err = radix_tree_insert(&btnc->page_tree, newkey, obh->b_page);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_lock_irq(&btnc->pages);
+ err = radix_tree_insert(&btnc->pages, newkey, obh->b_page);
+ xa_unlock_irq(&btnc->pages);
/*
* Note: page->index will not change to newkey until
* nilfs_btnode_commit_change_key() will be called.
@@ -251,11 +251,11 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
(unsigned long long)newkey);
mark_buffer_dirty(obh);

- spin_lock_irq(&btnc->tree_lock);
- radix_tree_delete(&btnc->page_tree, oldkey);
- radix_tree_tag_set(&btnc->page_tree, newkey,
+ xa_lock_irq(&btnc->pages);
+ radix_tree_delete(&btnc->pages, oldkey);
+ radix_tree_tag_set(&btnc->pages, newkey,
PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_unlock_irq(&btnc->pages);

opage->index = obh->b_blocknr = newkey;
unlock_page(opage);
@@ -283,9 +283,9 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
return;

if (nbh == NULL) { /* blocksize == pagesize */
- spin_lock_irq(&btnc->tree_lock);
- radix_tree_delete(&btnc->page_tree, newkey);
- spin_unlock_irq(&btnc->tree_lock);
+ xa_lock_irq(&btnc->pages);
+ radix_tree_delete(&btnc->pages, newkey);
+ xa_unlock_irq(&btnc->pages);
unlock_page(ctxt->bh->b_page);
} else
brelse(nbh);
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 68241512d7c1..1c6703efde9e 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -331,15 +331,15 @@ void nilfs_copy_back_pages(struct address_space *dmap,
struct page *page2;

/* move the page to the destination cache */
- spin_lock_irq(&smap->tree_lock);
- page2 = radix_tree_delete(&smap->page_tree, offset);
+ xa_lock_irq(&smap->pages);
+ page2 = radix_tree_delete(&smap->pages, offset);
WARN_ON(page2 != page);

smap->nrpages--;
- spin_unlock_irq(&smap->tree_lock);
+ xa_unlock_irq(&smap->pages);

- spin_lock_irq(&dmap->tree_lock);
- err = radix_tree_insert(&dmap->page_tree, offset, page);
+ xa_lock_irq(&dmap->pages);
+ err = radix_tree_insert(&dmap->pages, offset, page);
if (unlikely(err < 0)) {
WARN_ON(err == -EEXIST);
page->mapping = NULL;
@@ -348,11 +348,11 @@ void nilfs_copy_back_pages(struct address_space *dmap,
page->mapping = dmap;
dmap->nrpages++;
if (PageDirty(page))
- radix_tree_tag_set(&dmap->page_tree,
+ radix_tree_tag_set(&dmap->pages,
offset,
PAGECACHE_TAG_DIRTY);
}
- spin_unlock_irq(&dmap->tree_lock);
+ xa_unlock_irq(&dmap->pages);
}
unlock_page(page);
}
@@ -474,15 +474,15 @@ int __nilfs_clear_page_dirty(struct page *page)
struct address_space *mapping = page->mapping;

if (mapping) {
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
if (test_bit(PG_dirty, &page->flags)) {
- radix_tree_tag_clear(&mapping->page_tree,
+ radix_tree_tag_clear(&mapping->pages,
page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return clear_page_dirty_for_io(page);
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return 0;
}
return TestClearPageDirty(page);
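
A detail the nilfs2 hunks make visible: the lock travels with the tree,
so moving a page between two caches takes each mapping's xa_lock in
turn, as two independent critical sections.  Condensed from
nilfs_copy_back_pages():

	xa_lock_irq(&smap->pages);			/* source cache */
	page2 = radix_tree_delete(&smap->pages, offset);
	smap->nrpages--;
	xa_unlock_irq(&smap->pages);

	xa_lock_irq(&dmap->pages);			/* destination cache */
	err = radix_tree_insert(&dmap->pages, offset, page);
	if (!err)
		dmap->nrpages++;
	xa_unlock_irq(&dmap->pages);
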
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 3e4ce54d84ab..3df0d20e23f3 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -329,7 +329,7 @@ static inline bool inode_to_wb_is_valid(struct inode *inode)
* @inode: inode of interest
*
* Returns the wb @inode is currently associated with. The caller must be
- * holding either @inode->i_lock, @inode->i_mapping->tree_lock, or the
+ * holding either @inode->i_lock, @inode->i_mapping->pages.xa_lock, or the
* associated wb's list_lock.
*/
static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
@@ -337,7 +337,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
#ifdef CONFIG_LOCKDEP
WARN_ON_ONCE(debug_locks &&
(!lockdep_is_held(&inode->i_lock) &&
- !lockdep_is_held(&inode->i_mapping->tree_lock) &&
+ !lockdep_is_held(&inode->i_mapping->pages.xa_lock) &&
!lockdep_is_held(&inode->i_wb->list_lock)));
#endif
return inode->i_wb;
@@ -349,7 +349,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
* @lockedp: temp bool output param, to be passed to the end function
*
* The caller wants to access the wb associated with @inode but isn't
- * holding inode->i_lock, mapping->tree_lock or wb->list_lock. This
+ * holding inode->i_lock, mapping->pages.xa_lock or wb->list_lock. This
* function determines the wb associated with @inode and ensures that the
* association doesn't change until the transaction is finished with
* unlocked_inode_to_wb_end().
@@ -370,10 +370,10 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
*lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;

if (unlikely(*lockedp))
- spin_lock_irq(&inode->i_mapping->tree_lock);
+ xa_lock_irq(&inode->i_mapping->pages);

/*
- * Protected by either !I_WB_SWITCH + rcu_read_lock() or tree_lock.
+ * Protected by either !I_WB_SWITCH + rcu_read_lock() or xa_lock.
* inode_to_wb() will bark. Deref directly.
*/
return inode->i_wb;
@@ -387,7 +387,7 @@ unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
{
if (unlikely(locked))
- spin_unlock_irq(&inode->i_mapping->tree_lock);
+ xa_unlock_irq(&inode->i_mapping->pages);

rcu_read_unlock();
}
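
Note that the contract of unlocked_inode_to_wb_begin/end() is unchanged;
only the lock it may take underneath differs.  A caller still looks like
this sketch (the particular stat update is illustrative):

	struct bdi_writeback *wb;
	bool locked;

	wb = unlocked_inode_to_wb_begin(inode, &locked);
	/* the inode->wb association is stable here; if a cgroup writeback
	 * switch is in flight, xa_lock is being held on our behalf */
	inc_wb_stat(wb, WB_RECLAIMABLE);
	unlocked_inode_to_wb_end(inode, locked);
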
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 511fbaabf624..c07169cfb44a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -13,6 +13,7 @@
#include <linux/list_lru.h>
#include <linux/llist.h>
#include <linux/radix-tree.h>
+#include <linux/xarray.h>
#include <linux/rbtree.h>
#include <linux/init.h>
#include <linux/pid.h>
@@ -390,23 +391,21 @@ int pagecache_write_end(struct file *, struct address_space *mapping,

struct address_space {
struct inode *host; /* owner: inode, block_device */
- struct radix_tree_root page_tree; /* radix tree of all pages */
- spinlock_t tree_lock; /* and lock protecting it */
+ struct radix_tree_root pages; /* cached pages */
+ gfp_t gfp_mask; /* for allocating pages */
atomic_t i_mmap_writable;/* count VM_SHARED mappings */
struct rb_root_cached i_mmap; /* tree of private and shared mappings */
struct rw_semaphore i_mmap_rwsem; /* protect tree, count, list */
- /* Protected by tree_lock together with the radix tree */
+ /* Protected by pages.xa_lock */
unsigned long nrpages; /* number of total pages */
- /* number of shadow or DAX exceptional entries */
- unsigned long nrexceptional;
+ unsigned long nrexceptional; /* shadow or DAX entries */
pgoff_t writeback_index;/* writeback starts here */
const struct address_space_operations *a_ops; /* methods */
unsigned long flags; /* error bits */
+ errseq_t wb_err;
spinlock_t private_lock; /* for use by the address_space */
- gfp_t gfp_mask; /* implicit gfp mask for allocations */
- struct list_head private_list; /* for use by the address_space */
+ struct list_head private_list; /* ditto */
void *private_data; /* ditto */
- errseq_t wb_err;
} __attribute__((aligned(sizeof(long)))) __randomize_layout;
/*
* On most architectures that alignment is already the case; but
@@ -1977,7 +1976,7 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
*
* I_WB_SWITCH Cgroup bdi_writeback switching in progress. Used to
* synchronize competing switching instances and to tell
- * wb stat updates to grab mapping->tree_lock. See
+ * wb stat updates to grab mapping->pages.xa_lock. See
* inode_switch_wb_work_fn() for details.
*
* I_OVL_INUSE Used by overlayfs to get exclusive ownership on upper
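
The address_space shuffle works because the lock now lives inside the
radix_tree_root itself.  Modulo lockdep annotations, the xa_lock wrappers
used throughout this patch are just the following (a sketch, assuming the
definitions introduced with the XArray earlier in this series):

	#define xa_lock_irq(xa)		spin_lock_irq(&(xa)->xa_lock)
	#define xa_unlock_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
	#define xa_lock_irqsave(xa, flags) \
				spin_lock_irqsave(&(xa)->xa_lock, flags)
	#define xa_unlock_irqrestore(xa, flags) \
				spin_unlock_irqrestore(&(xa)->xa_lock, flags)
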
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 96b1932380e9..fe1ee4313add 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -738,7 +738,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
* refcount. The each user mapping also has a reference to the page.
*
* The pagecache pages are stored in a per-mapping radix tree, which is
- * rooted at mapping->page_tree, and indexed by offset.
+ * rooted at mapping->pages, and indexed by offset.
* Where 2.4 and early 2.6 kernels kept dirty/clean pages in per-address_space
* lists, we instead now tag pages as dirty/writeback in the radix tree.
*
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34ce3ebf97d5..80a6149152d4 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -144,7 +144,7 @@ void release_pages(struct page **pages, int nr);
* 3. check the page is still in pagecache (if no, goto 1)
*
* Remove-side that cares about stability of _refcount (eg. reclaim) has the
- * following (with tree_lock held for write):
+ * following (with pages.xa_lock held):
* A. atomically check refcount is correct and set it to 0 (atomic_cmpxchg)
* B. remove page from pagecache
* C. free the page
@@ -157,7 +157,7 @@ void release_pages(struct page **pages, int nr);
*
* It is possible that between 1 and 2, the page is removed then the exact same
* page is inserted into the same position in pagecache. That's OK: the
- * old find_get_page using tree_lock could equally have run before or after
+ * old find_get_page using a lock could equally have run before or after
* such a re-insertion, depending on order that locks are granted.
*
* Lookups racing against pagecache insertion isn't a big problem: either 1
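
Spelled out, the lockless lookup protocol this comment describes is the
loop that find_get_entry() (below, in mm/filemap.c) implements.  A
bare-bones sketch, assuming the slot is populated with an ordinary page
(the real code also handles NULL, exceptional entries and compound
heads):

	struct page *page;

	rcu_read_lock();
repeat:
	page = radix_tree_deref_slot(slot);	/* 1. find the page */
	if (!page_cache_get_speculative(page))	/* 2. take a reference, */
		goto repeat;			/*    unless it hit zero */
	if (unlikely(page != radix_tree_deref_slot(slot))) {
		put_page(page);			/* 3. page moved; retry */
		goto repeat;
	}
	rcu_read_unlock();
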
diff --git a/mm/filemap.c b/mm/filemap.c
index ee83baaf855d..e9172cefeb36 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -67,7 +67,7 @@
* ->i_mmap_rwsem (truncate_pagecache)
* ->private_lock (__free_pte->__set_page_dirty_buffers)
* ->swap_lock (exclusive_swap_page, others)
- * ->mapping->tree_lock
+ * ->mapping->pages.xa_lock
*
* ->i_mutex
* ->i_mmap_rwsem (truncate->unmap_mapping_range)
@@ -75,7 +75,7 @@
* ->mmap_sem
* ->i_mmap_rwsem
* ->page_table_lock or pte_lock (various, mainly in memory.c)
- * ->mapping->tree_lock (arch-dependent flush_dcache_mmap_lock)
+ * ->mapping->pages.xa_lock (arch-dependent flush_dcache_mmap_lock)
*
* ->mmap_sem
* ->lock_page (access_process_vm)
@@ -85,7 +85,7 @@
*
* bdi->wb.list_lock
* sb_lock (fs/fs-writeback.c)
- * ->mapping->tree_lock (__sync_single_inode)
+ * ->mapping->pages.xa_lock (__sync_single_inode)
*
* ->i_mmap_rwsem
* ->anon_vma.lock (vma_adjust)
@@ -96,11 +96,11 @@
* ->page_table_lock or pte_lock
* ->swap_lock (try_to_unmap_one)
* ->private_lock (try_to_unmap_one)
- * ->tree_lock (try_to_unmap_one)
+ * ->pages.xa_lock (try_to_unmap_one)
* ->zone_lru_lock(zone) (follow_page->mark_page_accessed)
* ->zone_lru_lock(zone) (check_pte_range->isolate_lru_page)
* ->private_lock (page_remove_rmap->set_page_dirty)
- * ->tree_lock (page_remove_rmap->set_page_dirty)
+ * ->pages.xa_lock (page_remove_rmap->set_page_dirty)
* bdi.wb->list_lock (page_remove_rmap->set_page_dirty)
* ->inode->i_lock (page_remove_rmap->set_page_dirty)
* ->memcg->move_lock (page_remove_rmap->lock_page_memcg)
@@ -119,14 +119,15 @@ static int page_cache_tree_insert(struct address_space *mapping,
void **slot;
int error;

- error = __radix_tree_create(&mapping->page_tree, page->index, 0,
+ error = __radix_tree_create(&mapping->pages, page->index, 0,
&node, &slot);
if (error)
return error;
if (*slot) {
void *p;

- p = radix_tree_deref_slot_protected(slot, &mapping->tree_lock);
+ p = radix_tree_deref_slot_protected(slot,
+ &mapping->pages.xa_lock);
if (!radix_tree_exceptional_entry(p))
return -EEXIST;

@@ -134,7 +135,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
if (shadowp)
*shadowp = p;
}
- __radix_tree_replace(&mapping->page_tree, node, slot, page,
+ __radix_tree_replace(&mapping->pages, node, slot, page,
workingset_lookup_update(mapping));
mapping->nrpages++;
return 0;
@@ -156,13 +157,13 @@ static void page_cache_tree_delete(struct address_space *mapping,
struct radix_tree_node *node;
void **slot;

- __radix_tree_lookup(&mapping->page_tree, page->index + i,
+ __radix_tree_lookup(&mapping->pages, page->index + i,
&node, &slot);

VM_BUG_ON_PAGE(!node && nr != 1, page);

- radix_tree_clear_tags(&mapping->page_tree, node, slot);
- __radix_tree_replace(&mapping->page_tree, node, slot, shadow,
+ radix_tree_clear_tags(&mapping->pages, node, slot);
+ __radix_tree_replace(&mapping->pages, node, slot, shadow,
workingset_lookup_update(mapping));
}

@@ -254,7 +255,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
/*
* Delete a page from the page cache and free it. Caller has to make
* sure the page is locked and that nobody else uses it - or that usage
- * is safe. The caller must hold the mapping's tree_lock.
+ * is safe. The caller must hold the xa_lock.
*/
void __delete_from_page_cache(struct page *page, void *shadow)
{
@@ -297,9 +298,9 @@ void delete_from_page_cache(struct page *page)
unsigned long flags;

BUG_ON(!PageLocked(page));
- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
__delete_from_page_cache(page, NULL);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

page_cache_free_page(mapping, page);
}
@@ -310,14 +311,14 @@ EXPORT_SYMBOL(delete_from_page_cache);
* @mapping: the mapping to which pages belong
* @pvec: pagevec with pages to delete
*
- * The function walks over mapping->page_tree and removes pages passed in @pvec
- * from the radix tree. The function expects @pvec to be sorted by page index.
- * It tolerates holes in @pvec (radix tree entries at those indices are not
+ * The function walks over mapping->pages and removes pages passed in @pvec
+ * from the mapping. The function expects @pvec to be sorted by page index.
+ * It tolerates holes in @pvec (mapping entries at those indices are not
* modified). The function expects only THP head pages to be present in the
- * @pvec and takes care to delete all corresponding tail pages from the radix
- * tree as well.
+ * @pvec and takes care to delete all corresponding tail pages from the
+ * mapping as well.
*
- * The function expects mapping->tree_lock to be held.
+ * The function expects xa_lock to be held.
*/
static void
page_cache_tree_delete_batch(struct address_space *mapping,
@@ -331,11 +332,11 @@ page_cache_tree_delete_batch(struct address_space *mapping,
pgoff_t start;

start = pvec->pages[0]->index;
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (i >= pagevec_count(pvec) && !tail_pages)
break;
page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (radix_tree_exceptional_entry(page))
continue;
if (!tail_pages) {
@@ -358,8 +359,8 @@ page_cache_tree_delete_batch(struct address_space *mapping,
} else {
tail_pages--;
}
- radix_tree_clear_tags(&mapping->page_tree, iter.node, slot);
- __radix_tree_replace(&mapping->page_tree, iter.node, slot, NULL,
+ radix_tree_clear_tags(&mapping->pages, iter.node, slot);
+ __radix_tree_replace(&mapping->pages, iter.node, slot, NULL,
workingset_lookup_update(mapping));
total_pages++;
}
@@ -375,14 +376,14 @@ void delete_from_page_cache_batch(struct address_space *mapping,
if (!pagevec_count(pvec))
return;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
for (i = 0; i < pagevec_count(pvec); i++) {
trace_mm_filemap_delete_from_page_cache(pvec->pages[i]);

unaccount_page_cache_page(mapping, pvec->pages[i]);
}
page_cache_tree_delete_batch(mapping, pvec);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

for (i = 0; i < pagevec_count(pvec); i++)
page_cache_free_page(mapping, pvec->pages[i]);
@@ -799,7 +800,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
new->mapping = mapping;
new->index = offset;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
__delete_from_page_cache(old, NULL);
error = page_cache_tree_insert(mapping, new, NULL);
BUG_ON(error);
@@ -811,7 +812,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
__inc_node_page_state(new, NR_FILE_PAGES);
if (PageSwapBacked(new))
__inc_node_page_state(new, NR_SHMEM);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
mem_cgroup_migrate(old, new);
radix_tree_preload_end();
if (freepage)
@@ -853,7 +854,7 @@ static int __add_to_page_cache_locked(struct page *page,
page->mapping = mapping;
page->index = offset;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
error = page_cache_tree_insert(mapping, page, shadowp);
radix_tree_preload_end();
if (unlikely(error))
@@ -862,7 +863,7 @@ static int __add_to_page_cache_locked(struct page *page,
/* hugetlb pages do not participate in page cache accounting. */
if (!huge)
__inc_node_page_state(page, NR_FILE_PAGES);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_commit_charge(page, memcg, false, false);
trace_mm_filemap_add_to_page_cache(page);
@@ -870,7 +871,7 @@ static int __add_to_page_cache_locked(struct page *page,
err_insert:
page->mapping = NULL;
/* Leave page->index set: truncation relies upon it */
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
if (!huge)
mem_cgroup_cancel_charge(page, memcg, false);
put_page(page);
@@ -1354,7 +1355,7 @@ pgoff_t page_cache_next_hole(struct address_space *mapping,
for (i = 0; i < max_scan; i++) {
struct page *page;

- page = radix_tree_lookup(&mapping->page_tree, index);
+ page = radix_tree_lookup(&mapping->pages, index);
if (!page || radix_tree_exceptional_entry(page))
break;
index++;
@@ -1395,7 +1396,7 @@ pgoff_t page_cache_prev_hole(struct address_space *mapping,
for (i = 0; i < max_scan; i++) {
struct page *page;

- page = radix_tree_lookup(&mapping->page_tree, index);
+ page = radix_tree_lookup(&mapping->pages, index);
if (!page || radix_tree_exceptional_entry(page))
break;
index--;
@@ -1428,7 +1429,7 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
rcu_read_lock();
repeat:
page = NULL;
- pagep = radix_tree_lookup_slot(&mapping->page_tree, offset);
+ pagep = radix_tree_lookup_slot(&mapping->pages, offset);
if (pagep) {
page = radix_tree_deref_slot(pagep);
if (unlikely(!page))
@@ -1634,7 +1635,7 @@ unsigned find_get_entries(struct address_space *mapping,
return 0;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -1711,7 +1712,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
return 0;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, *start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
struct page *head, *page;

if (iter.index > end)
@@ -1796,7 +1797,7 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
return 0;

rcu_read_lock();
- radix_tree_for_each_contig(slot, &mapping->page_tree, &iter, index) {
+ radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -1876,8 +1877,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
return 0;

rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree,
- &iter, *index, tag) {
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
struct page *head, *page;

if (iter.index > end)
@@ -1970,8 +1970,7 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
return 0;

rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree,
- &iter, start, tag) {
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
struct page *head, *page;
repeat:
page = radix_tree_deref_slot(slot);
@@ -2625,8 +2624,7 @@ void filemap_map_pages(struct vm_fault *vmf,
struct page *head, *page;

rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
- start_pgoff) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
if (iter.index > end_pgoff)
break;
repeat:
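
One pattern in __add_to_page_cache_locked() worth calling out: node
preallocation stays outside the lock (radix_tree_preload() can sleep),
while the insert and the preload_end happen under xa_lock_irq.  Roughly
(the preload call sits in context the hunk doesn't show; assuming the
usual maybe_preload form):

	error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
	if (error)
		return error;

	xa_lock_irq(&mapping->pages);
	error = page_cache_tree_insert(mapping, page, shadowp);
	radix_tree_preload_end();	/* re-enables preemption */
	if (unlikely(error))
		goto err_insert;
	xa_unlock_irq(&mapping->pages);
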
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0e7ded98d114..f71dd3e7d8cd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2458,7 +2458,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
} else {
/* Additional pin to radix tree */
page_ref_add(head, 2);
- spin_unlock(&head->mapping->tree_lock);
+ xa_unlock(&head->mapping->pages);
}

spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
@@ -2666,15 +2666,15 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
if (mapping) {
void **pslot;

- spin_lock(&mapping->tree_lock);
- pslot = radix_tree_lookup_slot(&mapping->page_tree,
+ xa_lock(&mapping->pages);
+ pslot = radix_tree_lookup_slot(&mapping->pages,
page_index(head));
/*
* Check if the head page is present in radix tree.
* We assume all tail are present too, if head is there.
*/
if (radix_tree_deref_slot_protected(pslot,
- &mapping->tree_lock) != head)
+ &mapping->pages.xa_lock) != head)
goto fail;
}

@@ -2708,7 +2708,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
}
spin_unlock(&pgdata->split_queue_lock);
fail: if (mapping)
- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
unfreeze_page(head);
ret = -EBUSY;
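
split_huge_page_to_list() is the one caller above using the plain
xa_lock()/xa_unlock() variants: interrupts are already off because
zone_lru_lock was taken with spin_lock_irqsave().  In outline:

	spin_lock_irqsave(zone_lru_lock(page_zone(head)), flags);
	xa_lock(&mapping->pages);	/* irqs already disabled */
	/* ... check the slot still points at head, freeze and split ... */
	xa_unlock(&mapping->pages);
	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
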
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ea4ff259b671..cb4d199bf328 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1339,8 +1339,8 @@ static void collapse_shmem(struct mm_struct *mm,
*/

index = start;
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
int n = min(iter.index, end) - index;

/*
@@ -1353,7 +1353,7 @@ static void collapse_shmem(struct mm_struct *mm,
}
nr_none += n;
for (; index < min(iter.index, end); index++) {
- radix_tree_insert(&mapping->page_tree, index,
+ radix_tree_insert(&mapping->pages, index,
new_page + (index % HPAGE_PMD_NR));
}

@@ -1362,16 +1362,16 @@ static void collapse_shmem(struct mm_struct *mm,
break;

page = radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock);
+ &mapping->pages.xa_lock);
if (radix_tree_exceptional_entry(page) || !PageUptodate(page)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
/* swap in or instantiate fallocated page */
if (shmem_getpage(mapping->host, index, &page,
SGP_NOHUGE)) {
result = SCAN_FAIL;
goto tree_unlocked;
}
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
} else if (trylock_page(page)) {
get_page(page);
} else {
@@ -1380,7 +1380,7 @@ static void collapse_shmem(struct mm_struct *mm,
}

/*
- * The page must be locked, so we can drop the tree_lock
+ * The page must be locked, so we can drop the xa_lock
* without racing with truncate.
*/
VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -1391,7 +1391,7 @@ static void collapse_shmem(struct mm_struct *mm,
result = SCAN_TRUNCATED;
goto out_unlock;
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);

if (isolate_lru_page(page)) {
result = SCAN_DEL_PAGE_LRU;
@@ -1402,11 +1402,11 @@ static void collapse_shmem(struct mm_struct *mm,
unmap_mapping_range(mapping, index << PAGE_SHIFT,
PAGE_SIZE, 0);

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

- slot = radix_tree_lookup_slot(&mapping->page_tree, index);
+ slot = radix_tree_lookup_slot(&mapping->pages, index);
VM_BUG_ON_PAGE(page != radix_tree_deref_slot_protected(slot,
- &mapping->tree_lock), page);
+ &mapping->pages.xa_lock), page);
VM_BUG_ON_PAGE(page_mapped(page), page);

/*
@@ -1427,14 +1427,14 @@ static void collapse_shmem(struct mm_struct *mm,
list_add_tail(&page->lru, &pagelist);

/* Finally, replace with the new page. */
- radix_tree_replace_slot(&mapping->page_tree, slot,
+ radix_tree_replace_slot(&mapping->pages, slot,
new_page + (index % HPAGE_PMD_NR));

slot = radix_tree_iter_resume(slot, &iter);
index++;
continue;
out_lru:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
putback_lru_page(page);
out_isolate_failed:
unlock_page(page);
@@ -1460,14 +1460,14 @@ static void collapse_shmem(struct mm_struct *mm,
}

for (; index < end; index++) {
- radix_tree_insert(&mapping->page_tree, index,
+ radix_tree_insert(&mapping->pages, index,
new_page + (index % HPAGE_PMD_NR));
}
nr_none += n;
}

tree_locked:
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
tree_unlocked:

if (result == SCAN_SUCCEED) {
@@ -1516,9 +1516,8 @@ static void collapse_shmem(struct mm_struct *mm,
} else {
/* Something went wrong: rollback changes to the radix-tree */
shmem_uncharge(mapping->host, nr_none);
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
- start) {
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= end)
break;
page = list_first_entry_or_null(&pagelist,
@@ -1528,8 +1527,7 @@ static void collapse_shmem(struct mm_struct *mm,
break;
nr_none--;
/* Put holes back where they were */
- radix_tree_delete(&mapping->page_tree,
- iter.index);
+ radix_tree_delete(&mapping->pages, iter.index);
continue;
}

@@ -1538,16 +1536,15 @@ static void collapse_shmem(struct mm_struct *mm,
/* Unfreeze the page. */
list_del(&page->lru);
page_ref_unfreeze(page, 2);
- radix_tree_replace_slot(&mapping->page_tree,
- slot, page);
+ radix_tree_replace_slot(&mapping->pages, slot, page);
slot = radix_tree_iter_resume(slot, &iter);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
putback_lru_page(page);
unlock_page(page);
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
VM_BUG_ON(nr_none);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);

/* Unfreeze new_page, caller would take care about freeing it */
page_ref_unfreeze(new_page, 1);
@@ -1575,7 +1572,7 @@ static void khugepaged_scan_shmem(struct mm_struct *mm,
swap = 0;
memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
rcu_read_lock();
- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= start + HPAGE_PMD_NR)
break;

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ac2ffd5e02b9..1d4acb7f33e9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6034,9 +6034,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)

/*
* Interrupts should be disabled here because the caller holds the
- * mapping->tree_lock lock which is taken with interrupts-off. It is
+ * mapping->pages.xa_lock, which is taken with interrupts off. It is
* important here to have the interrupts disabled because it is the
- * only synchronisation we have for udpating the per-CPU variables.
+ * only synchronisation we have for updating the per-CPU variables.
*/
VM_BUG_ON(!irqs_disabled());
mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
diff --git a/mm/migrate.c b/mm/migrate.c
index 4d0be47a322a..75d19904dd9a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -466,20 +466,21 @@ int migrate_page_move_mapping(struct address_space *mapping,
oldzone = page_zone(page);
newzone = page_zone(newpage);

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

- pslot = radix_tree_lookup_slot(&mapping->page_tree,
+ pslot = radix_tree_lookup_slot(&mapping->pages,
page_index(page));

expected_count += 1 + page_has_private(page);
if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_deref_slot_protected(pslot,
+ &mapping->pages.xa_lock) != page) {
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}

if (!page_ref_freeze(page, expected_count)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}

@@ -493,7 +494,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
if (mode == MIGRATE_ASYNC && head &&
!buffer_migrate_lock_buffers(head, mode)) {
page_ref_unfreeze(page, expected_count);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}

@@ -521,7 +522,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
SetPageDirty(newpage);
}

- radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
+ radix_tree_replace_slot(&mapping->pages, pslot, newpage);

/*
* Drop cache reference from old page by unfreezing
@@ -530,7 +531,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
*/
page_ref_unfreeze(page, expected_count - 1);

- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
/* Leave irq disabled to prevent preemption while updating stats */

/*
@@ -573,20 +574,19 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
int expected_count;
void **pslot;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

- pslot = radix_tree_lookup_slot(&mapping->page_tree,
- page_index(page));
+ pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));

expected_count = 2 + page_has_private(page);
if (page_count(page) != expected_count ||
- radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
- spin_unlock_irq(&mapping->tree_lock);
+ radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}

if (!page_ref_freeze(page, expected_count)) {
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
return -EAGAIN;
}

@@ -595,11 +595,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,

get_page(newpage);

- radix_tree_replace_slot(&mapping->page_tree, pslot, newpage);
+ radix_tree_replace_slot(&mapping->pages, pslot, newpage);

page_ref_unfreeze(page, expected_count - 1);

- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);

return MIGRATEPAGE_SUCCESS;
}
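
migrate_page_move_mapping() also shows why xa_lock, like tree_lock
before it, pairs with page_ref_freeze(): the refcount must be frozen
while the slot is swapped over to the new page.  Stripped to essentials:

	xa_lock_irq(&mapping->pages);
	pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
	if (!page_ref_freeze(page, expected_count)) {
		xa_unlock_irq(&mapping->pages);
		return -EAGAIN;		/* someone else holds a reference */
	}
	radix_tree_replace_slot(&mapping->pages, pslot, newpage);
	page_ref_unfreeze(page, expected_count - 1);	/* drop old cache ref */
	xa_unlock_irq(&mapping->pages);
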
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 586f31261c83..588ce729d199 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2099,7 +2099,7 @@ void __init page_writeback_init(void)
* so that it can tag pages faster than a dirtying process can create them).
*/
/*
- * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce tree_lock latency.
+ * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
*/
void tag_pages_for_writeback(struct address_space *mapping,
pgoff_t start, pgoff_t end)
@@ -2109,22 +2109,22 @@ void tag_pages_for_writeback(struct address_space *mapping,
struct radix_tree_iter iter;
void **slot;

- spin_lock_irq(&mapping->tree_lock);
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter, start,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
PAGECACHE_TAG_DIRTY) {
if (iter.index > end)
break;
- radix_tree_iter_tag_set(&mapping->page_tree, &iter,
+ radix_tree_iter_tag_set(&mapping->pages, &iter,
PAGECACHE_TAG_TOWRITE);
tagged++;
if ((tagged % WRITEBACK_TAG_BATCH) != 0)
continue;
slot = radix_tree_iter_resume(slot, &iter);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
cond_resched();
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
}
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}
EXPORT_SYMBOL(tag_pages_for_writeback);

@@ -2467,13 +2467,13 @@ int __set_page_dirty_nobuffers(struct page *page)
return 1;
}

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
BUG_ON(page_mapping(page) != mapping);
WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree, page_index(page),
+ radix_tree_tag_set(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
unlock_page_memcg(page);

if (mapping->host) {
@@ -2718,11 +2718,10 @@ int test_clear_page_writeback(struct page *page)
struct backing_dev_info *bdi = inode_to_bdi(inode);
unsigned long flags;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
ret = TestClearPageWriteback(page);
if (ret) {
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi)) {
struct bdi_writeback *wb = inode_to_wb(inode);
@@ -2736,7 +2735,7 @@ int test_clear_page_writeback(struct page *page)
PAGECACHE_TAG_WRITEBACK))
sb_clear_inode_writeback(mapping->host);

- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
} else {
ret = TestClearPageWriteback(page);
}
@@ -2766,7 +2765,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
struct backing_dev_info *bdi = inode_to_bdi(inode);
unsigned long flags;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
ret = TestSetPageWriteback(page);
if (!ret) {
bool on_wblist;
@@ -2774,8 +2773,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
on_wblist = mapping_tagged(mapping,
PAGECACHE_TAG_WRITEBACK);

- radix_tree_tag_set(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_set(&mapping->pages, page_index(page),
PAGECACHE_TAG_WRITEBACK);
if (bdi_cap_account_writeback(bdi))
inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
@@ -2789,14 +2787,12 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
sb_mark_inode_writeback(mapping->host);
}
if (!PageDirty(page))
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_DIRTY);
if (!keep_write)
- radix_tree_tag_clear(&mapping->page_tree,
- page_index(page),
+ radix_tree_tag_clear(&mapping->pages, page_index(page),
PAGECACHE_TAG_TOWRITE);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
} else {
ret = TestSetPageWriteback(page);
}
@@ -2816,7 +2812,7 @@ EXPORT_SYMBOL(__test_set_page_writeback);
*/
int mapping_tagged(struct address_space *mapping, int tag)
{
- return radix_tree_tagged(&mapping->page_tree, tag);
+ return radix_tree_tagged(&mapping->pages, tag);
}
EXPORT_SYMBOL(mapping_tagged);

diff --git a/mm/readahead.c b/mm/readahead.c
index c4ca70239233..514188fd2489 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -175,7 +175,7 @@ int __do_page_cache_readahead(struct address_space *mapping, struct file *filp,
break;

rcu_read_lock();
- page = radix_tree_lookup(&mapping->page_tree, page_offset);
+ page = radix_tree_lookup(&mapping->pages, page_offset);
rcu_read_unlock();
if (page && !radix_tree_exceptional_entry(page))
continue;
diff --git a/mm/rmap.c b/mm/rmap.c
index 47db27f8049e..87c1ca0cf1a3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -32,11 +32,11 @@
* mmlist_lock (in mmput, drain_mmlist and others)
* mapping->private_lock (in __set_page_dirty_buffers)
* mem_cgroup_{begin,end}_page_stat (memcg->move_lock)
- * mapping->tree_lock (widely used)
+ * mapping->pages.xa_lock (widely used)
* inode->i_lock (in set_page_dirty's __mark_inode_dirty)
* bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
* sb_lock (within inode_lock in fs/fs-writeback.c)
- * mapping->tree_lock (widely used, in set_page_dirty,
+ * mapping->pages.xa_lock (widely used, in set_page_dirty,
* in arch-dependent flush_dcache_mmap_lock,
* within bdi.wb->list_lock in __sync_single_inode)
*
diff --git a/mm/shmem.c b/mm/shmem.c
index 7fbe67be86fa..9b1766e7c8cf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -332,12 +332,12 @@ static int shmem_radix_tree_replace(struct address_space *mapping,

VM_BUG_ON(!expected);
VM_BUG_ON(!replacement);
- item = __radix_tree_lookup(&mapping->page_tree, index, &node, &pslot);
+ item = __radix_tree_lookup(&mapping->pages, index, &node, &pslot);
if (!item)
return -ENOENT;
if (item != expected)
return -ENOENT;
- __radix_tree_replace(&mapping->page_tree, node, pslot,
+ __radix_tree_replace(&mapping->pages, node, pslot,
replacement, NULL);
return 0;
}
@@ -355,7 +355,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
void *item;

rcu_read_lock();
- item = radix_tree_lookup(&mapping->page_tree, index);
+ item = radix_tree_lookup(&mapping->pages, index);
rcu_read_unlock();
return item == swp_to_radix_entry(swap);
}
@@ -581,14 +581,14 @@ static int shmem_add_to_page_cache(struct page *page,
page->mapping = mapping;
page->index = index;

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
if (PageTransHuge(page)) {
void __rcu **results;
pgoff_t idx;
int i;

error = 0;
- if (radix_tree_gang_lookup_slot(&mapping->page_tree,
+ if (radix_tree_gang_lookup_slot(&mapping->pages,
&results, &idx, index, 1) &&
idx < index + HPAGE_PMD_NR) {
error = -EEXIST;
@@ -596,14 +596,14 @@ static int shmem_add_to_page_cache(struct page *page,

if (!error) {
for (i = 0; i < HPAGE_PMD_NR; i++) {
- error = radix_tree_insert(&mapping->page_tree,
+ error = radix_tree_insert(&mapping->pages,
index + i, page + i);
VM_BUG_ON(error);
}
count_vm_event(THP_FILE_ALLOC);
}
} else if (!expected) {
- error = radix_tree_insert(&mapping->page_tree, index, page);
+ error = radix_tree_insert(&mapping->pages, index, page);
} else {
error = shmem_radix_tree_replace(mapping, index, expected,
page);
@@ -615,10 +615,10 @@ static int shmem_add_to_page_cache(struct page *page,
__inc_node_page_state(page, NR_SHMEM_THPS);
__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
} else {
page->mapping = NULL;
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
page_ref_sub(page, nr);
}
return error;
@@ -634,13 +634,13 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)

VM_BUG_ON_PAGE(PageCompound(page), page);

- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
error = shmem_radix_tree_replace(mapping, page->index, page, radswap);
page->mapping = NULL;
mapping->nrpages--;
__dec_node_page_state(page, NR_FILE_PAGES);
__dec_node_page_state(page, NR_SHMEM);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
put_page(page);
BUG_ON(error);
}
@@ -653,9 +653,9 @@ static int shmem_free_swap(struct address_space *mapping,
{
void *old;

- spin_lock_irq(&mapping->tree_lock);
- old = radix_tree_delete_item(&mapping->page_tree, index, radswap);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ old = radix_tree_delete_item(&mapping->pages, index, radswap);
+ xa_unlock_irq(&mapping->pages);
if (old != radswap)
return -ENOENT;
free_swap_and_cache(radix_to_swp_entry(radswap));
@@ -666,7 +666,7 @@ static int shmem_free_swap(struct address_space *mapping,
* Determine (in bytes) how many of the shmem object's pages mapped by the
* given offsets are swapped out.
*
- * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
+ * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
* as long as the inode doesn't go away and racy results are not a problem.
*/
unsigned long shmem_partial_swap_usage(struct address_space *mapping,
@@ -679,7 +679,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,

rcu_read_lock();

- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
if (iter.index >= end)
break;

@@ -708,7 +708,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
* Determine (in bytes) how many of the shmem object's pages mapped by the
* given vma is swapped out.
*
- * This is safe to call without i_mutex or mapping->tree_lock thanks to RCU,
+ * This is safe to call without i_mutex or mapping->pages.xa_lock thanks to RCU,
* as long as the inode doesn't go away and racy results are not a problem.
*/
unsigned long shmem_swap_usage(struct vm_area_struct *vma)
@@ -1123,7 +1123,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
int error = 0;

radswap = swp_to_radix_entry(swap);
- index = find_swap_entry(&mapping->page_tree, radswap);
+ index = find_swap_entry(&mapping->pages, radswap);
if (index == -1)
return -EAGAIN; /* tell shmem_unuse we found nothing */

@@ -1436,7 +1436,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,

hindex = round_down(index, HPAGE_PMD_NR);
rcu_read_lock();
- if (radix_tree_gang_lookup_slot(&mapping->page_tree, &results, &idx,
+ if (radix_tree_gang_lookup_slot(&mapping->pages, &results, &idx,
hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
rcu_read_unlock();
return NULL;
@@ -1549,14 +1549,14 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
* Our caller will very soon move newpage out of swapcache, but it's
* a nice clean interface for us to replace oldpage by newpage there.
*/
- spin_lock_irq(&swap_mapping->tree_lock);
+ xa_lock_irq(&swap_mapping->pages);
error = shmem_radix_tree_replace(swap_mapping, swap_index, oldpage,
newpage);
if (!error) {
__inc_node_page_state(newpage, NR_FILE_PAGES);
__dec_node_page_state(oldpage, NR_FILE_PAGES);
}
- spin_unlock_irq(&swap_mapping->tree_lock);
+ xa_unlock_irq(&swap_mapping->pages);

if (unlikely(error)) {
/*
@@ -2622,7 +2622,7 @@ static void shmem_tag_pins(struct address_space *mapping)
start = 0;
rcu_read_lock();

- radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
+ radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
page = radix_tree_deref_slot(slot);
if (!page || radix_tree_exception(page)) {
if (radix_tree_deref_retry(page)) {
@@ -2630,10 +2630,10 @@ static void shmem_tag_pins(struct address_space *mapping)
continue;
}
} else if (page_count(page) - page_mapcount(page) > 1) {
- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_set(&mapping->page_tree, iter.index,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_set(&mapping->pages, iter.index,
SHMEM_TAG_PINNED);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}

if (need_resched()) {
@@ -2665,7 +2665,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)

error = 0;
for (scan = 0; scan <= LAST_SCAN; scan++) {
- if (!radix_tree_tagged(&mapping->page_tree, SHMEM_TAG_PINNED))
+ if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
break;

if (!scan)
@@ -2675,7 +2675,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)

start = 0;
rcu_read_lock();
- radix_tree_for_each_tagged(slot, &mapping->page_tree, &iter,
+ radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
start, SHMEM_TAG_PINNED) {

page = radix_tree_deref_slot(slot);
@@ -2701,10 +2701,10 @@ static int shmem_wait_for_pins(struct address_space *mapping)
error = -EBUSY;
}

- spin_lock_irq(&mapping->tree_lock);
- radix_tree_tag_clear(&mapping->page_tree,
+ xa_lock_irq(&mapping->pages);
+ radix_tree_tag_clear(&mapping->pages,
iter.index, SHMEM_TAG_PINNED);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
continue_resched:
if (need_resched()) {
slot = radix_tree_iter_resume(slot, &iter);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 39ae7cfad90f..3f95e8fc4cb2 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -124,10 +124,10 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
SetPageSwapCache(page);

address_space = swap_address_space(entry);
- spin_lock_irq(&address_space->tree_lock);
+ xa_lock_irq(&address_space->pages);
for (i = 0; i < nr; i++) {
set_page_private(page + i, entry.val + i);
- error = radix_tree_insert(&address_space->page_tree,
+ error = radix_tree_insert(&address_space->pages,
idx + i, page + i);
if (unlikely(error))
break;
@@ -145,13 +145,13 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
VM_BUG_ON(error == -EEXIST);
set_page_private(page + i, 0UL);
while (i--) {
- radix_tree_delete(&address_space->page_tree, idx + i);
+ radix_tree_delete(&address_space->pages, idx + i);
set_page_private(page + i, 0UL);
}
ClearPageSwapCache(page);
page_ref_sub(page, nr);
}
- spin_unlock_irq(&address_space->tree_lock);
+ xa_unlock_irq(&address_space->pages);

return error;
}
@@ -188,7 +188,7 @@ void __delete_from_swap_cache(struct page *page)
address_space = swap_address_space(entry);
idx = swp_offset(entry);
for (i = 0; i < nr; i++) {
- radix_tree_delete(&address_space->page_tree, idx + i);
+ radix_tree_delete(&address_space->pages, idx + i);
set_page_private(page + i, 0);
}
ClearPageSwapCache(page);
@@ -272,9 +272,9 @@ void delete_from_swap_cache(struct page *page)
entry.val = page_private(page);

address_space = swap_address_space(entry);
- spin_lock_irq(&address_space->tree_lock);
+ xa_lock_irq(&address_space->pages);
__delete_from_swap_cache(page);
- spin_unlock_irq(&address_space->tree_lock);
+ xa_unlock_irq(&address_space->pages);

put_swap_page(page, entry);
page_ref_sub(page, hpage_nr_pages(page));
@@ -612,12 +612,11 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages)
return -ENOMEM;
for (i = 0; i < nr; i++) {
space = spaces + i;
- INIT_RADIX_TREE(&space->page_tree, GFP_ATOMIC|__GFP_NOWARN);
+ INIT_RADIX_TREE(&space->pages, GFP_ATOMIC|__GFP_NOWARN);
atomic_set(&space->i_mmap_writable, 0);
space->a_ops = &swap_aops;
/* swap cache doesn't use writeback related tags */
mapping_set_no_writeback_tags(space);
- spin_lock_init(&space->tree_lock);
}
nr_swapper_spaces[type] = nr;
rcu_assign_pointer(swapper_spaces[type], spaces);
diff --git a/mm/truncate.c b/mm/truncate.c
index e4b4cf0f4070..094158f2e447 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -36,11 +36,11 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
struct radix_tree_node *node;
void **slot;

- if (!__radix_tree_lookup(&mapping->page_tree, index, &node, &slot))
+ if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
return;
if (*slot != entry)
return;
- __radix_tree_replace(&mapping->page_tree, node, slot, NULL,
+ __radix_tree_replace(&mapping->pages, node, slot, NULL,
workingset_update_node);
mapping->nrexceptional--;
}
@@ -48,9 +48,9 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
void *entry)
{
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
__clear_shadow_entry(mapping, index, entry);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
}

/*
@@ -79,7 +79,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
dax = dax_mapping(mapping);
lock = !dax && indices[j] < end;
if (lock)
- spin_lock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);

for (i = j; i < pagevec_count(pvec); i++) {
struct page *page = pvec->pages[i];
@@ -102,7 +102,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
}

if (lock)
- spin_unlock_irq(&mapping->tree_lock);
+ xa_unlock_irq(&mapping->pages);
pvec->nr = j;
}

@@ -522,8 +522,8 @@ void truncate_inode_pages_final(struct address_space *mapping)
* modification that does not see AS_EXITING is
* completed before starting the final truncate.
*/
- spin_lock_irq(&mapping->tree_lock);
- spin_unlock_irq(&mapping->tree_lock);
+ xa_lock_irq(&mapping->pages);
+ xa_unlock_irq(&mapping->pages);

truncate_inode_pages(mapping, 0);
}
@@ -631,13 +631,13 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
return 0;

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
if (PageDirty(page))
goto failed;

BUG_ON(page_has_private(page));
__delete_from_page_cache(page, NULL);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

if (mapping->a_ops->freepage)
mapping->a_ops->freepage(page);
@@ -645,7 +645,7 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
put_page(page); /* pagecache ref */
return 1;
failed:
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
return 0;
}

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 47d5ced51f2d..2fa675a2db31 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -677,7 +677,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
BUG_ON(!PageLocked(page));
BUG_ON(mapping != page_mapping(page));

- spin_lock_irqsave(&mapping->tree_lock, flags);
+ xa_lock_irqsave(&mapping->pages, flags);
/*
* The non racy check for a busy page.
*
@@ -701,7 +701,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
* load is not satisfied before that of page->_refcount.
*
* Note that if SetPageDirty is always performed via set_page_dirty,
- * and thus under tree_lock, then this ordering is not required.
+ * and thus under xa_lock, then this ordering is not required.
*/
if (unlikely(PageTransHuge(page)) && PageSwapCache(page))
refcount = 1 + HPAGE_PMD_NR;
@@ -719,7 +719,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
swp_entry_t swap = { .val = page_private(page) };
mem_cgroup_swapout(page, swap);
__delete_from_swap_cache(page);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
put_swap_page(page, swap);
} else {
void (*freepage)(struct page *);
@@ -740,13 +740,13 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
* only page cache pages found in these are zero pages
* covering holes, and because we don't want to mix DAX
* exceptional entries and shadow exceptional entries in the
- * same page_tree.
+ * same address_space.
*/
if (reclaimed && page_is_file_cache(page) &&
!mapping_exiting(mapping) && !dax_mapping(mapping))
shadow = workingset_eviction(mapping, page);
__delete_from_page_cache(page, shadow);
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);

if (freepage != NULL)
freepage(page);
@@ -755,7 +755,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
return 1;

cannot_free:
- spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ xa_unlock_irqrestore(&mapping->pages, flags);
return 0;
}

diff --git a/mm/workingset.c b/mm/workingset.c
index b7d616a3bbbe..3cb3586181e6 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -202,7 +202,7 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
* @mapping: address space the page was backing
* @page: the page being evicted
*
- * Returns a shadow entry to be stored in @mapping->page_tree in place
+ * Returns a shadow entry to be stored in @mapping->pages in place
* of the evicted @page so that a later refault can be detected.
*/
void *workingset_eviction(struct address_space *mapping, struct page *page)
@@ -348,7 +348,7 @@ void workingset_update_node(struct radix_tree_node *node)
*
* Avoid acquiring the list_lru lock when the nodes are
* already where they should be. The list_empty() test is safe
- * as node->private_list is protected by &mapping->tree_lock.
+ * as node->private_list is protected by mapping->pages.xa_lock.
*/
if (node->count && node->count == node->exceptional) {
if (list_empty(&node->private_list))
@@ -366,7 +366,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
unsigned long nodes;
unsigned long cache;

- /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
+ /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
local_irq_disable();
nodes = list_lru_shrink_count(&shadow_nodes, sc);
local_irq_enable();
@@ -419,21 +419,21 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,

/*
 * Page cache insertions and deletions synchronously maintain
- * the shadow node LRU under the mapping->tree_lock and the
+ * the shadow node LRU under the mapping->pages.xa_lock and the
* lru_lock. Because the page cache tree is emptied before
* the inode can be destroyed, holding the lru_lock pins any
* address_space that has radix tree nodes on the LRU.
*
- * We can then safely transition to the mapping->tree_lock to
+ * We can then safely transition to the mapping->pages.xa_lock to
* pin only the address_space of the particular node we want
* to reclaim, take the node off-LRU, and drop the lru_lock.
*/

node = container_of(item, struct radix_tree_node, private_list);
- mapping = container_of(node->root, struct address_space, page_tree);
+ mapping = container_of(node->root, struct address_space, pages);

/* Coming from the list, invert the lock order */
- if (!spin_trylock(&mapping->tree_lock)) {
+ if (!xa_trylock(&mapping->pages)) {
spin_unlock(lru_lock);
ret = LRU_RETRY;
goto out;
@@ -468,11 +468,11 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
if (WARN_ON_ONCE(node->exceptional))
goto out_invalid;
inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
- __radix_tree_delete_node(&mapping->page_tree, node,
+ __radix_tree_delete_node(&mapping->pages, node,
workingset_lookup_update(mapping));

out_invalid:
- spin_unlock(&mapping->tree_lock);
+ xa_unlock(&mapping->pages);
ret = LRU_REMOVED_RETRY;
out:
local_irq_enable();
@@ -487,7 +487,7 @@ static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
{
unsigned long ret;

- /* list_lru lock nests inside IRQ-safe mapping->tree_lock */
+ /* list_lru lock nests inside IRQ-safe mapping->pages.xa_lock */
local_irq_disable();
ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
local_irq_enable();
@@ -503,7 +503,7 @@ static struct shrinker workingset_shadow_shrinker = {

/*
* Our list_lru->lock is IRQ-safe as it nests inside the IRQ-safe
- * mapping->tree_lock.
+ * mapping->pages.xa_lock.
*/
static struct lock_class_key shadow_nodes_key;

--
2.15.1
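
The conversion above is mechanical: every spin_lock_irq(&mapping->tree_lock)
becomes xa_lock_irq(&mapping->pages), because the spinlock now lives inside
the XArray embedded in struct address_space. A minimal sketch of the
before/after idiom (the helper functions are hypothetical, and the two
variants obviously cannot coexist against the same kernel tree):

	/* Before: the spinlock sat beside the radix tree. */
	static void remove_entry_old(struct address_space *mapping, pgoff_t index)
	{
		spin_lock_irq(&mapping->tree_lock);
		radix_tree_delete(&mapping->page_tree, index);
		spin_unlock_irq(&mapping->tree_lock);
	}

	/* After: the lock is embedded in the XArray named 'pages'. */
	static void remove_entry_new(struct address_space *mapping, pgoff_t index)
	{
		xa_lock_irq(&mapping->pages);
		radix_tree_delete(&mapping->pages, index);
		xa_unlock_irq(&mapping->pages);
	}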


Subject: Re: [PATCH v6 20/99] ida: Convert to XArray

Hi Matthew!

On 01/17/2018 09:20 PM, Matthew Wilcox wrote:
> Use the xarray infrstructure like we used the radix tree infrastructure.
> This lets us get rid of idr_get_free() from the radix tree code.

There's a typo: infrstructure => infrastructure

Cheers,
Adrian

--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer - [email protected]
`. `' Freie Universitaet Berlin - [email protected]
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913

2018-01-17 21:19:48

by Matthew Wilcox

[permalink] [raw]
Subject: [PATCH v6 05/99] xarray: Add definition of struct xarray

From: Matthew Wilcox <[email protected]>

This is a direct replacement for struct radix_tree_root. Some of the
struct members have changed name; convert those, and use a #define so
that radix_tree users continue to work without change.
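
As a hedged sketch of what this means for callers (illustrative only, not
part of the patch itself): code that sticks to the radix tree API compiles
unchanged thanks to the #define, while code that reached into the struct
moves to the new member names.

	/* Unchanged: 'struct radix_tree_root' is #defined to 'xarray'. */
	RADIX_TREE(my_tree, GFP_KERNEL);

	/*
	 * Renamed members: 'rnode' becomes 'xa_head' and 'gfp_mask' becomes
	 * 'xa_flags'; a hypothetical direct user is converted like so.
	 */
	static bool my_tree_empty(const struct radix_tree_root *root)
	{
		return root->xa_head == NULL;
	}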

Signed-off-by: Matthew Wilcox <[email protected]>
---
include/linux/radix-tree.h | 33 ++++----------
include/linux/xarray.h | 59 +++++++++++++++++++++++++
lib/Makefile | 2 +-
lib/idr.c | 4 +-
lib/radix-tree.c | 75 ++++++++++++++++----------------
lib/xarray.c | 42 ++++++++++++++++++
tools/include/linux/spinlock.h | 1 +
tools/testing/radix-tree/.gitignore | 1 +
tools/testing/radix-tree/Makefile | 8 +++-
tools/testing/radix-tree/linux/bug.h | 1 +
tools/testing/radix-tree/linux/kconfig.h | 1 +
tools/testing/radix-tree/linux/xarray.h | 2 +
tools/testing/radix-tree/multiorder.c | 6 +--
tools/testing/radix-tree/test.c | 6 +--
14 files changed, 168 insertions(+), 73 deletions(-)
create mode 100644 lib/xarray.c
create mode 100644 tools/testing/radix-tree/linux/kconfig.h
create mode 100644 tools/testing/radix-tree/linux/xarray.h

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 87f35fe00e55..c8a33e9e9a3c 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -30,6 +30,9 @@
#include <linux/types.h>
#include <linux/xarray.h>

+/* Keep unconverted code working */
+#define radix_tree_root xarray
+
/*
* The bottom two bits of the slot determine how the remaining bits in the
* slot are interpreted:
@@ -59,10 +62,7 @@ static inline bool radix_tree_is_internal_node(void *ptr)

#define RADIX_TREE_MAX_TAGS 3

-#ifndef RADIX_TREE_MAP_SHIFT
-#define RADIX_TREE_MAP_SHIFT (CONFIG_BASE_SMALL ? 4 : 6)
-#endif
-
+#define RADIX_TREE_MAP_SHIFT XA_CHUNK_SHIFT
#define RADIX_TREE_MAP_SIZE (1UL << RADIX_TREE_MAP_SHIFT)
#define RADIX_TREE_MAP_MASK (RADIX_TREE_MAP_SIZE-1)

@@ -95,36 +95,21 @@ struct radix_tree_node {
unsigned long tags[RADIX_TREE_MAX_TAGS][RADIX_TREE_TAG_LONGS];
};

-/* The IDR tag is stored in the low bits of the GFP flags */
+/* The IDR tag is stored in the low bits of xa_flags */
#define ROOT_IS_IDR ((__force gfp_t)4)
-/* The top bits of gfp_mask are used to store the root tags */
+/* The top bits of xa_flags are used to store the root tags */
#define ROOT_TAG_SHIFT (__GFP_BITS_SHIFT)

-struct radix_tree_root {
- spinlock_t xa_lock;
- gfp_t gfp_mask;
- struct radix_tree_node __rcu *rnode;
-};
-
-#define RADIX_TREE_INIT(name, mask) { \
- .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
- .gfp_mask = (mask), \
- .rnode = NULL, \
-}
+#define RADIX_TREE_INIT(name, mask) XARRAY_INIT_FLAGS(name, mask)

#define RADIX_TREE(name, mask) \
struct radix_tree_root name = RADIX_TREE_INIT(name, mask)

-#define INIT_RADIX_TREE(root, mask) \
-do { \
- spin_lock_init(&(root)->xa_lock); \
- (root)->gfp_mask = (mask); \
- (root)->rnode = NULL; \
-} while (0)
+#define INIT_RADIX_TREE(root, mask) xa_init_flags(root, mask)

static inline bool radix_tree_empty(const struct radix_tree_root *root)
{
- return root->rnode == NULL;
+ return root->xa_head == NULL;
}

/**
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index c308152fde7f..3d2f1fafb7ec 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -10,6 +10,8 @@
*/

#include <linux/bug.h>
+#include <linux/compiler.h>
+#include <linux/kconfig.h>
#include <linux/spinlock.h>
#include <linux/types.h>

@@ -99,6 +101,63 @@ static inline bool xa_is_internal(const void *entry)
return ((unsigned long)entry & 3) == 2;
}

+/**
+ * struct xarray - The anchor of the XArray.
+ * @xa_lock: Lock that protects the contents of the XArray.
+ *
+ * To use the xarray, define it statically or embed it in your data structure.
+ * It is a very small data structure, so it does not usually make sense to
+ * allocate it separately and keep a pointer to it in your data structure.
+ *
+ * You may use the xa_lock to protect your own data structures as well.
+ */
+/*
+ * If all of the entries in the array are NULL, @xa_head is a NULL pointer.
+ * If the only non-NULL entry in the array is at index 0, @xa_head is that
+ * entry. If any other entry in the array is non-NULL, @xa_head points
+ * to an @xa_node.
+ */
+struct xarray {
+ spinlock_t xa_lock;
+/* private: The rest of the data structure is not to be used directly. */
+ gfp_t xa_flags;
+ void __rcu * xa_head;
+};
+
+#define XARRAY_INIT_FLAGS(name, flags) { \
+ .xa_lock = __SPIN_LOCK_UNLOCKED(name.xa_lock), \
+ .xa_flags = flags, \
+ .xa_head = NULL, \
+}
+
+#define XARRAY_INIT(name) XARRAY_INIT_FLAGS(name, 0)
+
+/**
+ * DEFINE_XARRAY() - Define an XArray
+ * @name: A string that names your XArray
+ *
+ * This is intended for file scope definitions of XArrays. It declares
+ * and initialises an empty XArray with the chosen name. It is equivalent
+ * to calling xa_init() on the array, but it does the initialisation at
+ * compile time instead of runtime.
+ */
+#define DEFINE_XARRAY(name) struct xarray name = XARRAY_INIT(name)
+#define DEFINE_XARRAY_FLAGS(name, flags) \
+ struct xarray name = XARRAY_INIT_FLAGS(name, flags)
+
+void xa_init_flags(struct xarray *, gfp_t flags);
+
+/**
+ * xa_init() - Initialise an empty XArray.
+ * @xa: XArray.
+ *
+ * An empty XArray is full of NULL entries.
+ */
+static inline void xa_init(struct xarray *xa)
+{
+ xa_init_flags(xa, 0);
+}
+
#define xa_trylock(xa) spin_trylock(&(xa)->xa_lock)
#define xa_lock(xa) spin_lock(&(xa)->xa_lock)
#define xa_unlock(xa) spin_unlock(&(xa)->xa_lock)
diff --git a/lib/Makefile b/lib/Makefile
index d11c48ec8ffd..6aa523acc7c1 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -18,7 +18,7 @@ KCOV_INSTRUMENT_debugobjects.o := n
KCOV_INSTRUMENT_dynamic_debug.o := n

lib-y := ctype.o string.o vsprintf.o cmdline.o \
- rbtree.o radix-tree.o dump_stack.o timerqueue.o\
+ rbtree.o radix-tree.o dump_stack.o timerqueue.o xarray.o \
idr.o int_sqrt.o extable.o \
sha1.o chacha20.o irq_regs.o argv_split.o \
flex_proportions.o ratelimit.o show_mem.o \
diff --git a/lib/idr.c b/lib/idr.c
index 48c53890adc0..b9aa08e198a2 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -35,8 +35,8 @@ int idr_alloc_ul(struct idr *idr, void *ptr, unsigned long *nextid,
if (WARN_ON_ONCE(radix_tree_is_internal_node(ptr)))
return -EINVAL;

- if (WARN_ON_ONCE(!(idr->idr_rt.gfp_mask & ROOT_IS_IDR)))
- idr->idr_rt.gfp_mask |= IDR_RT_MARKER;
+ if (WARN_ON_ONCE(!(idr->idr_rt.xa_flags & ROOT_IS_IDR)))
+ idr->idr_rt.xa_flags |= IDR_RT_MARKER;

radix_tree_iter_init(&iter, *nextid);
slot = idr_get_free(&idr->idr_rt, &iter, gfp, max);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index f16f63d15edc..126eeb06cfef 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -123,7 +123,7 @@ static unsigned int radix_tree_descend(const struct radix_tree_node *parent,

static inline gfp_t root_gfp_mask(const struct radix_tree_root *root)
{
- return root->gfp_mask & ((__GFP_BITS_MASK >> 4) << 4);
+ return root->xa_flags & ((__GFP_BITS_MASK >> 4) << 4);
}

static inline void tag_set(struct radix_tree_node *node, unsigned int tag,
@@ -146,32 +146,32 @@ static inline int tag_get(const struct radix_tree_node *node, unsigned int tag,

static inline void root_tag_set(struct radix_tree_root *root, unsigned tag)
{
- root->gfp_mask |= (__force gfp_t)(1 << (tag + ROOT_TAG_SHIFT));
+ root->xa_flags |= (__force gfp_t)(1 << (tag + ROOT_TAG_SHIFT));
}

static inline void root_tag_clear(struct radix_tree_root *root, unsigned tag)
{
- root->gfp_mask &= (__force gfp_t)~(1 << (tag + ROOT_TAG_SHIFT));
+ root->xa_flags &= (__force gfp_t)~(1 << (tag + ROOT_TAG_SHIFT));
}

static inline void root_tag_clear_all(struct radix_tree_root *root)
{
- root->gfp_mask &= (1 << ROOT_TAG_SHIFT) - 1;
+ root->xa_flags &= (__force gfp_t)((1 << ROOT_TAG_SHIFT) - 1);
}

static inline int root_tag_get(const struct radix_tree_root *root, unsigned tag)
{
- return (__force int)root->gfp_mask & (1 << (tag + ROOT_TAG_SHIFT));
+ return (__force int)root->xa_flags & (1 << (tag + ROOT_TAG_SHIFT));
}

static inline unsigned root_tags_get(const struct radix_tree_root *root)
{
- return (__force unsigned)root->gfp_mask >> ROOT_TAG_SHIFT;
+ return (__force unsigned)root->xa_flags >> ROOT_TAG_SHIFT;
}

static inline bool is_idr(const struct radix_tree_root *root)
{
- return !!(root->gfp_mask & ROOT_IS_IDR);
+ return !!(root->xa_flags & ROOT_IS_IDR);
}

/*
@@ -290,12 +290,12 @@ static void dump_node(struct radix_tree_node *node, unsigned long index)
/* For debug */
static void radix_tree_dump(struct radix_tree_root *root)
{
- pr_debug("radix root: %p rnode %p tags %x\n",
- root, root->rnode,
- root->gfp_mask >> ROOT_TAG_SHIFT);
- if (!radix_tree_is_internal_node(root->rnode))
+ pr_debug("radix root: %p xa_head %p tags %x\n",
+ root, root->xa_head,
+ root->xa_flags >> ROOT_TAG_SHIFT);
+ if (!radix_tree_is_internal_node(root->xa_head))
return;
- dump_node(entry_to_node(root->rnode), 0);
+ dump_node(entry_to_node(root->xa_head), 0);
}

static void dump_ida_node(void *entry, unsigned long index)
@@ -339,9 +339,9 @@ static void dump_ida_node(void *entry, unsigned long index)
static void ida_dump(struct ida *ida)
{
struct radix_tree_root *root = &ida->ida_rt;
- pr_debug("ida: %p node %p free %d\n", ida, root->rnode,
- root->gfp_mask >> ROOT_TAG_SHIFT);
- dump_ida_node(root->rnode, 0);
+ pr_debug("ida: %p node %p free %d\n", ida, root->xa_head,
+ root->xa_flags >> ROOT_TAG_SHIFT);
+ dump_ida_node(root->xa_head, 0);
}
#endif

@@ -575,7 +575,7 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order)
static unsigned radix_tree_load_root(const struct radix_tree_root *root,
struct radix_tree_node **nodep, unsigned long *maxindex)
{
- struct radix_tree_node *node = rcu_dereference_raw(root->rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(root->xa_head);

*nodep = node;

@@ -604,7 +604,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
while (index > shift_maxindex(maxshift))
maxshift += RADIX_TREE_MAP_SHIFT;

- entry = rcu_dereference_raw(root->rnode);
+ entry = rcu_dereference_raw(root->xa_head);
if (!entry && (!is_idr(root) || root_tag_get(root, IDR_FREE)))
goto out;

@@ -632,7 +632,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
if (radix_tree_is_internal_node(entry)) {
entry_to_node(entry)->parent = node;
} else if (xa_is_value(entry)) {
- /* Moving an exceptional root->rnode to a node */
+ /* Moving an exceptional root->xa_head to a node */
node->exceptional = 1;
}
/*
@@ -641,7 +641,7 @@ static int radix_tree_extend(struct radix_tree_root *root, gfp_t gfp,
*/
node->slots[0] = (void __rcu *)entry;
entry = node_to_entry(node);
- rcu_assign_pointer(root->rnode, entry);
+ rcu_assign_pointer(root->xa_head, entry);
shift += RADIX_TREE_MAP_SHIFT;
} while (shift <= maxshift);
out:
@@ -658,7 +658,7 @@ static inline bool radix_tree_shrink(struct radix_tree_root *root,
bool shrunk = false;

for (;;) {
- struct radix_tree_node *node = rcu_dereference_raw(root->rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(root->xa_head);
struct radix_tree_node *child;

if (!radix_tree_is_internal_node(node))
@@ -686,9 +686,9 @@ static inline bool radix_tree_shrink(struct radix_tree_root *root,
* moving the node from one part of the tree to another: if it
* was safe to dereference the old pointer to it
* (node->slots[0]), it will be safe to dereference the new
- * one (root->rnode) as far as dependent read barriers go.
+ * one (root->xa_head) as far as dependent read barriers go.
*/
- root->rnode = (void __rcu *)child;
+ root->xa_head = (void __rcu *)child;
if (is_idr(root) && !tag_get(node, IDR_FREE, 0))
root_tag_clear(root, IDR_FREE);

@@ -736,9 +736,8 @@ static bool delete_node(struct radix_tree_root *root,

if (node->count) {
if (node_to_entry(node) ==
- rcu_dereference_raw(root->rnode))
- deleted |= radix_tree_shrink(root,
- update_node);
+ rcu_dereference_raw(root->xa_head))
+ deleted |= radix_tree_shrink(root, update_node);
return deleted;
}

@@ -753,7 +752,7 @@ static bool delete_node(struct radix_tree_root *root,
*/
if (!is_idr(root))
root_tag_clear_all(root);
- root->rnode = NULL;
+ root->xa_head = NULL;
}

WARN_ON_ONCE(!list_empty(&node->private_list));
@@ -778,7 +777,7 @@ static bool delete_node(struct radix_tree_root *root,
* at position @index in the radix tree @root.
*
* Until there is more than one item in the tree, no nodes are
- * allocated and @root->rnode is used as a direct slot instead of
+ * allocated and @root->xa_head is used as a direct slot instead of
* pointing to a node, in which case *@nodep will be NULL.
*
* Returns -ENOMEM, or 0 for success.
@@ -788,7 +787,7 @@ int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
void __rcu ***slotp)
{
struct radix_tree_node *node = NULL, *child;
- void __rcu **slot = (void __rcu **)&root->rnode;
+ void __rcu **slot = (void __rcu **)&root->xa_head;
unsigned long maxindex;
unsigned int shift, offset = 0;
unsigned long max = index | ((1UL << order) - 1);
@@ -804,7 +803,7 @@ int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
if (error < 0)
return error;
shift = error;
- child = rcu_dereference_raw(root->rnode);
+ child = rcu_dereference_raw(root->xa_head);
}

while (shift > order) {
@@ -995,7 +994,7 @@ EXPORT_SYMBOL(__radix_tree_insert);
* tree @root.
*
* Until there is more than one item in the tree, no nodes are
- * allocated and @root->rnode is used as a direct slot instead of
+ * allocated and @root->xa_head is used as a direct slot instead of
* pointing to a node, in which case *@nodep will be NULL.
*/
void *__radix_tree_lookup(const struct radix_tree_root *root,
@@ -1008,7 +1007,7 @@ void *__radix_tree_lookup(const struct radix_tree_root *root,

restart:
parent = NULL;
- slot = (void __rcu **)&root->rnode;
+ slot = (void __rcu **)&root->xa_head;
radix_tree_load_root(root, &node, &maxindex);
if (index > maxindex)
return NULL;
@@ -1160,9 +1159,9 @@ void __radix_tree_replace(struct radix_tree_root *root,
/*
* This function supports replacing exceptional entries and
* deleting entries, but that needs accounting against the
- * node unless the slot is root->rnode.
+ * node unless the slot is root->xa_head.
*/
- WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->rnode) &&
+ WARN_ON_ONCE(!node && (slot != (void __rcu **)&root->xa_head) &&
(count || exceptional));
replace_slot(slot, item, node, count, exceptional);

@@ -1714,7 +1713,7 @@ void __rcu **radix_tree_next_chunk(const struct radix_tree_root *root,
iter->tags = 1;
iter->node = NULL;
__set_iter_shift(iter, 0);
- return (void __rcu **)&root->rnode;
+ return (void __rcu **)&root->xa_head;
}

do {
@@ -2108,7 +2107,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
unsigned long max)
{
struct radix_tree_node *node = NULL, *child;
- void __rcu **slot = (void __rcu **)&root->rnode;
+ void __rcu **slot = (void __rcu **)&root->xa_head;
unsigned long maxindex, start = iter->next_index;
unsigned int shift, offset = 0;

@@ -2124,7 +2123,7 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
if (error < 0)
return ERR_PTR(error);
shift = error;
- child = rcu_dereference_raw(root->rnode);
+ child = rcu_dereference_raw(root->xa_head);
}

while (shift) {
@@ -2187,10 +2186,10 @@ void __rcu **idr_get_free(struct radix_tree_root *root,
*/
void idr_destroy(struct idr *idr)
{
- struct radix_tree_node *node = rcu_dereference_raw(idr->idr_rt.rnode);
+ struct radix_tree_node *node = rcu_dereference_raw(idr->idr_rt.xa_head);
if (radix_tree_is_internal_node(node))
radix_tree_free_nodes(node);
- idr->idr_rt.rnode = NULL;
+ idr->idr_rt.xa_head = NULL;
root_tag_set(&idr->idr_rt, IDR_FREE);
}
EXPORT_SYMBOL(idr_destroy);
diff --git a/lib/xarray.c b/lib/xarray.c
new file mode 100644
index 000000000000..c56b0f858e10
--- /dev/null
+++ b/lib/xarray.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * XArray implementation
+ * Copyright (c) 2017 Microsoft Corporation
+ * Author: Matthew Wilcox <[email protected]>
+ */
+
+#include <linux/export.h>
+#include <linux/xarray.h>
+
+/*
+ * Coding conventions in this file:
+ *
+ * @xa is used to refer to the entire xarray.
+ * @xas is the 'xarray operation state'. It may be either a pointer to
+ * an xa_state, or an xa_state stored on the stack. This is an unfortunate
+ * ambiguity.
+ * @index is the index of the entry being operated on
+ * @tag is an xa_tag_t; a small number indicating one of the tag bits.
+ * @node refers to an xa_node; usually the primary one being operated on by
+ * this function.
+ * @offset is the index into the slots array inside an xa_node.
+ * @parent refers to the @xa_node closer to the head than @node.
+ * @entry refers to something stored in a slot in the xarray
+ */
+
+/**
+ * xa_init_flags() - Initialise an empty XArray with flags.
+ * @xa: XArray.
+ * @flags: XA_FLAG values.
+ *
+ * If you need to initialise an XArray with special flags (eg you need
+ * to take the lock from interrupt context), use this function instead
+ * of xa_init().
+ */
+void xa_init_flags(struct xarray *xa, gfp_t flags)
+{
+ spin_lock_init(&xa->xa_lock);
+ xa->xa_flags = flags;
+ xa->xa_head = NULL;
+}
+EXPORT_SYMBOL(xa_init_flags);
diff --git a/tools/include/linux/spinlock.h b/tools/include/linux/spinlock.h
index b21b586b9854..34fed5c38da2 100644
--- a/tools/include/linux/spinlock.h
+++ b/tools/include/linux/spinlock.h
@@ -8,6 +8,7 @@
#define spinlock_t pthread_mutex_t
#define DEFINE_SPINLOCK(x) pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
#define __SPIN_LOCK_UNLOCKED(x) (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER
+#define spin_lock_init(x) pthread_mutex_init(x, NULL);

#define spin_lock_irqsave(x, f) (void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f) (void)f, pthread_mutex_unlock(x)
diff --git a/tools/testing/radix-tree/.gitignore b/tools/testing/radix-tree/.gitignore
index d4706c0ffceb..8d4df7a72a8e 100644
--- a/tools/testing/radix-tree/.gitignore
+++ b/tools/testing/radix-tree/.gitignore
@@ -4,3 +4,4 @@ idr-test
main
multiorder
radix-tree.c
+xarray.c
diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
index fa7ee369b3c9..3868bc189199 100644
--- a/tools/testing/radix-tree/Makefile
+++ b/tools/testing/radix-tree/Makefile
@@ -4,7 +4,7 @@ CFLAGS += -I. -I../../include -g -O2 -Wall -D_LGPL_SOURCE -fsanitize=address
LDFLAGS += -fsanitize=address
LDLIBS+= -lpthread -lurcu
TARGETS = main idr-test multiorder
-CORE_OFILES := radix-tree.o idr.o linux.o test.o find_bit.o
+CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o
OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \
tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o

@@ -33,9 +33,13 @@ vpath %.c ../../lib
$(OFILES): Makefile *.h */*.h generated/map-shift.h \
../../include/linux/*.h \
../../include/asm/*.h \
+ ../../../include/linux/xarray.h \
../../../include/linux/radix-tree.h \
../../../include/linux/idr.h

+xarray.c: ../../../lib/xarray.c
+ sed -e 's/^static //' -e 's/__always_inline //' -e 's/inline //' < $< > $@
+
radix-tree.c: ../../../lib/radix-tree.c
sed -e 's/^static //' -e 's/__always_inline //' -e 's/inline //' < $< > $@

@@ -46,6 +50,6 @@ idr.c: ../../../lib/idr.c

mapshift:
@if ! grep -qws $(SHIFT) generated/map-shift.h; then \
- echo "#define RADIX_TREE_MAP_SHIFT $(SHIFT)" > \
+ echo "#define XA_CHUNK_SHIFT $(SHIFT)" > \
generated/map-shift.h; \
fi
diff --git a/tools/testing/radix-tree/linux/bug.h b/tools/testing/radix-tree/linux/bug.h
index 23b8ed52f8c8..03dc8a57eb99 100644
--- a/tools/testing/radix-tree/linux/bug.h
+++ b/tools/testing/radix-tree/linux/bug.h
@@ -1 +1,2 @@
+#include <stdio.h>
#include "asm/bug.h"
diff --git a/tools/testing/radix-tree/linux/kconfig.h b/tools/testing/radix-tree/linux/kconfig.h
new file mode 100644
index 000000000000..6c8675859913
--- /dev/null
+++ b/tools/testing/radix-tree/linux/kconfig.h
@@ -0,0 +1 @@
+#include "../../../../include/linux/kconfig.h"
diff --git a/tools/testing/radix-tree/linux/xarray.h b/tools/testing/radix-tree/linux/xarray.h
new file mode 100644
index 000000000000..df3812cda376
--- /dev/null
+++ b/tools/testing/radix-tree/linux/xarray.h
@@ -0,0 +1,2 @@
+#include "generated/map-shift.h"
+#include "../../../../include/linux/xarray.h"
diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
index 684e76f79f4a..24293a2fd82d 100644
--- a/tools/testing/radix-tree/multiorder.c
+++ b/tools/testing/radix-tree/multiorder.c
@@ -191,13 +191,13 @@ static void multiorder_shrink(unsigned long index, int order)

assert(item_insert_order(&tree, 0, order) == 0);

- node = tree.rnode;
+ node = tree.xa_head;

assert(item_insert(&tree, index) == 0);
- assert(node != tree.rnode);
+ assert(node != tree.xa_head);

assert(item_delete(&tree, index) != 0);
- assert(node == tree.rnode);
+ assert(node == tree.xa_head);

for (i = 0; i < max; i++) {
struct item *item = item_lookup(&tree, i);
diff --git a/tools/testing/radix-tree/test.c b/tools/testing/radix-tree/test.c
index 0d69c49177c6..6e1cc2040817 100644
--- a/tools/testing/radix-tree/test.c
+++ b/tools/testing/radix-tree/test.c
@@ -262,7 +262,7 @@ static int verify_node(struct radix_tree_node *slot, unsigned int tag,

void verify_tag_consistency(struct radix_tree_root *root, unsigned int tag)
{
- struct radix_tree_node *node = root->rnode;
+ struct radix_tree_node *node = root->xa_head;
if (!radix_tree_is_internal_node(node))
return;
verify_node(node, tag, !!root_tag_get(root, tag));
@@ -292,13 +292,13 @@ void item_kill_tree(struct radix_tree_root *root)
}
}
assert(radix_tree_gang_lookup(root, (void **)items, 0, 32) == 0);
- assert(root->rnode == NULL);
+ assert(root->xa_head == NULL);
}

void tree_verify_min_height(struct radix_tree_root *root, int maxindex)
{
unsigned shift;
- struct radix_tree_node *node = root->rnode;
+ struct radix_tree_node *node = root->xa_head;
if (!radix_tree_is_internal_node(node)) {
assert(maxindex == 0);
return;
--
2.15.1
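
A minimal usage sketch of the API added above (the names below are
hypothetical, and since no XA_FLAG values are defined yet at this point in
the series, only the zero-flags initialisers are shown):

	#include <linux/xarray.h>

	/* File-scope definition, initialised at compile time. */
	static DEFINE_XARRAY(file_scope_array);

	struct my_object {
		struct xarray entries;	/* embedded, as the kernel-doc suggests */
	};

	static void my_object_init(struct my_object *obj)
	{
		xa_init(&obj->entries);	/* same as xa_init_flags(&obj->entries, 0) */

		/* The embedded xa_lock may protect the caller's own data too. */
		xa_lock(&obj->entries);
		xa_unlock(&obj->entries);
	}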


2018-01-18 14:25:42

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH v6 85/99] btrfs: Remove unused spinlock

On Wed, Jan 17, 2018 at 12:21:49PM -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> The reada_lock in struct btrfs_device was only initialised, and not
> actually used. That's good because there's another lock also called
> reada_lock in the btrfs_fs_info that was quite heavily used. Remove
> this one.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

I'll pick this one now, thanks.

2018-01-18 16:30:45

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH v6 00/99] XArray version 6

On Wed, Jan 17, 2018 at 12:20:24PM -0800, Matthew Wilcox wrote:
> From: Matthew Wilcox <[email protected]>
>
> This version of the XArray has no known bugs.

I've booted this patchset on 2 boxes; both had random problems during
boot. On one I was not able to diagnose what went wrong. On the other
one the system booted up to userspace and failed to set up networking.
Serial console worked and the network service complained about the wrong
format of /usr/share/wicked/schema/team.xml. That's supposed to be a
text file, though hexdump showed me lots of zeros. Trimmed output:

00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
(similar output here)
*
00000a10 00 00 00 00 00 00 00 00 11 03 00 00 00 00 00 00 |................|
00000a20 20 8b 7f 01 00 00 00 00 a0 84 7d 01 00 00 00 00 | .........}.....|
00000a30 00 00 00 00 00 00 00 00 10 89 7f 01 00 00 00 00 |................|
00000a40 a0 84 7d 01 00 00 00 00 00 00 00 00 00 00 00 00 |..}.............|
00000a50 80 8a 7f 01 00 00 00 00 e0 cf 7d 01 00 00 00 00 |..........}.....|
00000a60 00 00 00 00 00 00 00 00 60 8a 7f 01 00 00 00 00 |........`.......|
00000a70 a0 84 7d 01 00 00 00 00 00 00 00 00 00 00 00 00 |..}.............|
00000a80 30 89 7f 01 00 00 00 00 a0 84 7d 01 00 00 00 00 |0.........}.....|
00000a90 00 00 00 00 00 00 00 00 60 f2 7f 01 00 00 00 00 |........`.......|
00000aa0 40 fd 7e 01 00 00 00 00 00 00 00 00 00 00 00 00 |@.~.............|
00000ab0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001000 3e 0a 20 20 3c 2f 6d 65 74 68 6f 64 3e 0a 3c 2f |>. </method>.</|
00001010 73 65 72 76 69 63 65 3e 0a |service>.|

There's something at the end of the file that does look like an xml fragment.
The file size is 4121 bytes. This looks to me like exactly the first page of the file
was not read correctly.

The xml file is supposed to be read-only during startup, so there was no write
in flight. 'rpm -Vv' reported only this file corrupted. Booting to other
kernels was fine, network up, and the file was ok again. So the
corruption happened only in memory, which leads me to the conclusion that
there is an unknown bug in your patchset.

2018-01-18 16:50:32

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH v6 00/99] XArray version 6

On Thu, Jan 18, 2018 at 05:07:50PM +0100, David Sterba wrote:
> On Wed, Jan 17, 2018 at 12:20:24PM -0800, Matthew Wilcox wrote:
> > From: Matthew Wilcox <[email protected]>
> >
> > This version of the XArray has no known bugs.
>
> I've booted this patchset on 2 boxes; both had random problems during
> boot. On one I was not able to diagnose what went wrong. On the other
> one the system booted up to userspace and failed to set up networking.
> Serial console worked and the network service complained about the wrong
> format of /usr/share/wicked/schema/team.xml. That's supposed to be a
> text file, though hexdump showed me lots of zeros. Trimmed output:
>
> 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
> *
> (similar output here)
> *
> 00000a10 00 00 00 00 00 00 00 00 11 03 00 00 00 00 00 00 |................|
> 00000a20 20 8b 7f 01 00 00 00 00 a0 84 7d 01 00 00 00 00 | .........}.....|
> 00000a30 00 00 00 00 00 00 00 00 10 89 7f 01 00 00 00 00 |................|
> 00000a40 a0 84 7d 01 00 00 00 00 00 00 00 00 00 00 00 00 |..}.............|
> 00000a50 80 8a 7f 01 00 00 00 00 e0 cf 7d 01 00 00 00 00 |..........}.....|
> 00000a60 00 00 00 00 00 00 00 00 60 8a 7f 01 00 00 00 00 |........`.......|
> 00000a70 a0 84 7d 01 00 00 00 00 00 00 00 00 00 00 00 00 |..}.............|
> 00000a80 30 89 7f 01 00 00 00 00 a0 84 7d 01 00 00 00 00 |0.........}.....|
> 00000a90 00 00 00 00 00 00 00 00 60 f2 7f 01 00 00 00 00 |........`.......|
> 00000aa0 40 fd 7e 01 00 00 00 00 00 00 00 00 00 00 00 00 |@.~.............|
> 00000ab0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
> *
> 00001000 3e 0a 20 20 3c 2f 6d 65 74 68 6f 64 3e 0a 3c 2f |>. </method>.</|
> 00001010 73 65 72 76 69 63 65 3e 0a |service>.|
>
> There's something at the end of the file that does look like an xml fragment.
> The file size is 4121 bytes. This looks to me like exactly the first page of the file
> was not read correctly.
>
> The xml file is supposed to be read-only during startup, so there was no write
> in flight. 'rpm -Vv' reported only this file corrupted. Booting to other
> kernels was fine, network up, and the file was ok again. So the
> corruption happened only in memory, which leads me to the conclusion that
> there is an unknown bug in your patchset.

Thank you! I shall attempt to debug. Was this with a btrfs root
filesystem? I'm most suspicious of those patches right now, since they've
received next to no testing. I'm going to put together a smaller patchset
which just does the page cache conversion and nothing else in the hope
that we can get that merged this year.

2018-01-18 17:00:21

by David Sterba

[permalink] [raw]
Subject: Re: [PATCH v6 00/99] XArray version 6

On Thu, Jan 18, 2018 at 08:48:43AM -0800, Matthew Wilcox wrote:
> Thank you! I shall attempt to debug. Was this with a btrfs root
> filesystem? I'm most suspicious of those patches right now, since they've
> received next to no testing. I'm going to put together a smaller patchset
> which just does the page cache conversion and nothing else in the hope
> that we can get that merged this year.

No, the root is ext3 and there was no btrfs filesystem mounted at the
time.

2018-01-18 17:03:44

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH v6 00/99] XArray version 6

On Thu, Jan 18, 2018 at 05:56:12PM +0100, David Sterba wrote:
> On Thu, Jan 18, 2018 at 08:48:43AM -0800, Matthew Wilcox wrote:
> > Thank you! I shall attempt to debug. Was this with a btrfs root
> > filesystem? I'm most suspicious of those patches right now, since they've
> > received next to no testing. I'm going to put together a smaller patchset
> > which just does the page cache conversion and nothing else in the hope
> > that we can get that merged this year.
>
> No, the root is ext3 and there was no btrfs filesystem mounted at the
> time.

Found it; I was missing a prerequisite patch. New (smaller) patch series
coming soon.

2018-01-24 08:46:19

by Paul Bolle

[permalink] [raw]
Subject: Re: [PATCH v6 05/99] xarray: Add definition of struct xarray

Matthew,

Just a minor question.

On Wed, 2018-01-17 at 12:20 -0800, Matthew Wilcox wrote:
> This is a direct replacement for struct radix_tree_root. Some of the
> struct members have changed name; convert those, and use a #define so
> that radix_tree users continue to work without change.
>
> Signed-off-by: Matthew Wilcox <[email protected]>

> --- a/include/linux/xarray.h
> +++ b/include/linux/xarray.h
> @@ -10,6 +10,8 @@
> */
>
> #include <linux/bug.h>
> +#include <linux/compiler.h>
> +#include <linux/kconfig.h>

The top Makefile includes linux/kconfig.h globally. (See the odd USERINCLUDE
variable, which is actually part of the LINUXINCLUDE variable, but split off
to make things confusing.)
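
Roughly this fragment, from memory, so treat the exact spelling as
approximate (the '...' elides the unrelated include flags):

	USERINCLUDE := \
			... \
			-include $(srctree)/include/linux/kconfig.h

	LINUXINCLUDE := \
			... \
			$(USERINCLUDE)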

Why do you need to include linux/kconfig.h here?

> #include <linux/spinlock.h>
> #include <linux/types.h>

Thanks,


Paul Bolle