2023-05-05 17:53:46

by Liam R. Howlett

Subject: [PATCH v2 00/36] Maple tree mas_{next,prev}_range() and cleanup

This patch set contains a number of cleanups to make the code more usable
(next/prev range), adds format options to the debug output, and prints the
maple state information in the WARN_ON/BUG_ON code.
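
A rough sketch of what the new macros look like at a call site (illustrative
only; the real definitions and call sites are in patches 0005-0018 below):

	MT_BUG_ON(mas.tree, mas.index > mas.last);	/* existing: dumps the tree */
	MAS_BUG_ON(&mas, mas.index > mas.last);		/* new: also dumps the maple state */
	MAS_WARN_ON(&mas, mas.last < mas.index);	/* new: WARN_ON() variant */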

There is also work here to keep nodes active during iterations, which
reduces the need to re-walk the tree.

Finally, a new interface is added to move to the next or previous range in
the tree, even if that range is empty.
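
A minimal usage sketch (illustrative only; mt and index stand in for an
existing tree and a starting index, and the exact semantics are defined by
the mas_{next,prev}_range() patches below):

	MA_STATE(mas, mt, index, index);
	void *entry;

	rcu_read_lock();
	entry = mas_walk(&mas);			/* mas.index/mas.last now cover index */
	entry = mas_next_range(&mas, ULONG_MAX);	/* step to the next range, occupied or empty */
	/* mas.index and mas.last describe that range; entry may be NULL */
	rcu_read_unlock();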

The organisation of the patches is as follows:

0001-0004 - Small cleanups
0005-0018 - Additional debug options and WARN_ON/BUG_ON changes
0019 - Add __init and __exit to the test module
0020-0021 - More functional cleanups
0022-0026 - Changes to keep nodes active
0027-0034 - Add the new mas_{prev,next}_range()
0035 - Use the new mas_{prev,next}_range() in mmap_region()
0036 - Fix the testing code in the check_alloc_rev_range() testcase

Changes since v1:
- Changed mas_parent_enum() to mas_parent_type() - Thanks Peng Zhang
- Fix the test code to work without debug enabled. - Thanks kernel test robot
<[email protected]>
- Change vma_iter_store() debug log to one line - Thanks Sergey
Senozhatsky
- Change mas_walk() to return entry instead of mas_root() to avoid
potential RCU issue. - Thanks Peng Zhang
- Added a fix for the mas_empty_area() testing.
- Added check for underflow to mas_prev_slot() - Thanks Peng Zhang
- Fixed compile issue in mas_next_slot() interface patch - Thanks
kernel test robot <[email protected]>
- Removed extra check in mas_next_node() - Thanks Peng Zhang
- Added missing T's to commit message in two patches - Thanks Petr
Tesařík
- Fix the change to the do_vmi_align_munmap() function to work with the
new iterator - Thanks kernel test robot <[email protected]>
- Fix undeclared function names in the mas_{prev,next}_range() testing
- Thanks kernel test robot <[email protected]>
- Fix typo in testing code of mas_{prev,next,walk} - Thanks Peng Zhang


v1: https://lore.kernel.org/linux-mm/[email protected]/

Liam R. Howlett (36):
maple_tree: Fix static analyser cppcheck issue
maple_tree: Clean up mas_parent_enum() and rename to mas_parent_type()
maple_tree: Avoid unnecessary ascending
maple_tree: Clean up mas_dfs_postorder()
maple_tree: Add format option to mt_dump()
maple_tree: Add debug BUG_ON and WARN_ON variants
maple_tree: Convert BUG_ON() to MT_BUG_ON()
maple_tree: Change RCU checks to WARN_ON() instead of BUG_ON()
maple_tree: Convert debug code to use MT_WARN_ON() and MAS_WARN_ON()
maple_tree: Use MAS_BUG_ON() when setting a leaf node as a parent
maple_tree: Use MAS_BUG_ON() in mas_set_height()
maple_tree: Use MAS_BUG_ON() from mas_topiary_range()
maple_tree: Use MAS_WR_BUG_ON() in mas_store_prealloc()
maple_tree: Use MAS_BUG_ON() prior to calling mas_meta_gap()
maple_tree: Return error on mte_pivots() out of range
maple_tree: Make test code work without debug enabled
mm: Update validate_mm() to use vma iterator
mm: Update vma_iter_store() to use MAS_WARN_ON()
maple_tree: Add __init and __exit to test module
maple_tree: Remove unnecessary check from mas_destroy()
maple_tree: mas_start() reset depth on dead node
mm/mmap: Change do_vmi_align_munmap() for maple tree iterator changes
maple_tree: Try harder to keep active node after mas_next()
maple_tree: Try harder to keep active node with mas_prev()
maple_tree: Revise limit checks in mas_empty_area{_rev}()
maple_tree: Fix testing mas_empty_area()
maple_tree: Introduce mas_next_slot() interface
maple_tree: Add mas_next_range() and mas_find_range() interfaces
maple_tree: Relocate mas_rewalk() and mas_rewalk_if_dead()
maple_tree: Introduce mas_prev_slot() interface
maple_tree: Add mas_prev_range() and mas_find_range_rev interface
maple_tree: Clear up index and last setting in single entry tree
maple_tree: Update testing code for mas_{next,prev,walk}
mm: Add vma_iter_{next,prev}_range() to vma iterator
mm: Avoid rewalk in mmap_region
maple_tree: Add gap to check_alloc_rev_range() testcase

include/linux/maple_tree.h | 129 ++-
include/linux/mm.h | 13 +
include/linux/mmdebug.h | 14 +
lib/Kconfig.debug | 10 +-
lib/maple_tree.c | 1175 ++++++++++++++-----------
lib/test_maple_tree.c | 863 +++++++++++++++---
mm/debug.c | 9 +
mm/internal.h | 26 +-
mm/mmap.c | 111 ++-
tools/testing/radix-tree/linux/init.h | 1 +
tools/testing/radix-tree/maple.c | 164 ++--
11 files changed, 1760 insertions(+), 755 deletions(-)

--
2.39.2


2023-05-05 17:54:02

by Liam R. Howlett

Subject: [PATCH v2 05/36] maple_tree: Add format option to mt_dump()

From: "Liam R. Howlett" <[email protected]>

Allow different formats to be used when dumping the tree. Hex and decimal
are currently supported.
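
For example (illustrative only):

	mt_dump(mt, mt_dump_hex);	/* dump ranges in hexadecimal */
	mt_dump(mt, mt_dump_dec);	/* dump ranges in decimal */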

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 9 +++-
lib/maple_tree.c | 87 +++++++++++++++++++++-----------
lib/test_maple_tree.c | 10 ++--
mm/internal.h | 4 +-
mm/mmap.c | 8 +--
tools/testing/radix-tree/maple.c | 12 ++---
6 files changed, 82 insertions(+), 48 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fadb5f5978b6..140fb271be4a4 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -670,10 +670,15 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max);


#ifdef CONFIG_DEBUG_MAPLE_TREE
+enum mt_dump_format {
+ mt_dump_dec,
+ mt_dump_hex,
+};
+
extern atomic_t maple_tree_tests_run;
extern atomic_t maple_tree_tests_passed;

-void mt_dump(const struct maple_tree *mt);
+void mt_dump(const struct maple_tree *mt, enum mt_dump_format format);
void mt_validate(struct maple_tree *mt);
void mt_cache_shrink(void);
#define MT_BUG_ON(__tree, __x) do { \
@@ -681,7 +686,7 @@ void mt_cache_shrink(void);
if (__x) { \
pr_info("BUG at %s:%d (%u)\n", \
__func__, __LINE__, __x); \
- mt_dump(__tree); \
+ mt_dump(__tree, mt_dump_hex); \
pr_info("Pass: %u Run:%u\n", \
atomic_read(&maple_tree_tests_passed), \
atomic_read(&maple_tree_tests_run)); \
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 1e7e7249a0cf2..084868865849c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5699,7 +5699,7 @@ void *mas_store(struct ma_state *mas, void *entry)
trace_ma_write(__func__, mas, 0, entry);
#ifdef CONFIG_DEBUG_MAPLE_TREE
if (mas->index > mas->last)
- pr_err("Error %lu > %lu %p\n", mas->index, mas->last, entry);
+ pr_err("Error %lX > %lX %p\n", mas->index, mas->last, entry);
MT_BUG_ON(mas->tree, mas->index > mas->last);
if (mas->index > mas->last) {
mas_set_err(mas, -EINVAL);
@@ -6753,22 +6753,33 @@ static void mas_dfs_postorder(struct ma_state *mas, unsigned long max)

/* Tree validations */
static void mt_dump_node(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth);
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format);
static void mt_dump_range(unsigned long min, unsigned long max,
- unsigned int depth)
+ unsigned int depth, enum mt_dump_format format)
{
static const char spaces[] = " ";

- if (min == max)
- pr_info("%.*s%lu: ", depth * 2, spaces, min);
- else
- pr_info("%.*s%lu-%lu: ", depth * 2, spaces, min, max);
+ switch(format) {
+ case mt_dump_hex:
+ if (min == max)
+ pr_info("%.*s%lx: ", depth * 2, spaces, min);
+ else
+ pr_info("%.*s%lx-%lx: ", depth * 2, spaces, min, max);
+ break;
+ default:
+ case mt_dump_dec:
+ if (min == max)
+ pr_info("%.*s%lu: ", depth * 2, spaces, min);
+ else
+ pr_info("%.*s%lu-%lu: ", depth * 2, spaces, min, max);
+ }
}

static void mt_dump_entry(void *entry, unsigned long min, unsigned long max,
- unsigned int depth)
+ unsigned int depth, enum mt_dump_format format)
{
- mt_dump_range(min, max, depth);
+ mt_dump_range(min, max, depth, format);

if (xa_is_value(entry))
pr_cont("value %ld (0x%lx) [%p]\n", xa_to_value(entry),
@@ -6782,7 +6793,8 @@ static void mt_dump_entry(void *entry, unsigned long min, unsigned long max,
}

static void mt_dump_range64(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_range_64 *node = &mte_to_node(entry)->mr64;
bool leaf = mte_is_leaf(entry);
@@ -6790,8 +6802,16 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
int i;

pr_cont(" contents: ");
- for (i = 0; i < MAPLE_RANGE64_SLOTS - 1; i++)
- pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
+ for (i = 0; i < MAPLE_RANGE64_SLOTS - 1; i++) {
+ switch(format) {
+ case mt_dump_hex:
+ pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
+ break;
+ default:
+ case mt_dump_dec:
+ pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
+ }
+ }
pr_cont("%p\n", node->slot[i]);
for (i = 0; i < MAPLE_RANGE64_SLOTS; i++) {
unsigned long last = max;
@@ -6804,24 +6824,32 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
break;
if (leaf)
mt_dump_entry(mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);
else if (node->slot[i])
mt_dump_node(mt, mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);

if (last == max)
break;
if (last > max) {
- pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
+ switch(format) {
+ case mt_dump_hex:
+ pr_err("node %p last (%lx) > max (%lx) at pivot %d!\n",
node, last, max, i);
- break;
+ break;
+ default:
+ case mt_dump_dec:
+ pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
+ node, last, max, i);
+ }
}
first = last + 1;
}
}

static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_arange_64 *node = &mte_to_node(entry)->ma64;
bool leaf = mte_is_leaf(entry);
@@ -6846,10 +6874,10 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
break;
if (leaf)
mt_dump_entry(mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);
else if (node->slot[i])
mt_dump_node(mt, mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);

if (last == max)
break;
@@ -6863,13 +6891,14 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
}

static void mt_dump_node(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_node *node = mte_to_node(entry);
unsigned int type = mte_node_type(entry);
unsigned int i;

- mt_dump_range(min, max, depth);
+ mt_dump_range(min, max, depth, format);

pr_cont("node %p depth %d type %d parent %p", node, depth, type,
node ? node->parent : NULL);
@@ -6880,15 +6909,15 @@ static void mt_dump_node(const struct maple_tree *mt, void *entry,
if (min + i > max)
pr_cont("OUT OF RANGE: ");
mt_dump_entry(mt_slot(mt, node->slot, i),
- min + i, min + i, depth);
+ min + i, min + i, depth, format);
}
break;
case maple_leaf_64:
case maple_range_64:
- mt_dump_range64(mt, entry, min, max, depth);
+ mt_dump_range64(mt, entry, min, max, depth, format);
break;
case maple_arange_64:
- mt_dump_arange64(mt, entry, min, max, depth);
+ mt_dump_arange64(mt, entry, min, max, depth, format);
break;

default:
@@ -6896,16 +6925,16 @@ static void mt_dump_node(const struct maple_tree *mt, void *entry,
}
}

-void mt_dump(const struct maple_tree *mt)
+void mt_dump(const struct maple_tree *mt, enum mt_dump_format format)
{
void *entry = rcu_dereference_check(mt->ma_root, mt_locked(mt));

pr_info("maple_tree(%p) flags %X, height %u root %p\n",
mt, mt->ma_flags, mt_height(mt), entry);
if (!xa_is_node(entry))
- mt_dump_entry(entry, 0, 0, 0);
+ mt_dump_entry(entry, 0, 0, 0, format);
else if (entry)
- mt_dump_node(mt, entry, 0, mt_node_max(entry), 0);
+ mt_dump_node(mt, entry, 0, mt_node_max(entry), 0, format);
}
EXPORT_SYMBOL_GPL(mt_dump);

@@ -6958,7 +6987,7 @@ static void mas_validate_gaps(struct ma_state *mas)
mas_mn(mas), i,
mas_get_slot(mas, i), gap,
p_end, p_start);
- mt_dump(mas->tree);
+ mt_dump(mas->tree, mt_dump_hex);

MT_BUG_ON(mas->tree,
gap != p_end - p_start + 1);
@@ -6991,7 +7020,7 @@ static void mas_validate_gaps(struct ma_state *mas)
MT_BUG_ON(mas->tree, max_gap > mas->max);
if (ma_gaps(p_mn, mas_parent_type(mas, mte))[p_slot] != max_gap) {
pr_err("gap %p[%u] != %lu\n", p_mn, p_slot, max_gap);
- mt_dump(mas->tree);
+ mt_dump(mas->tree, mt_dump_hex);
}

MT_BUG_ON(mas->tree,
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index f1db333270e9f..d6929270dd36a 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -219,7 +219,7 @@ static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
#ifndef __KERNEL__
if (verbose) {
rcu_barrier();
- mt_dump(mt);
+ mt_dump(mt, mt_dump_dec);
pr_info(" %s test of 0-%lu %luK in %d active (%d total)\n",
__func__, max, mt_get_alloc_size()/1024, mt_nr_allocated(),
mt_nr_tallocated());
@@ -248,7 +248,7 @@ static noinline void check_seq(struct maple_tree *mt, unsigned long max,
#ifndef __KERNEL__
if (verbose) {
rcu_barrier();
- mt_dump(mt);
+ mt_dump(mt, mt_dump_dec);
pr_info(" seq test of 0-%lu %luK in %d active (%d total)\n",
max, mt_get_alloc_size()/1024, mt_nr_allocated(),
mt_nr_tallocated());
@@ -893,7 +893,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
#if DEBUG_ALLOC_RANGE
pr_debug("\tInsert %lu-%lu\n", range[i] >> 12,
(range[i + 1] >> 12) - 1);
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
check_insert_range(mt, range[i] >> 12, (range[i + 1] >> 12) - 1,
xa_mk_value(range[i] >> 12), 0);
@@ -934,7 +934,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
xa_mk_value(req_range[i] >> 12)); /* pointer */
mt_validate(mt);
#if DEBUG_ALLOC_RANGE
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
}

@@ -1572,7 +1572,7 @@ static noinline void check_node_overwrite(struct maple_tree *mt)
mtree_test_store_range(mt, i*100, i*100 + 50, xa_mk_value(i*100));

mtree_test_store_range(mt, 319951, 367950, NULL);
- /*mt_dump(mt); */
+ /*mt_dump(mt, mt_dump_dec); */
mt_validate(mt);
}

diff --git a/mm/internal.h b/mm/internal.h
index 68410c6d97aca..4c195920f5656 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1051,13 +1051,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree);
+ mt_dump(vmi->mas.tree, mt_dump_hex);
}
if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.last < vma->vm_start)) {
printk("%lu < %lu\n", vmi->mas.last, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree);
+ mt_dump(vmi->mas.tree, mt_dump_hex);
}
#endif

diff --git a/mm/mmap.c b/mm/mmap.c
index e96f96c69c342..76eaf12f13903 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -301,7 +301,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)

#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
extern void mt_validate(struct maple_tree *mt);
-extern void mt_dump(const struct maple_tree *mt);
+extern void mt_dump(const struct maple_tree *mt, enum mt_dump_format fmt);

/* Validate the maple tree */
static void validate_mm_mt(struct mm_struct *mm)
@@ -323,18 +323,18 @@ static void validate_mm_mt(struct mm_struct *mm)
pr_emerg("mt vma: %p %lu - %lu\n", vma_mt,
vma_mt->vm_start, vma_mt->vm_end);

- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
if (vma_mt->vm_end != mas.last + 1) {
pr_err("vma: %p vma_mt %lu-%lu\tmt %lu-%lu\n",
mm, vma_mt->vm_start, vma_mt->vm_end,
mas.index, mas.last);
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
}
VM_BUG_ON_MM(vma_mt->vm_end != mas.last + 1, mm);
if (vma_mt->vm_start != mas.index) {
pr_err("vma: %p vma_mt %p %lu - %lu doesn't match\n",
mm, vma_mt, vma_mt->vm_start, vma_mt->vm_end);
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
}
VM_BUG_ON_MM(vma_mt->vm_start != mas.index, mm);
}
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 75df543e019c9..ebcb3faf85ea9 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -1054,7 +1054,7 @@ static noinline void check_erase2_testset(struct maple_tree *mt,
if (entry_count)
MT_BUG_ON(mt, !mt_height(mt));
#if check_erase2_debug > 1
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
#if check_erase2_debug
pr_err("Done\n");
@@ -1085,7 +1085,7 @@ static noinline void check_erase2_testset(struct maple_tree *mt,
mas_for_each(&mas, foo, ULONG_MAX) {
if (xa_is_zero(foo)) {
if (addr == mas.index) {
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
pr_err("retry failed %lu - %lu\n",
mas.index, mas.last);
MT_BUG_ON(mt, 1);
@@ -34513,7 +34513,7 @@ static void *rcu_reader_rev(void *ptr)
if (mas.index != r_start) {
alt = xa_mk_value(index + i * 2 + 1 +
RCU_RANGE_COUNT);
- mt_dump(test->mt);
+ mt_dump(test->mt, mt_dump_dec);
printk("Error: %lu-%lu %p != %lu-%lu %p %p line %d i %d\n",
mas.index, mas.last, entry,
r_start, r_end, expected, alt,
@@ -35784,10 +35784,10 @@ void farmer_tests(void)
struct maple_node *node;
DEFINE_MTREE(tree);

- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

tree.ma_root = xa_mk_value(0);
- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

node = mt_alloc_one(GFP_KERNEL);
node->parent = (void *)((unsigned long)(&tree) | 1);
@@ -35797,7 +35797,7 @@ void farmer_tests(void)
node->mr64.pivot[1] = 1;
node->mr64.pivot[2] = 0;
tree.ma_root = mt_mk_node(node, maple_leaf_64);
- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

node->parent = ma_parent_ptr(node);
ma_free_rcu(node);
--
2.39.2

2023-05-05 17:54:24

by Liam R. Howlett

Subject: [PATCH v2 19/36] maple_tree: Add __init and __exit to test module

The test functions are only needed while the module initialises, so mark
them __init. Add __exit to the module removal function. Some other
variables have been marked static const as well.

Suggested-by: Andrew Morton <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/test_maple_tree.c | 158 +++++++++++++-------------
tools/testing/radix-tree/linux/init.h | 1 +
tools/testing/radix-tree/maple.c | 147 ++++++++++++------------
3 files changed, 155 insertions(+), 151 deletions(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 93b40a78c4f55..19b130c9dddea 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -51,54 +51,54 @@ atomic_t maple_tree_tests_passed;
#else
#define cond_resched() do {} while (0)
#endif
-static
-int mtree_insert_index(struct maple_tree *mt, unsigned long index, gfp_t gfp)
+static int __init mtree_insert_index(struct maple_tree *mt,
+ unsigned long index, gfp_t gfp)
{
return mtree_insert(mt, index, xa_mk_value(index & LONG_MAX), gfp);
}

-static void mtree_erase_index(struct maple_tree *mt, unsigned long index)
+static void __init mtree_erase_index(struct maple_tree *mt, unsigned long index)
{
MT_BUG_ON(mt, mtree_erase(mt, index) != xa_mk_value(index & LONG_MAX));
MT_BUG_ON(mt, mtree_load(mt, index) != NULL);
}

-static int mtree_test_insert(struct maple_tree *mt, unsigned long index,
+static int __init mtree_test_insert(struct maple_tree *mt, unsigned long index,
void *ptr)
{
return mtree_insert(mt, index, ptr, GFP_KERNEL);
}

-static int mtree_test_store_range(struct maple_tree *mt, unsigned long start,
- unsigned long end, void *ptr)
+static int __init mtree_test_store_range(struct maple_tree *mt,
+ unsigned long start, unsigned long end, void *ptr)
{
return mtree_store_range(mt, start, end, ptr, GFP_KERNEL);
}

-static int mtree_test_store(struct maple_tree *mt, unsigned long start,
+static int __init mtree_test_store(struct maple_tree *mt, unsigned long start,
void *ptr)
{
return mtree_test_store_range(mt, start, start, ptr);
}

-static int mtree_test_insert_range(struct maple_tree *mt, unsigned long start,
- unsigned long end, void *ptr)
+static int __init mtree_test_insert_range(struct maple_tree *mt,
+ unsigned long start, unsigned long end, void *ptr)
{
return mtree_insert_range(mt, start, end, ptr, GFP_KERNEL);
}

-static void *mtree_test_load(struct maple_tree *mt, unsigned long index)
+static void __init *mtree_test_load(struct maple_tree *mt, unsigned long index)
{
return mtree_load(mt, index);
}

-static void *mtree_test_erase(struct maple_tree *mt, unsigned long index)
+static void __init *mtree_test_erase(struct maple_tree *mt, unsigned long index)
{
return mtree_erase(mt, index);
}

#if defined(CONFIG_64BIT)
-static noinline void check_mtree_alloc_range(struct maple_tree *mt,
+static noinline void __init check_mtree_alloc_range(struct maple_tree *mt,
unsigned long start, unsigned long end, unsigned long size,
unsigned long expected, int eret, void *ptr)
{
@@ -115,7 +115,7 @@ static noinline void check_mtree_alloc_range(struct maple_tree *mt,
MT_BUG_ON(mt, result != expected);
}

-static noinline void check_mtree_alloc_rrange(struct maple_tree *mt,
+static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt,
unsigned long start, unsigned long end, unsigned long size,
unsigned long expected, int eret, void *ptr)
{
@@ -133,8 +133,8 @@ static noinline void check_mtree_alloc_rrange(struct maple_tree *mt,
}
#endif

-static noinline void check_load(struct maple_tree *mt, unsigned long index,
- void *ptr)
+static noinline void __init check_load(struct maple_tree *mt,
+ unsigned long index, void *ptr)
{
void *ret = mtree_test_load(mt, index);

@@ -143,7 +143,7 @@ static noinline void check_load(struct maple_tree *mt, unsigned long index,
MT_BUG_ON(mt, ret != ptr);
}

-static noinline void check_store_range(struct maple_tree *mt,
+static noinline void __init check_store_range(struct maple_tree *mt,
unsigned long start, unsigned long end, void *ptr, int expected)
{
int ret = -EINVAL;
@@ -159,7 +159,7 @@ static noinline void check_store_range(struct maple_tree *mt,
check_load(mt, i, ptr);
}

-static noinline void check_insert_range(struct maple_tree *mt,
+static noinline void __init check_insert_range(struct maple_tree *mt,
unsigned long start, unsigned long end, void *ptr, int expected)
{
int ret = -EINVAL;
@@ -175,8 +175,8 @@ static noinline void check_insert_range(struct maple_tree *mt,
check_load(mt, i, ptr);
}

-static noinline void check_insert(struct maple_tree *mt, unsigned long index,
- void *ptr)
+static noinline void __init check_insert(struct maple_tree *mt,
+ unsigned long index, void *ptr)
{
int ret = -EINVAL;

@@ -184,7 +184,7 @@ static noinline void check_insert(struct maple_tree *mt, unsigned long index,
MT_BUG_ON(mt, ret != 0);
}

-static noinline void check_dup_insert(struct maple_tree *mt,
+static noinline void __init check_dup_insert(struct maple_tree *mt,
unsigned long index, void *ptr)
{
int ret = -EINVAL;
@@ -194,13 +194,13 @@ static noinline void check_dup_insert(struct maple_tree *mt,
}


-static noinline
-void check_index_load(struct maple_tree *mt, unsigned long index)
+static noinline void __init check_index_load(struct maple_tree *mt,
+ unsigned long index)
{
return check_load(mt, index, xa_mk_value(index & LONG_MAX));
}

-static inline int not_empty(struct maple_node *node)
+static inline __init int not_empty(struct maple_node *node)
{
int i;

@@ -215,8 +215,8 @@ static inline int not_empty(struct maple_node *node)
}


-static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
- bool verbose)
+static noinline void __init check_rev_seq(struct maple_tree *mt,
+ unsigned long max, bool verbose)
{
unsigned long i = max, j;

@@ -248,7 +248,7 @@ static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
#endif
}

-static noinline void check_seq(struct maple_tree *mt, unsigned long max,
+static noinline void __init check_seq(struct maple_tree *mt, unsigned long max,
bool verbose)
{
unsigned long i, j;
@@ -277,7 +277,7 @@ static noinline void check_seq(struct maple_tree *mt, unsigned long max,
#endif
}

-static noinline void check_lb_not_empty(struct maple_tree *mt)
+static noinline void __init check_lb_not_empty(struct maple_tree *mt)
{
unsigned long i, j;
unsigned long huge = 4000UL * 1000 * 1000;
@@ -296,13 +296,13 @@ static noinline void check_lb_not_empty(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_lower_bound_split(struct maple_tree *mt)
+static noinline void __init check_lower_bound_split(struct maple_tree *mt)
{
MT_BUG_ON(mt, !mtree_empty(mt));
check_lb_not_empty(mt);
}

-static noinline void check_upper_bound_split(struct maple_tree *mt)
+static noinline void __init check_upper_bound_split(struct maple_tree *mt)
{
unsigned long i, j;
unsigned long huge;
@@ -327,7 +327,7 @@ static noinline void check_upper_bound_split(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_mid_split(struct maple_tree *mt)
+static noinline void __init check_mid_split(struct maple_tree *mt)
{
unsigned long huge = 8000UL * 1000 * 1000;

@@ -336,7 +336,7 @@ static noinline void check_mid_split(struct maple_tree *mt)
check_lb_not_empty(mt);
}

-static noinline void check_rev_find(struct maple_tree *mt)
+static noinline void __init check_rev_find(struct maple_tree *mt)
{
int i, nr_entries = 200;
void *val;
@@ -375,7 +375,7 @@ static noinline void check_rev_find(struct maple_tree *mt)
rcu_read_unlock();
}

-static noinline void check_find(struct maple_tree *mt)
+static noinline void __init check_find(struct maple_tree *mt)
{
unsigned long val = 0;
unsigned long count;
@@ -592,7 +592,7 @@ static noinline void check_find(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_find_2(struct maple_tree *mt)
+static noinline void __init check_find_2(struct maple_tree *mt)
{
unsigned long i, j;
void *entry;
@@ -637,7 +637,7 @@ static noinline void check_find_2(struct maple_tree *mt)


#if defined(CONFIG_64BIT)
-static noinline void check_alloc_rev_range(struct maple_tree *mt)
+static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
{
/*
* Generated by:
@@ -645,7 +645,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
* awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}'
*/

- unsigned long range[] = {
+ static const unsigned long range[] = {
/* Inclusive , Exclusive. */
0x565234af2000, 0x565234af4000,
0x565234af4000, 0x565234af9000,
@@ -673,7 +673,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
0x7fff58791000, 0x7fff58793000,
};

- unsigned long holes[] = {
+ static const unsigned long holes[] = {
/*
* Note: start of hole is INCLUSIVE
* end of hole is EXCLUSIVE
@@ -693,7 +693,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
* 4. number that should be returned.
* 5. return value
*/
- unsigned long req_range[] = {
+ static const unsigned long req_range[] = {
0x565234af9000, /* Min */
0x7fff58791000, /* Max */
0x1000, /* Size */
@@ -804,7 +804,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_alloc_range(struct maple_tree *mt)
+static noinline void __init check_alloc_range(struct maple_tree *mt)
{
/*
* Generated by:
@@ -812,7 +812,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
* awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}'
*/

- unsigned long range[] = {
+ static const unsigned long range[] = {
/* Inclusive , Exclusive. */
0x565234af2000, 0x565234af4000,
0x565234af4000, 0x565234af9000,
@@ -839,7 +839,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
0x7fff5878e000, 0x7fff58791000,
0x7fff58791000, 0x7fff58793000,
};
- unsigned long holes[] = {
+ static const unsigned long holes[] = {
/* Start of hole, end of hole, size of hole (+1) */
0x565234afb000, 0x565234afc000, 0x1000,
0x565234afe000, 0x565235def000, 0x12F1000,
@@ -854,7 +854,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
* 4. number that should be returned.
* 5. return value
*/
- unsigned long req_range[] = {
+ static const unsigned long req_range[] = {
0x565234af9000, /* Min */
0x7fff58791000, /* Max */
0x1000, /* Size */
@@ -963,10 +963,10 @@ static noinline void check_alloc_range(struct maple_tree *mt)
}
#endif

-static noinline void check_ranges(struct maple_tree *mt)
+static noinline void __init check_ranges(struct maple_tree *mt)
{
int i, val, val2;
- unsigned long r[] = {
+ static const unsigned long r[] = {
10, 15,
20, 25,
17, 22, /* Overlaps previous range. */
@@ -1231,7 +1231,7 @@ static noinline void check_ranges(struct maple_tree *mt)
MT_BUG_ON(mt, mt_height(mt) != 4);
}

-static noinline void check_next_entry(struct maple_tree *mt)
+static noinline void __init check_next_entry(struct maple_tree *mt)
{
void *entry = NULL;
unsigned long limit = 30, i = 0;
@@ -1255,7 +1255,7 @@ static noinline void check_next_entry(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_prev_entry(struct maple_tree *mt)
+static noinline void __init check_prev_entry(struct maple_tree *mt)
{
unsigned long index = 16;
void *value;
@@ -1299,7 +1299,7 @@ static noinline void check_prev_entry(struct maple_tree *mt)
mas_unlock(&mas);
}

-static noinline void check_root_expand(struct maple_tree *mt)
+static noinline void __init check_root_expand(struct maple_tree *mt)
{
MA_STATE(mas, mt, 0, 0);
void *ptr;
@@ -1388,13 +1388,13 @@ static noinline void check_root_expand(struct maple_tree *mt)
mas_unlock(&mas);
}

-static noinline void check_gap_combining(struct maple_tree *mt)
+static noinline void __init check_gap_combining(struct maple_tree *mt)
{
struct maple_enode *mn1, *mn2;
void *entry;
unsigned long singletons = 100;
- unsigned long *seq100;
- unsigned long seq100_64[] = {
+ static const unsigned long *seq100;
+ static const unsigned long seq100_64[] = {
/* 0-5 */
74, 75, 76,
50, 100, 2,
@@ -1408,7 +1408,7 @@ static noinline void check_gap_combining(struct maple_tree *mt)
76, 2, 79, 85, 4,
};

- unsigned long seq100_32[] = {
+ static const unsigned long seq100_32[] = {
/* 0-5 */
61, 62, 63,
50, 100, 2,
@@ -1422,11 +1422,11 @@ static noinline void check_gap_combining(struct maple_tree *mt)
76, 2, 79, 85, 4,
};

- unsigned long seq2000[] = {
+ static const unsigned long seq2000[] = {
1152, 1151,
1100, 1200, 2,
};
- unsigned long seq400[] = {
+ static const unsigned long seq400[] = {
286, 318,
256, 260, 266, 270, 275, 280, 290, 398,
286, 310,
@@ -1585,7 +1585,7 @@ static noinline void check_gap_combining(struct maple_tree *mt)
mt_set_non_kernel(0);
mtree_destroy(mt);
}
-static noinline void check_node_overwrite(struct maple_tree *mt)
+static noinline void __init check_node_overwrite(struct maple_tree *mt)
{
int i, max = 4000;

@@ -1598,7 +1598,7 @@ static noinline void check_node_overwrite(struct maple_tree *mt)
}

#if defined(BENCH_SLOT_STORE)
-static noinline void bench_slot_store(struct maple_tree *mt)
+static noinline void __init bench_slot_store(struct maple_tree *mt)
{
int i, brk = 105, max = 1040, brk_start = 100, count = 20000000;

@@ -1614,7 +1614,7 @@ static noinline void bench_slot_store(struct maple_tree *mt)
#endif

#if defined(BENCH_NODE_STORE)
-static noinline void bench_node_store(struct maple_tree *mt)
+static noinline void __init bench_node_store(struct maple_tree *mt)
{
int i, overwrite = 76, max = 240, count = 20000000;

@@ -1633,7 +1633,7 @@ static noinline void bench_node_store(struct maple_tree *mt)
#endif

#if defined(BENCH_AWALK)
-static noinline void bench_awalk(struct maple_tree *mt)
+static noinline void __init bench_awalk(struct maple_tree *mt)
{
int i, max = 2500, count = 50000000;
MA_STATE(mas, mt, 1470, 1470);
@@ -1650,7 +1650,7 @@ static noinline void bench_awalk(struct maple_tree *mt)
}
#endif
#if defined(BENCH_WALK)
-static noinline void bench_walk(struct maple_tree *mt)
+static noinline void __init bench_walk(struct maple_tree *mt)
{
int i, max = 2500, count = 550000000;
MA_STATE(mas, mt, 1470, 1470);
@@ -1667,7 +1667,7 @@ static noinline void bench_walk(struct maple_tree *mt)
#endif

#if defined(BENCH_MT_FOR_EACH)
-static noinline void bench_mt_for_each(struct maple_tree *mt)
+static noinline void __init bench_mt_for_each(struct maple_tree *mt)
{
int i, count = 1000000;
unsigned long max = 2500, index = 0;
@@ -1691,7 +1691,7 @@ static noinline void bench_mt_for_each(struct maple_tree *mt)
#endif

/* check_forking - simulate the kernel forking sequence with the tree. */
-static noinline void check_forking(struct maple_tree *mt)
+static noinline void __init check_forking(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1730,7 +1730,7 @@ static noinline void check_forking(struct maple_tree *mt)
mtree_destroy(&newmt);
}

-static noinline void check_iteration(struct maple_tree *mt)
+static noinline void __init check_iteration(struct maple_tree *mt)
{
int i, nr_entries = 125;
void *val;
@@ -1798,7 +1798,7 @@ static noinline void check_iteration(struct maple_tree *mt)
mt_set_non_kernel(0);
}

-static noinline void check_mas_store_gfp(struct maple_tree *mt)
+static noinline void __init check_mas_store_gfp(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1831,7 +1831,7 @@ static noinline void check_mas_store_gfp(struct maple_tree *mt)
}

#if defined(BENCH_FORK)
-static noinline void bench_forking(struct maple_tree *mt)
+static noinline void __init bench_forking(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1873,15 +1873,17 @@ static noinline void bench_forking(struct maple_tree *mt)
}
#endif

-static noinline void next_prev_test(struct maple_tree *mt)
+static noinline void __init next_prev_test(struct maple_tree *mt)
{
int i, nr_entries;
void *val;
MA_STATE(mas, mt, 0, 0);
struct maple_enode *mn;
- unsigned long *level2;
- unsigned long level2_64[] = {707, 1000, 710, 715, 720, 725};
- unsigned long level2_32[] = {1747, 2000, 1750, 1755, 1760, 1765};
+ static const unsigned long *level2;
+ static const unsigned long level2_64[] = { 707, 1000, 710, 715, 720,
+ 725};
+ static const unsigned long level2_32[] = { 1747, 2000, 1750, 1755,
+ 1760, 1765};

if (MAPLE_32BIT) {
nr_entries = 500;
@@ -2049,7 +2051,7 @@ static noinline void next_prev_test(struct maple_tree *mt)


/* Test spanning writes that require balancing right sibling or right cousin */
-static noinline void check_spanning_relatives(struct maple_tree *mt)
+static noinline void __init check_spanning_relatives(struct maple_tree *mt)
{

unsigned long i, nr_entries = 1000;
@@ -2062,7 +2064,7 @@ static noinline void check_spanning_relatives(struct maple_tree *mt)
mtree_store_range(mt, 9365, 9955, NULL, GFP_KERNEL);
}

-static noinline void check_fuzzer(struct maple_tree *mt)
+static noinline void __init check_fuzzer(struct maple_tree *mt)
{
/*
* 1. Causes a spanning rebalance of a single root node.
@@ -2459,7 +2461,7 @@ static noinline void check_fuzzer(struct maple_tree *mt)
}

/* duplicate the tree with a specific gap */
-static noinline void check_dup_gaps(struct maple_tree *mt,
+static noinline void __init check_dup_gaps(struct maple_tree *mt,
unsigned long nr_entries, bool zero_start,
unsigned long gap)
{
@@ -2499,7 +2501,7 @@ static noinline void check_dup_gaps(struct maple_tree *mt,
}

/* Duplicate many sizes of trees. Mainly to test expected entry values */
-static noinline void check_dup(struct maple_tree *mt)
+static noinline void __init check_dup(struct maple_tree *mt)
{
int i;
int big_start = 100010;
@@ -2587,7 +2589,7 @@ static noinline void check_dup(struct maple_tree *mt)
}
}

-static noinline void check_bnode_min_spanning(struct maple_tree *mt)
+static noinline void __init check_bnode_min_spanning(struct maple_tree *mt)
{
int i = 50;
MA_STATE(mas, mt, 0, 0);
@@ -2606,7 +2608,7 @@ static noinline void check_bnode_min_spanning(struct maple_tree *mt)
mt_set_non_kernel(0);
}

-static noinline void check_empty_area_window(struct maple_tree *mt)
+static noinline void __init check_empty_area_window(struct maple_tree *mt)
{
unsigned long i, nr_entries = 20;
MA_STATE(mas, mt, 0, 0);
@@ -2691,7 +2693,7 @@ static noinline void check_empty_area_window(struct maple_tree *mt)
rcu_read_unlock();
}

-static noinline void check_empty_area_fill(struct maple_tree *mt)
+static noinline void __init check_empty_area_fill(struct maple_tree *mt)
{
const unsigned long max = 0x25D78000;
unsigned long size;
@@ -2735,11 +2737,11 @@ static noinline void check_empty_area_fill(struct maple_tree *mt)
}

static DEFINE_MTREE(tree);
-static int maple_tree_seed(void)
+static int __init maple_tree_seed(void)
{
- unsigned long set[] = {5015, 5014, 5017, 25, 1000,
- 1001, 1002, 1003, 1005, 0,
- 5003, 5002};
+ unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
+ 1001, 1002, 1003, 1005, 0,
+ 5003, 5002};
void *ptr = &set;

pr_info("\nTEST STARTING\n\n");
@@ -3009,7 +3011,7 @@ static int maple_tree_seed(void)
return -EINVAL;
}

-static void maple_tree_harvest(void)
+static void __exit maple_tree_harvest(void)
{

}
diff --git a/tools/testing/radix-tree/linux/init.h b/tools/testing/radix-tree/linux/init.h
index 1bb0afc213099..81563c3dfce79 100644
--- a/tools/testing/radix-tree/linux/init.h
+++ b/tools/testing/radix-tree/linux/init.h
@@ -1 +1,2 @@
#define __init
+#define __exit
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index cf37ed9ab6c4d..03539d86cdf0f 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -14,6 +14,7 @@
#include "test.h"
#include <stdlib.h>
#include <time.h>
+#include "linux/init.h"

#define module_init(x)
#define module_exit(x)
@@ -80,7 +81,7 @@ static void check_mas_alloc_node_count(struct ma_state *mas)
* check_new_node() - Check the creation of new nodes and error path
* verification.
*/
-static noinline void check_new_node(struct maple_tree *mt)
+static noinline void __init check_new_node(struct maple_tree *mt)
{

struct maple_node *mn, *mn2, *mn3;
@@ -454,7 +455,7 @@ static noinline void check_new_node(struct maple_tree *mt)
/*
* Check erasing including RCU.
*/
-static noinline void check_erase(struct maple_tree *mt, unsigned long index,
+static noinline void __init check_erase(struct maple_tree *mt, unsigned long index,
void *ptr)
{
MT_BUG_ON(mt, mtree_test_erase(mt, index) != ptr);
@@ -464,24 +465,24 @@ static noinline void check_erase(struct maple_tree *mt, unsigned long index,
#define erase_check_insert(mt, i) check_insert(mt, set[i], entry[i%2])
#define erase_check_erase(mt, i) check_erase(mt, set[i], entry[i%2])

-static noinline void check_erase_testset(struct maple_tree *mt)
+static noinline void __init check_erase_testset(struct maple_tree *mt)
{
- unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
- 1001, 1002, 1003, 1005, 0,
- 6003, 6002, 6008, 6012, 6015,
- 7003, 7002, 7008, 7012, 7015,
- 8003, 8002, 8008, 8012, 8015,
- 9003, 9002, 9008, 9012, 9015,
- 10003, 10002, 10008, 10012, 10015,
- 11003, 11002, 11008, 11012, 11015,
- 12003, 12002, 12008, 12012, 12015,
- 13003, 13002, 13008, 13012, 13015,
- 14003, 14002, 14008, 14012, 14015,
- 15003, 15002, 15008, 15012, 15015,
- };
-
-
- void *ptr = &set;
+ static const unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
+ 1001, 1002, 1003, 1005, 0,
+ 6003, 6002, 6008, 6012, 6015,
+ 7003, 7002, 7008, 7012, 7015,
+ 8003, 8002, 8008, 8012, 8015,
+ 9003, 9002, 9008, 9012, 9015,
+ 10003, 10002, 10008, 10012, 10015,
+ 11003, 11002, 11008, 11012, 11015,
+ 12003, 12002, 12008, 12012, 12015,
+ 13003, 13002, 13008, 13012, 13015,
+ 14003, 14002, 14008, 14012, 14015,
+ 15003, 15002, 15008, 15012, 15015,
+ };
+
+
+ void *ptr = &check_erase_testset;
void *entry[2] = { ptr, mt };
void *root_node;

@@ -738,7 +739,7 @@ static noinline void check_erase_testset(struct maple_tree *mt)
int mas_ce2_over_count(struct ma_state *mas_start, struct ma_state *mas_end,
void *s_entry, unsigned long s_min,
void *e_entry, unsigned long e_max,
- unsigned long *set, int i, bool null_entry)
+ const unsigned long *set, int i, bool null_entry)
{
int count = 0, span = 0;
unsigned long retry = 0;
@@ -968,8 +969,8 @@ static inline void *mas_range_load(struct ma_state *mas,
}

#if defined(CONFIG_64BIT)
-static noinline void check_erase2_testset(struct maple_tree *mt,
- unsigned long *set, unsigned long size)
+static noinline void __init check_erase2_testset(struct maple_tree *mt,
+ const unsigned long *set, unsigned long size)
{
int entry_count = 0;
int check = 0;
@@ -1113,11 +1114,11 @@ static noinline void check_erase2_testset(struct maple_tree *mt,


/* These tests were pulled from KVM tree modifications which failed. */
-static noinline void check_erase2_sets(struct maple_tree *mt)
+static noinline void __init check_erase2_sets(struct maple_tree *mt)
{
void *entry;
unsigned long start = 0;
- unsigned long set[] = {
+ static const unsigned long set[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140721266458624, 140737488351231,
ERASE, 140721266458624, 140737488351231,
@@ -1135,7 +1136,7 @@ ERASE, 140253902692352, 140253902864383,
STORE, 140253902692352, 140253902696447,
STORE, 140253902696448, 140253902864383,
};
- unsigned long set2[] = {
+ static const unsigned long set2[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735933583360, 140737488351231,
ERASE, 140735933583360, 140737488351231,
@@ -1159,7 +1160,7 @@ STORE, 140277094813696, 140277094821887,
STORE, 140277094821888, 140277094825983,
STORE, 140735933906944, 140735933911039,
};
- unsigned long set3[] = {
+ static const unsigned long set3[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735790264320, 140737488351231,
ERASE, 140735790264320, 140737488351231,
@@ -1202,7 +1203,7 @@ STORE, 47135835840512, 47135835885567,
STORE, 47135835885568, 47135835893759,
};

- unsigned long set4[] = {
+ static const unsigned long set4[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140728251703296, 140737488351231,
ERASE, 140728251703296, 140737488351231,
@@ -1223,7 +1224,7 @@ ERASE, 47646523277312, 47646523445247,
STORE, 47646523277312, 47646523400191,
};

- unsigned long set5[] = {
+ static const unsigned long set5[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140726874062848, 140737488351231,
ERASE, 140726874062848, 140737488351231,
@@ -1356,7 +1357,7 @@ STORE, 47884791619584, 47884791623679,
STORE, 47884791623680, 47884791627775,
};

- unsigned long set6[] = {
+ static const unsigned long set6[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722999021568, 140737488351231,
ERASE, 140722999021568, 140737488351231,
@@ -1488,7 +1489,7 @@ ERASE, 47430432014336, 47430432022527,
STORE, 47430432014336, 47430432018431,
STORE, 47430432018432, 47430432022527,
};
- unsigned long set7[] = {
+ static const unsigned long set7[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140729808330752, 140737488351231,
ERASE, 140729808330752, 140737488351231,
@@ -1620,7 +1621,7 @@ ERASE, 47439987130368, 47439987138559,
STORE, 47439987130368, 47439987134463,
STORE, 47439987134464, 47439987138559,
};
- unsigned long set8[] = {
+ static const unsigned long set8[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722482974720, 140737488351231,
ERASE, 140722482974720, 140737488351231,
@@ -1753,7 +1754,7 @@ STORE, 47708488638464, 47708488642559,
STORE, 47708488642560, 47708488646655,
};

- unsigned long set9[] = {
+ static const unsigned long set9[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736427839488, 140737488351231,
ERASE, 140736427839488, 140736427839488,
@@ -5619,7 +5620,7 @@ ERASE, 47906195480576, 47906195480576,
STORE, 94641242615808, 94641242750975,
};

- unsigned long set10[] = {
+ static const unsigned long set10[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736427839488, 140737488351231,
ERASE, 140736427839488, 140736427839488,
@@ -9483,7 +9484,7 @@ STORE, 139726599680000, 139726599684095,
ERASE, 47906195480576, 47906195480576,
STORE, 94641242615808, 94641242750975,
};
- unsigned long set11[] = {
+ static const unsigned long set11[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140732658499584, 140737488351231,
ERASE, 140732658499584, 140732658499584,
@@ -9509,7 +9510,7 @@ STORE, 140732658565120, 140732658569215,
STORE, 140732658552832, 140732658565119,
};

- unsigned long set12[] = { /* contains 12 values. */
+ static const unsigned long set12[] = { /* contains 12 values. */
STORE, 140737488347136, 140737488351231,
STORE, 140732658499584, 140737488351231,
ERASE, 140732658499584, 140732658499584,
@@ -9536,7 +9537,7 @@ STORE, 140732658552832, 140732658565119,
STORE, 140014592741375, 140014592741375, /* contrived */
STORE, 140014592733184, 140014592741376, /* creates first entry retry. */
};
- unsigned long set13[] = {
+ static const unsigned long set13[] = {
STORE, 140373516247040, 140373516251135,/*: ffffa2e7b0e10d80 */
STORE, 140373516251136, 140373516255231,/*: ffffa2e7b1195d80 */
STORE, 140373516255232, 140373516443647,/*: ffffa2e7b0e109c0 */
@@ -9549,7 +9550,7 @@ STORE, 140373518684160, 140373518688254,/*: ffffa2e7b05fec00 */
STORE, 140373518688256, 140373518692351,/*: ffffa2e7bfbdcd80 */
STORE, 140373518692352, 140373518696447,/*: ffffa2e7b0749e40 */
};
- unsigned long set14[] = {
+ static const unsigned long set14[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731667996672, 140737488351231,
SNULL, 140731668000767, 140737488351231,
@@ -9833,7 +9834,7 @@ SNULL, 139826136543232, 139826136809471,
STORE, 139826136809472, 139826136842239,
STORE, 139826136543232, 139826136809471,
};
- unsigned long set15[] = {
+ static const unsigned long set15[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722061451264, 140737488351231,
SNULL, 140722061455359, 140737488351231,
@@ -10118,7 +10119,7 @@ STORE, 139906808958976, 139906808991743,
STORE, 139906808692736, 139906808958975,
};

- unsigned long set16[] = {
+ static const unsigned long set16[] = {
STORE, 94174808662016, 94174809321471,
STORE, 94174811414528, 94174811426815,
STORE, 94174811426816, 94174811430911,
@@ -10329,7 +10330,7 @@ STORE, 139921865613312, 139921865617407,
STORE, 139921865547776, 139921865564159,
};

- unsigned long set17[] = {
+ static const unsigned long set17[] = {
STORE, 94397057224704, 94397057646591,
STORE, 94397057650688, 94397057691647,
STORE, 94397057691648, 94397057695743,
@@ -10391,7 +10392,7 @@ STORE, 140720477511680, 140720477646847,
STORE, 140720478302208, 140720478314495,
STORE, 140720478314496, 140720478318591,
};
- unsigned long set18[] = {
+ static const unsigned long set18[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140724953673728, 140737488351231,
SNULL, 140724953677823, 140737488351231,
@@ -10424,7 +10425,7 @@ STORE, 140222970597376, 140222970605567,
ERASE, 140222970597376, 140222970605567,
STORE, 140222970597376, 140222970605567,
};
- unsigned long set19[] = {
+ static const unsigned long set19[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140725182459904, 140737488351231,
SNULL, 140725182463999, 140737488351231,
@@ -10693,7 +10694,7 @@ STORE, 140656836775936, 140656836780031,
STORE, 140656787476480, 140656791920639,
ERASE, 140656774639616, 140656779083775,
};
- unsigned long set20[] = {
+ static const unsigned long set20[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735952392192, 140737488351231,
SNULL, 140735952396287, 140737488351231,
@@ -10849,7 +10850,7 @@ STORE, 140590386819072, 140590386823167,
STORE, 140590386823168, 140590386827263,
SNULL, 140590376591359, 140590376595455,
};
- unsigned long set21[] = {
+ static const unsigned long set21[] = {
STORE, 93874710941696, 93874711363583,
STORE, 93874711367680, 93874711408639,
STORE, 93874711408640, 93874711412735,
@@ -10919,7 +10920,7 @@ ERASE, 140708393312256, 140708393316351,
ERASE, 140708393308160, 140708393312255,
ERASE, 140708393291776, 140708393308159,
};
- unsigned long set22[] = {
+ static const unsigned long set22[] = {
STORE, 93951397134336, 93951397183487,
STORE, 93951397183488, 93951397728255,
STORE, 93951397728256, 93951397826559,
@@ -11046,7 +11047,7 @@ STORE, 140551361253376, 140551361519615,
ERASE, 140551361253376, 140551361519615,
};

- unsigned long set23[] = {
+ static const unsigned long set23[] = {
STORE, 94014447943680, 94014448156671,
STORE, 94014450253824, 94014450257919,
STORE, 94014450257920, 94014450266111,
@@ -14370,7 +14371,7 @@ SNULL, 140175956627455, 140175985139711,
STORE, 140175927242752, 140175956627455,
STORE, 140175956627456, 140175985139711,
};
- unsigned long set24[] = {
+ static const unsigned long set24[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735281639424, 140737488351231,
SNULL, 140735281643519, 140737488351231,
@@ -15532,7 +15533,7 @@ ERASE, 139635393024000, 139635401412607,
ERASE, 139635384627200, 139635384631295,
ERASE, 139635384631296, 139635393019903,
};
- unsigned long set25[] = {
+ static const unsigned long set25[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140737488343040, 140737488351231,
STORE, 140722547441664, 140737488351231,
@@ -22320,7 +22321,7 @@ STORE, 140249652703232, 140249682087935,
STORE, 140249682087936, 140249710600191,
};

- unsigned long set26[] = {
+ static const unsigned long set26[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140729464770560, 140737488351231,
SNULL, 140729464774655, 140737488351231,
@@ -22344,7 +22345,7 @@ ERASE, 140109040951296, 140109040959487,
STORE, 140109040955392, 140109040959487,
ERASE, 140109040955392, 140109040959487,
};
- unsigned long set27[] = {
+ static const unsigned long set27[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140726128070656, 140737488351231,
SNULL, 140726128074751, 140737488351231,
@@ -22740,7 +22741,7 @@ STORE, 140415509696512, 140415535910911,
ERASE, 140415537422336, 140415562588159,
STORE, 140415482433536, 140415509696511,
};
- unsigned long set28[] = {
+ static const unsigned long set28[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722475622400, 140737488351231,
SNULL, 140722475626495, 140737488351231,
@@ -22808,7 +22809,7 @@ STORE, 139918413348864, 139918413352959,
ERASE, 139918413316096, 139918413344767,
STORE, 93865848528896, 93865848664063,
};
- unsigned long set29[] = {
+ static const unsigned long set29[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734467944448, 140737488351231,
SNULL, 140734467948543, 140737488351231,
@@ -23683,7 +23684,7 @@ ERASE, 140143079972864, 140143088361471,
ERASE, 140143205793792, 140143205797887,
ERASE, 140143205797888, 140143214186495,
};
- unsigned long set30[] = {
+ static const unsigned long set30[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140733436743680, 140737488351231,
SNULL, 140733436747775, 140737488351231,
@@ -24565,7 +24566,7 @@ ERASE, 140165225893888, 140165225897983,
ERASE, 140165225897984, 140165234286591,
ERASE, 140165058105344, 140165058109439,
};
- unsigned long set31[] = {
+ static const unsigned long set31[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730890784768, 140737488351231,
SNULL, 140730890788863, 140737488351231,
@@ -25378,7 +25379,7 @@ ERASE, 140623906590720, 140623914979327,
ERASE, 140622950277120, 140622950281215,
ERASE, 140622950281216, 140622958669823,
};
- unsigned long set32[] = {
+ static const unsigned long set32[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731244212224, 140737488351231,
SNULL, 140731244216319, 140737488351231,
@@ -26174,7 +26175,7 @@ ERASE, 140400417288192, 140400425676799,
ERASE, 140400283066368, 140400283070463,
ERASE, 140400283070464, 140400291459071,
};
- unsigned long set33[] = {
+ static const unsigned long set33[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734562918400, 140737488351231,
SNULL, 140734562922495, 140737488351231,
@@ -26316,7 +26317,7 @@ STORE, 140582961786880, 140583003750399,
ERASE, 140582961786880, 140583003750399,
};

- unsigned long set34[] = {
+ static const unsigned long set34[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731327180800, 140737488351231,
SNULL, 140731327184895, 140737488351231,
@@ -27197,7 +27198,7 @@ ERASE, 140012522094592, 140012530483199,
ERASE, 140012033142784, 140012033146879,
ERASE, 140012033146880, 140012041535487,
};
- unsigned long set35[] = {
+ static const unsigned long set35[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730536939520, 140737488351231,
SNULL, 140730536943615, 140737488351231,
@@ -27954,7 +27955,7 @@ ERASE, 140474471936000, 140474480324607,
ERASE, 140474396430336, 140474396434431,
ERASE, 140474396434432, 140474404823039,
};
- unsigned long set36[] = {
+ static const unsigned long set36[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140723893125120, 140737488351231,
SNULL, 140723893129215, 140737488351231,
@@ -28815,7 +28816,7 @@ ERASE, 140121890357248, 140121898745855,
ERASE, 140121269587968, 140121269592063,
ERASE, 140121269592064, 140121277980671,
};
- unsigned long set37[] = {
+ static const unsigned long set37[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722404016128, 140737488351231,
SNULL, 140722404020223, 140737488351231,
@@ -28941,7 +28942,7 @@ STORE, 139759821246464, 139759888355327,
ERASE, 139759821246464, 139759888355327,
ERASE, 139759888355328, 139759955464191,
};
- unsigned long set38[] = {
+ static const unsigned long set38[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730666221568, 140737488351231,
SNULL, 140730666225663, 140737488351231,
@@ -29751,7 +29752,7 @@ ERASE, 140613504712704, 140613504716799,
ERASE, 140613504716800, 140613513105407,
};

- unsigned long set39[] = {
+ static const unsigned long set39[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736271417344, 140737488351231,
SNULL, 140736271421439, 140737488351231,
@@ -30123,7 +30124,7 @@ STORE, 140325364428800, 140325372821503,
STORE, 140325356036096, 140325364428799,
SNULL, 140325364432895, 140325372821503,
};
- unsigned long set40[] = {
+ static const unsigned long set40[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734309167104, 140737488351231,
SNULL, 140734309171199, 140737488351231,
@@ -30874,7 +30875,7 @@ ERASE, 140320289300480, 140320289304575,
ERASE, 140320289304576, 140320297693183,
ERASE, 140320163409920, 140320163414015,
};
- unsigned long set41[] = {
+ static const unsigned long set41[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140728157171712, 140737488351231,
SNULL, 140728157175807, 140737488351231,
@@ -31184,7 +31185,7 @@ STORE, 94376135090176, 94376135094271,
STORE, 94376135094272, 94376135098367,
SNULL, 94376135094272, 94377208836095,
};
- unsigned long set42[] = {
+ static const unsigned long set42[] = {
STORE, 314572800, 1388314623,
STORE, 1462157312, 1462169599,
STORE, 1462169600, 1462185983,
@@ -33861,7 +33862,7 @@ SNULL, 3798999040, 3799101439,
*/
};

- unsigned long set43[] = {
+ static const unsigned long set43[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734187720704, 140737488351231,
SNULL, 140734187724800, 140737488351231,
@@ -34995,7 +34996,7 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
MT_BUG_ON(mt, !vals->seen_entry3);
MT_BUG_ON(mt, !vals->seen_both);
}
-static noinline void check_rcu_simulated(struct maple_tree *mt)
+static noinline void __init check_rcu_simulated(struct maple_tree *mt)
{
unsigned long i, nr_entries = 1000;
unsigned long target = 4320;
@@ -35156,7 +35157,7 @@ static noinline void check_rcu_simulated(struct maple_tree *mt)
rcu_unregister_thread();
}

-static noinline void check_rcu_threaded(struct maple_tree *mt)
+static noinline void __init check_rcu_threaded(struct maple_tree *mt)
{
unsigned long i, nr_entries = 1000;
struct rcu_test_struct vals;
@@ -35369,7 +35370,7 @@ static void check_dfs_preorder(struct maple_tree *mt)
/* End of depth first search tests */

/* Preallocation testing */
-static noinline void check_prealloc(struct maple_tree *mt)
+static noinline void __init check_prealloc(struct maple_tree *mt)
{
unsigned long i, max = 100;
unsigned long allocated;
@@ -35497,7 +35498,7 @@ static noinline void check_prealloc(struct maple_tree *mt)
/* End of preallocation testing */

/* Spanning writes, writes that span nodes and layers of the tree */
-static noinline void check_spanning_write(struct maple_tree *mt)
+static noinline void __init check_spanning_write(struct maple_tree *mt)
{
unsigned long i, max = 5000;
MA_STATE(mas, mt, 1200, 2380);
@@ -35665,7 +35666,7 @@ static noinline void check_spanning_write(struct maple_tree *mt)
/* End of spanning write testing */

/* Writes to a NULL area that are adjacent to other NULLs */
-static noinline void check_null_expand(struct maple_tree *mt)
+static noinline void __init check_null_expand(struct maple_tree *mt)
{
unsigned long i, max = 100;
unsigned char data_end;
@@ -35726,7 +35727,7 @@ static noinline void check_null_expand(struct maple_tree *mt)
/* End of NULL area expansions */

/* Checking for no memory is best done outside the kernel */
-static noinline void check_nomem(struct maple_tree *mt)
+static noinline void __init check_nomem(struct maple_tree *mt)
{
MA_STATE(ms, mt, 1, 1);

@@ -35761,7 +35762,7 @@ static noinline void check_nomem(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_locky(struct maple_tree *mt)
+static noinline void __init check_locky(struct maple_tree *mt)
{
MA_STATE(ms, mt, 2, 2);
MA_STATE(reader, mt, 2, 2);
--
2.39.2

2023-05-05 17:54:27

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 18/36] mm: Update vma_iter_store() to use MAS_WARN_ON()

MAS_WARN_ON() will provide more information on the maple state and can
be more useful for debugging. Use this version of WARN_ON() in the
debugging code when storing to the tree.

Update the printk to a pr_warn(), but this will only be printed when
maple tree debug is enabled anyway.

Combining the print statements into a single call keeps the output
together on a busy terminal.
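
As a hypothetical illustration (not part of this patch), any maple state
check can follow the same pattern; the condition is returned like
WARN_ON(), and the maple state is dumped when maple tree debug is
enabled:

        if (MAS_WARN_ON(&mas, mas.index > mas.last))
                pr_warn("bad range %lx-%lx\n", mas.index, mas.last);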

Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
mm/internal.h | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8d1a8bd001247..6112108322461 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1047,18 +1047,23 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
{

#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
- if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.index > vma->vm_start)) {
- printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
- printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
- printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- vma_iter_dump_tree(vmi);
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ vmi->mas.index > vma->vm_start)) {
+ pr_warn("%lx > %lx\n"
+ "store of vma %lx-%lx\n"
+ "into slot %lx-%lx\n",
+ vmi->mas.index, vma->vm_start,
+ vma->vm_start, vma->vm_end,
+ vmi->mas.index, vmi->mas.last);
}
- if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.last < vma->vm_start)) {
- printk("%lu < %lu\n", vmi->mas.last, vma->vm_start);
- printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
- printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree, mt_dump_hex);
- vma_iter_dump_tree(vmi);
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ vmi->mas.last < vma->vm_start)) {
+ pr_warn("%lx < %lx\n"
+ "store of vma %lx-%lx\n"
+ "into slot %lx-%lx\n",
+ vmi->mas.last, vma->vm_start,
+ vma->vm_start, vma->vm_end,
+ vmi->mas.index, vmi->mas.last);
}
#endif

--
2.39.2

2023-05-05 17:54:28

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 01/36] maple_tree: Fix static analyser cppcheck issue

A static analyser (cppcheck) run over the maple tree code noticed that
the split variable is used to index into an array before the variable
itself is checked. Fix this by reordering the condition so that the
variable is checked first.
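
The pattern being fixed, as a generic sketch (not the actual maple tree
code):

        /* Before: arr[i] may be read while i is out of bounds. */
        while (arr[i] < limit && i < len)
                i++;

        /* After: bound-check i before using it as an index. */
        while (i < len && arr[i] < limit)
                i++;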

Reported-by: David Binderman <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
---
lib/maple_tree.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 110a36479dced..9cf4fca42310c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1943,8 +1943,9 @@ static inline int mab_calc_split(struct ma_state *mas,
* causes one node to be deficient.
* NOTE: mt_min_slots is 1 based, b_end and split are zero.
*/
- while (((bn->pivot[split] - min) < slot_count - 1) &&
- (split < slot_count - 1) && (b_end - split > slot_min))
+ while ((split < slot_count - 1) &&
+ ((bn->pivot[split] - min) < slot_count - 1) &&
+ (b_end - split > slot_min))
split++;
}

--
2.39.2

2023-05-05 17:54:43

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 24/36] maple_tree: Try harder to keep active node with mas_prev()

Keep a reference to the node when possible with mas_prev() to avoid
re-walking the tree. While keeping the node reference, keep index/last
accurate to the range being referenced. This means the limit may fall
within the range, but the range may also extend beyond the limit.

Also fix the single entry tree to respect the range (of 0), or to set
the node to MAS_NONE when shifting beyond 0.
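
A usage sketch (hypothetical tree and range) of the intended behaviour;
the state stays active between calls, so the tree is not re-walked:

        MA_STATE(mas, &tree, 100, 100);
        void *entry;

        rcu_read_lock();
        /* Visit every entry below index 100, down to 10. */
        while ((entry = mas_prev(&mas, 10)) != NULL)
                pr_info("%lu-%lu -> %p\n", mas.index, mas.last, entry);
        rcu_read_unlock();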

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 125 +++++++++++++++++++++++++++++++----------------
1 file changed, 83 insertions(+), 42 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 09142af082148..425ad922bb2d6 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4827,7 +4827,7 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
unsigned long index)
{
unsigned long pivot, min;
- unsigned char offset;
+ unsigned char offset, count;
struct maple_node *mn;
enum maple_type mt;
unsigned long *pivots;
@@ -4841,29 +4841,42 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
mn = mas_mn(mas);
mt = mte_node_type(mas->node);
offset = mas->offset - 1;
- if (offset >= mt_slots[mt])
- offset = mt_slots[mt] - 1;
-
slots = ma_slots(mn, mt);
pivots = ma_pivots(mn, mt);
+ count = ma_data_end(mn, mt, pivots, mas->max);
if (unlikely(ma_dead_node(mn))) {
mas_rewalk(mas, index);
goto retry;
}

- if (offset == mt_pivots[mt])
+ offset = mas->offset - 1;
+ if (offset >= mt_slots[mt])
+ offset = mt_slots[mt] - 1;
+
+ if (offset >= count) {
pivot = mas->max;
- else
+ offset = count;
+ } else {
pivot = pivots[offset];
+ }

if (unlikely(ma_dead_node(mn))) {
mas_rewalk(mas, index);
goto retry;
}

- while (offset && ((!mas_slot(mas, slots, offset) && pivot >= limit) ||
- !pivot))
+ while (offset && !mas_slot(mas, slots, offset)) {
pivot = pivots[--offset];
+ if (pivot >= limit)
+ break;
+ }
+
+ /*
+ * If the slot was null but we've shifted outside the limits, then set
+ * the range to the last NULL.
+ */
+ if (unlikely((pivot < limit) && (offset < mas->offset)))
+ pivot = pivots[++offset];

min = mas_safe_min(mas, pivots, offset);
entry = mas_slot(mas, slots, offset);
@@ -4872,32 +4885,33 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
goto retry;
}

- if (likely(entry)) {
- mas->offset = offset;
- mas->last = pivot;
- mas->index = min;
- }
+ mas->offset = offset;
+ mas->last = pivot;
+ mas->index = min;
return entry;
}

static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
{
void *entry;
+ struct maple_enode *prev_enode;
+ unsigned char prev_offset;

- if (mas->index < min) {
- mas->index = mas->last = min;
- mas->node = MAS_NONE;
+ if (mas->index < min)
return NULL;
- }
+
retry:
+ prev_enode = mas->node;
+ prev_offset = mas->offset;
while (likely(!mas_is_none(mas))) {
entry = mas_prev_nentry(mas, min, mas->index);
- if (unlikely(mas->last < min))
- goto not_found;

if (likely(entry))
return entry;

+ if (unlikely(mas->index <= min))
+ return NULL;
+
if (unlikely(mas_prev_node(mas, min))) {
mas_rewalk(mas, mas->index);
goto retry;
@@ -4906,9 +4920,8 @@ static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
mas->offset++;
}

- mas->offset--;
-not_found:
- mas->index = mas->last = min;
+ mas->node = prev_enode;
+ mas->offset = prev_offset;
return NULL;
}

@@ -5957,15 +5970,8 @@ EXPORT_SYMBOL_GPL(mt_next);
*/
void *mas_prev(struct ma_state *mas, unsigned long min)
{
- if (!mas->index) {
- /* Nothing comes before 0 */
- mas->last = 0;
- mas->node = MAS_NONE;
- return NULL;
- }
-
- if (unlikely(mas_is_ptr(mas)))
- return NULL;
+ if (mas->index <= min)
+ goto none;

if (mas_is_none(mas) || mas_is_paused(mas))
mas->node = MAS_START;
@@ -5973,19 +5979,30 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
if (mas_is_start(mas)) {
mas_walk(mas);
if (!mas->index)
- return NULL;
+ goto none;
}

- if (mas_is_ptr(mas)) {
- if (!mas->index) {
- mas->last = 0;
- return NULL;
- }
-
+ if (unlikely(mas_is_ptr(mas))) {
+ if (!mas->index)
+ goto none;
mas->index = mas->last = 0;
- return mas_root_locked(mas);
+ return mas_root(mas);
+ }
+
+ if (mas_is_none(mas)) {
+ if (mas->index) {
+ /* Walked to out-of-range pointer? */
+ mas->index = mas->last = 0;
+ mas->node = MAS_ROOT;
+ return mas_root(mas);
+ }
+ return NULL;
}
return mas_prev_entry(mas, min);
+
+none:
+ mas->node = MAS_NONE;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_prev);

@@ -6111,8 +6128,16 @@ EXPORT_SYMBOL_GPL(mas_find);
*/
void *mas_find_rev(struct ma_state *mas, unsigned long min)
{
+ if (unlikely(mas_is_none(mas))) {
+ if (mas->index <= min)
+ goto none;
+
+ mas->last = mas->index;
+ mas->node = MAS_START;
+ }
+
if (unlikely(mas_is_paused(mas))) {
- if (unlikely(mas->last == ULONG_MAX)) {
+ if (unlikely(mas->index <= min)) {
mas->node = MAS_NONE;
return NULL;
}
@@ -6132,14 +6157,30 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
return entry;
}

- if (unlikely(!mas_searchable(mas)))
- return NULL;
+ if (unlikely(!mas_searchable(mas))) {
+ if (mas_is_ptr(mas))
+ goto none;
+
+ if (mas_is_none(mas)) {
+ /*
+ * Walked to the location, and there was nothing so the
+ * previous location is 0.
+ */
+ mas->last = mas->index = 0;
+ mas->node = MAS_ROOT;
+ return mas_root(mas);
+ }
+ }

if (mas->index < min)
return NULL;

/* Retries on dead nodes handled by mas_prev_entry */
return mas_prev_entry(mas, min);
+
+none:
+ mas->node = MAS_NONE;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_find_rev);

--
2.39.2

2023-05-05 17:54:44

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 02/36] maple_tree: Clean up mas_parent_enum() and rename to mas_parent_type()

From: "Liam R. Howlett" <[email protected]>

mas_parent_enum() is a simple wrapper for mte_parent_enum() which is
only called from that wrapper. Remove the wrapper and inline
mte_parent_enum() into mas_parent_enum().

At the same time, clean up the bit masking of the root pointer since it
cannot be set by the time the bit masking occurs. Change the check on
the root bit to a WARN_ON(), and fix the verification code to not
trigger the WARN_ON() before checking if the node is root.

Align the name to mas_parent_type() since mas_node_type() exists
already.

Reported-by: Wei Yang <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
---
lib/maple_tree.c | 50 +++++++++++++++++++++---------------------------
1 file changed, 22 insertions(+), 28 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 9cf4fca42310c..555de3a8343e1 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -425,28 +425,26 @@ static inline unsigned long mte_parent_slot_mask(unsigned long parent)
}

/*
- * mas_parent_enum() - Return the maple_type of the parent from the stored
+ * mas_parent_type() - Return the maple_type of the parent from the stored
* parent type.
* @mas: The maple state
- * @node: The maple_enode to extract the parent's enum
+ * @enode: The maple_enode to extract the parent's enum
* Return: The node->parent maple_type
*/
static inline
-enum maple_type mte_parent_enum(struct maple_enode *p_enode,
- struct maple_tree *mt)
+enum maple_type mas_parent_type(struct ma_state *mas, struct maple_enode *enode)
{
unsigned long p_type;

- p_type = (unsigned long)p_enode;
- if (p_type & MAPLE_PARENT_ROOT)
- return 0; /* Validated in the caller. */
+ p_type = (unsigned long)mte_to_node(enode)->parent;
+ if (WARN_ON(p_type & MAPLE_PARENT_ROOT))
+ return 0;

p_type &= MAPLE_NODE_MASK;
- p_type = p_type & ~(MAPLE_PARENT_ROOT | mte_parent_slot_mask(p_type));
-
+ p_type &= ~mte_parent_slot_mask(p_type);
switch (p_type) {
case MAPLE_PARENT_RANGE64: /* or MAPLE_PARENT_ARANGE64 */
- if (mt_is_alloc(mt))
+ if (mt_is_alloc(mas->tree))
return maple_arange_64;
return maple_range_64;
}
@@ -454,12 +452,6 @@ enum maple_type mte_parent_enum(struct maple_enode *p_enode,
return 0;
}

-static inline
-enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
-{
- return mte_parent_enum(ma_enode_ptr(mte_to_node(enode)->parent), mas->tree);
-}
-
/*
* mte_set_parent() - Set the parent node and encode the slot
* @enode: The encoded maple node.
@@ -1123,7 +1115,7 @@ static int mas_ascend(struct ma_state *mas)
p_node = mte_parent(mas->node);
if (unlikely(a_node == p_node))
return 1;
- a_type = mas_parent_enum(mas, mas->node);
+ a_type = mas_parent_type(mas, mas->node);
offset = mte_parent_slot(mas->node);
a_enode = mt_mk_node(p_node, a_type);

@@ -1144,7 +1136,7 @@ static int mas_ascend(struct ma_state *mas)
max = ULONG_MAX;
do {
p_enode = a_enode;
- a_type = mas_parent_enum(mas, p_enode);
+ a_type = mas_parent_type(mas, p_enode);
a_node = mte_parent(p_enode);
a_slot = mte_parent_slot(p_enode);
a_enode = mt_mk_node(a_node, a_type);
@@ -1659,7 +1651,7 @@ static inline void mas_parent_gap(struct ma_state *mas, unsigned char offset,
enum maple_type pmt;

pnode = mte_parent(mas->node);
- pmt = mas_parent_enum(mas, mas->node);
+ pmt = mas_parent_type(mas, mas->node);
penode = mt_mk_node(pnode, pmt);
pgaps = ma_gaps(pnode, pmt);

@@ -1691,7 +1683,7 @@ static inline void mas_parent_gap(struct ma_state *mas, unsigned char offset,

/* Go to the parent node. */
pnode = mte_parent(penode);
- pmt = mas_parent_enum(mas, penode);
+ pmt = mas_parent_type(mas, penode);
pgaps = ma_gaps(pnode, pmt);
offset = mte_parent_slot(penode);
penode = mt_mk_node(pnode, pmt);
@@ -1718,7 +1710,7 @@ static inline void mas_update_gap(struct ma_state *mas)

pslot = mte_parent_slot(mas->node);
p_gap = ma_gaps(mte_parent(mas->node),
- mas_parent_enum(mas, mas->node))[pslot];
+ mas_parent_type(mas, mas->node))[pslot];

if (p_gap != max_gap)
mas_parent_gap(mas, pslot, max_gap);
@@ -1767,7 +1759,7 @@ static inline void mas_replace(struct ma_state *mas, bool advanced)
} else {
offset = mte_parent_slot(mas->node);
slots = ma_slots(mte_parent(mas->node),
- mas_parent_enum(mas, mas->node));
+ mas_parent_type(mas, mas->node));
old_enode = mas_slot_locked(mas, slots, offset);
}

@@ -3251,7 +3243,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end
l_mas.max = l_pivs[split];
mas->min = l_mas.max + 1;
eparent = mt_mk_node(mte_parent(l_mas.node),
- mas_parent_enum(&l_mas, l_mas.node));
+ mas_parent_type(&l_mas, l_mas.node));
tmp += end;
if (!in_rcu) {
unsigned char max_p = mt_pivots[mt];
@@ -3294,7 +3286,7 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end

/* replace parent. */
offset = mte_parent_slot(mas->node);
- mt = mas_parent_enum(&l_mas, l_mas.node);
+ mt = mas_parent_type(&l_mas, l_mas.node);
parent = mas_pop_node(mas);
slots = ma_slots(parent, mt);
pivs = ma_pivots(parent, mt);
@@ -6995,27 +6987,29 @@ static void mas_validate_gaps(struct ma_state *mas)
p_slot = mte_parent_slot(mas->node);
p_mn = mte_parent(mte);
MT_BUG_ON(mas->tree, max_gap > mas->max);
- if (ma_gaps(p_mn, mas_parent_enum(mas, mte))[p_slot] != max_gap) {
+ if (ma_gaps(p_mn, mas_parent_type(mas, mte))[p_slot] != max_gap) {
pr_err("gap %p[%u] != %lu\n", p_mn, p_slot, max_gap);
mt_dump(mas->tree);
}

MT_BUG_ON(mas->tree,
- ma_gaps(p_mn, mas_parent_enum(mas, mte))[p_slot] != max_gap);
+ ma_gaps(p_mn, mas_parent_type(mas, mte))[p_slot] != max_gap);
}

static void mas_validate_parent_slot(struct ma_state *mas)
{
struct maple_node *parent;
struct maple_enode *node;
- enum maple_type p_type = mas_parent_enum(mas, mas->node);
- unsigned char p_slot = mte_parent_slot(mas->node);
+ enum maple_type p_type;
+ unsigned char p_slot;
void __rcu **slots;
int i;

if (mte_is_root(mas->node))
return;

+ p_slot = mte_parent_slot(mas->node);
+ p_type = mas_parent_type(mas, mas->node);
parent = mte_parent(mas->node);
slots = ma_slots(parent, p_type);
MT_BUG_ON(mas->tree, mas_mn(mas) == parent);
--
2.39.2

2023-05-05 17:54:54

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 31/36] maple_tree: Add mas_prev_range() and mas_find_range_rev() interfaces

Some users of the maple tree may want to move to the previous range
regardless of the value stored there. Add this interface as well as the
'find' variant to support walking to the first value, then iterating
over the previous ranges.
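
A usage sketch (hypothetical caller and indices): walk to the last value
with mas_find_range_rev(), then step to the previous range, which may be
empty:

        MA_STATE(mas, &tree, 1000, 1000);
        void *entry;

        rcu_read_lock();
        /* First non-NULL entry at or below index 1000. */
        entry = mas_find_range_rev(&mas, 0);

        /* The previous range; entry is NULL if that range is empty. */
        entry = mas_prev_range(&mas, 0);
        pr_info("range %lu-%lu -> %p\n", mas.index, mas.last, entry);
        rcu_read_unlock();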

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 1 +
lib/maple_tree.c | 160 +++++++++++++++++++++++++++----------
2 files changed, 121 insertions(+), 40 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index a4cd8f891a090..542b09118a09f 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -467,6 +467,7 @@ void mas_destroy(struct ma_state *mas);
int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);

void *mas_prev(struct ma_state *mas, unsigned long min);
+void *mas_prev_range(struct ma_state *mas, unsigned long min);
void *mas_next(struct ma_state *mas, unsigned long max);
void *mas_next_range(struct ma_state *mas, unsigned long max);

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e050fd1f7cce8..f060c71965c0d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5924,18 +5924,8 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max)
}
EXPORT_SYMBOL_GPL(mt_next);

-/**
- * mas_prev() - Get the previous entry
- * @mas: The maple state
- * @min: The minimum value to check.
- *
- * Must hold rcu_read_lock or the write lock.
- * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
- * searchable nodes.
- *
- * Return: the previous value or %NULL.
- */
-void *mas_prev(struct ma_state *mas, unsigned long min)
+static inline bool mas_prev_setup(struct ma_state *mas, unsigned long min,
+ void **entry)
{
if (mas->index <= min)
goto none;
@@ -5953,7 +5943,8 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
if (!mas->index)
goto none;
mas->index = mas->last = 0;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}

if (mas_is_none(mas)) {
@@ -5961,18 +5952,64 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
/* Walked to out-of-range pointer? */
mas->index = mas->last = 0;
mas->node = MAS_ROOT;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}
- return NULL;
+ return true;
}
- return mas_prev_slot(mas, min, false);
+
+ return false;

none:
mas->node = MAS_NONE;
- return NULL;
+ return true;
+}
+
+/**
+ * mas_prev() - Get the previous entry
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * searchable nodes.
+ *
+ * Return: the previous value or %NULL.
+ */
+void *mas_prev(struct ma_state *mas, unsigned long min)
+{
+ void *entry = NULL;
+
+ if (mas_prev_setup(mas, min, &entry))
+ return entry;
+
+ return mas_prev_slot(mas, min, false);
}
EXPORT_SYMBOL_GPL(mas_prev);

+/**
+ * mas_prev_range() - Advance to the previous range
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Sets @mas->index and @mas->last to the range.
+ * Must hold rcu_read_lock or the write lock.
+ * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * searchable nodes.
+ *
+ * Return: the previous value or %NULL.
+ */
+void *mas_prev_range(struct ma_state *mas, unsigned long min)
+{
+ void *entry = NULL;
+
+ if (mas_prev_setup(mas, min, &entry))
+ return entry;
+
+ return mas_prev_slot(mas, min, true);
+}
+EXPORT_SYMBOL_GPL(mas_prev_range);
+
/**
* mt_prev() - get the previous value in the maple tree
* @mt: The maple tree
@@ -6116,20 +6153,15 @@ void *mas_find_range(struct ma_state *mas, unsigned long max)
EXPORT_SYMBOL_GPL(mas_find_range);

/**
- * mas_find_rev: On the first call, find the first non-null entry at or below
- * mas->index down to %min. Otherwise find the first non-null entry below
- * mas->index down to %min.
- * @mas: The maple state
- * @min: The minimum value to check.
+ * mas_find_rev_setup() - Internal function to set up mas_find_*_rev()
*
- * Must hold rcu_read_lock or the write lock.
- * If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
- *
- * Return: The entry or %NULL.
+ * Returns: True if entry is the answer, false otherwise.
*/
-void *mas_find_rev(struct ma_state *mas, unsigned long min)
+static inline bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
+ void **entry)
{
+ *entry = NULL;
+
if (unlikely(mas_is_none(mas))) {
if (mas->index <= min)
goto none;
@@ -6141,7 +6173,7 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
if (unlikely(mas_is_paused(mas))) {
if (unlikely(mas->index <= min)) {
mas->node = MAS_NONE;
- return NULL;
+ return true;
}
mas->node = MAS_START;
mas->last = --mas->index;
@@ -6149,14 +6181,12 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)

if (unlikely(mas_is_start(mas))) {
/* First run or continue */
- void *entry;
-
if (mas->index < min)
- return NULL;
+ return true;

- entry = mas_walk(mas);
- if (entry)
- return entry;
+ *entry = mas_walk(mas);
+ if (*entry)
+ return true;
}

if (unlikely(!mas_searchable(mas))) {
@@ -6170,22 +6200,72 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
*/
mas->last = mas->index = 0;
mas->node = MAS_ROOT;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}
}

if (mas->index < min)
- return NULL;
+ return true;

- /* Retries on dead nodes handled by mas_prev_slot */
- return mas_prev_slot(mas, min, false);
+ return false;

none:
mas->node = MAS_NONE;
- return NULL;
+ return true;
+}
+
+/**
+ * mas_find_rev: On the first call, find the first non-null entry at or below
+ * mas->index down to %min. Otherwise find the first non-null entry below
+ * mas->index down to %min.
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find_rev(struct ma_state *mas, unsigned long min)
+{
+ void *entry;
+
+ if (mas_find_rev_setup(mas, min, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_prev_slot */
+ return mas_prev_slot(mas, min, false);
+
}
EXPORT_SYMBOL_GPL(mas_find_rev);

+/**
+ * mas_find_range_rev: On the first call, find the first non-null entry at or
+ * below mas->index down to %min. Otherwise advance to the previous slot after
+ * mas->index down to %min.
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find_range_rev(struct ma_state *mas, unsigned long min)
+{
+ void *entry;
+
+ if (mas_find_rev_setup(mas, min, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_prev_slot */
+ return mas_prev_slot(mas, min, true);
+}
+EXPORT_SYMBOL_GPL(mas_find_range_rev);
+
/**
* mas_erase() - Find the range in which index resides and erase the entire
* range.
--
2.39.2

2023-05-05 17:56:02

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 07/36] maple_tree: Convert BUG_ON() to MT_BUG_ON()

From: "Liam R. Howlett" <[email protected]>

Use MT_BUG_ON() to get more information when running with
MAPLE_TREE_DEBUG enabled.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a28b021f740f1..c3ce2bc594123 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -194,7 +194,7 @@ static void mas_set_height(struct ma_state *mas)
unsigned int new_flags = mas->tree->ma_flags;

new_flags &= ~MT_FLAGS_HEIGHT_MASK;
- BUG_ON(mas->depth > MAPLE_HEIGHT_MAX);
+ MT_BUG_ON(mas->tree, mas->depth > MAPLE_HEIGHT_MAX);
new_flags |= mas->depth << MT_FLAGS_HEIGHT_OFFSET;
mas->tree->ma_flags = new_flags;
}
--
2.39.2

2023-05-05 18:35:24

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 26/36] maple_tree: Fix testing mas_empty_area()

mas_empty_area() will return -EINVAL if the search window is smaller
than the requested size. Fix the test case to check for this error code.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/test_maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index fbf7b6f227f34..48bc4ad984c86 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -2696,7 +2696,7 @@ static noinline void __init check_empty_area_window(struct maple_tree *mt)
MT_BUG_ON(mt, mas_empty_area(&mas, 5, 100, 6) != -EBUSY);

mas_reset(&mas);
- MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EBUSY);
+ MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EINVAL);

mas_reset(&mas);
mas_empty_area(&mas, 100, 165, 3);
--
2.39.2

2023-05-05 18:49:23

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 32/36] maple_tree: Clear up index and last setting in single entry tree

When there is a single entry tree (a range of 0-0 pointing to an entry),
ensure that index and last are set to either 0-0 or 1-oo, depending on
where the user walks. Also ensure the node is set correctly to either
MAS_ROOT or MAS_NONE.
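
As a sketch of the resulting behaviour on a tree holding only a 0-0
entry (locking elided, values hypothetical):

        MA_STATE(mas, mt, 5, 5);
        void *entry;

        /* Index 5 misses the 0-0 entry: NULL with range 1-oo, MAS_NONE. */
        entry = mas_walk(&mas);

        /* Index 0 hits it: the entry is returned with range 0-0, MAS_ROOT. */
        mas_set(&mas, 0);
        entry = mas_walk(&mas);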

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index f060c71965c0d..914399519cf54 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5022,24 +5022,25 @@ void *mas_walk(struct ma_state *mas)
{
void *entry;

+ if (mas_is_none(mas) || mas_is_paused(mas))
+ mas->node = MAS_START;
retry:
entry = mas_state_walk(mas);
- if (mas_is_start(mas))
+ if (mas_is_start(mas)) {
goto retry;
-
- if (mas_is_ptr(mas)) {
+ } else if (mas_is_none(mas)) {
+ mas->index = 0;
+ mas->last = ULONG_MAX;
+ } else if (mas_is_ptr(mas)) {
if (!mas->index) {
mas->last = 0;
- } else {
- mas->index = 1;
- mas->last = ULONG_MAX;
+ return entry;
}
- return entry;
- }

- if (mas_is_none(mas)) {
- mas->index = 0;
+ mas->index = 1;
mas->last = ULONG_MAX;
+ mas->node = MAS_NONE;
+ return NULL;
}

return entry;
--
2.39.2

2023-05-05 19:02:40

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 21/36] maple_tree: mas_start() reset depth on dead node

When a dead node is detected, the depth has already been set to 1, so
reset it to 0.

Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index b3e5ae43ff8ff..e233f41ed4da2 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1396,9 +1396,9 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)

mas->min = 0;
mas->max = ULONG_MAX;
- mas->depth = 0;

retry:
+ mas->depth = 0;
root = mas_root(mas);
/* Tree with nodes */
if (likely(xa_is_node(root))) {
--
2.39.2

2023-05-05 19:12:26

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 27/36] maple_tree: Introduce mas_next_slot() interface

Sometimes, during a tree walk, the user needs the next slot regardless
of whether it is empty. Add an interface to get the next slot.

Since consecutive NULLs are not allowed in the tree, the mas_next()
function can only ever need to advance by two slots at most. So use the
new mas_next_slot() interface to align both implementations. Use this
method for mas_find() as well.
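
Internal to lib/maple_tree.c, the wrappers then reduce to the following
shape (sketch only; the empty flag selects whether NULL ranges are
reported):

        /* mas_next()-style iteration: skip empty slots. */
        entry = mas_next_slot(&mas, max, false);

        /*
         * Range-style iteration (added later in this series): report empty
         * ranges too, with mas.index/mas.last describing the gap.
         */
        entry = mas_next_slot(&mas, max, true);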

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 229 +++++++++++++++++++++--------------------------
1 file changed, 104 insertions(+), 125 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 580310741d892..2865a987bc559 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4606,11 +4606,10 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
unsigned long max)
{
- unsigned long min, pivot;
+ unsigned long min;
unsigned long *pivots;
struct maple_enode *enode;
int level = 0;
- unsigned char offset;
unsigned char node_end;
enum maple_type mt;
void __rcu **slots;
@@ -4618,19 +4617,16 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
if (mas->max >= max)
goto no_entry;

+ min = mas->max + 1;
level = 0;
do {
if (ma_is_root(node))
goto no_entry;

- min = mas->max + 1;
- if (min > max)
- goto no_entry;
-
+ /* Walk up. */
if (unlikely(mas_ascend(mas)))
return 1;

- offset = mas->offset;
level++;
node = mas_mn(mas);
mt = mte_node_type(mas->node);
@@ -4639,36 +4635,37 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
if (unlikely(ma_dead_node(node)))
return 1;

- } while (unlikely(offset == node_end));
+ } while (unlikely(mas->offset == node_end));

slots = ma_slots(node, mt);
- pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
- while (unlikely(level > 1)) {
- /* Descend, if necessary */
- enode = mas_slot(mas, slots, offset);
- if (unlikely(ma_dead_node(node)))
- return 1;
+ mas->offset++;
+ enode = mas_slot(mas, slots, mas->offset);
+ if (unlikely(ma_dead_node(node)))
+ return 1;

- mas->node = enode;
+ if (level > 1)
+ mas->offset = 0;
+
+ while (unlikely(level > 1)) {
level--;
+ mas->node = enode;
node = mas_mn(mas);
mt = mte_node_type(mas->node);
slots = ma_slots(node, mt);
- pivots = ma_pivots(node, mt);
+ enode = mas_slot(mas, slots, 0);
if (unlikely(ma_dead_node(node)))
return 1;
-
- offset = 0;
- pivot = pivots[0];
}

- enode = mas_slot(mas, slots, offset);
+ if (!mas->offset)
+ pivots = ma_pivots(node, mt);
+
+ mas->max = mas_safe_pivot(mas, pivots, mas->offset, mt);
if (unlikely(ma_dead_node(node)))
return 1;

mas->node = enode;
mas->min = min;
- mas->max = pivot;
return 0;

no_entry:
@@ -4679,83 +4676,106 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
return 0;
}

+static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
+{
+retry:
+ mas_set(mas, index);
+ mas_state_walk(mas);
+ if (mas_is_start(mas))
+ goto retry;
+}
+
+static inline bool mas_rewalk_if_dead(struct ma_state *mas,
+ struct maple_node *node, const unsigned long index)
+{
+ if (unlikely(ma_dead_node(node))) {
+ mas_rewalk(mas, index);
+ return true;
+ }
+ return false;
+}
+
/*
- * mas_next_nentry() - Get the next node entry
- * @mas: The maple state
- * @max: The maximum value to check
- * @*range_start: Pointer to store the start of the range.
+ * mas_next_slot() - Get the entry in the next slot
*
- * Sets @mas->offset to the offset of the next node entry, @mas->last to the
- * pivot of the entry.
+ * @mas: The maple state
+ * @max: The maximum starting range
+ * @empty: Can be empty
*
- * Return: The next entry, %NULL otherwise
+ * Return: The entry in the next slot which is possibly NULL
*/
-static inline void *mas_next_nentry(struct ma_state *mas,
- struct maple_node *node, unsigned long max, enum maple_type type)
+void *mas_next_slot(struct ma_state *mas, unsigned long max, bool empty)
{
- unsigned char count;
- unsigned long pivot;
- unsigned long *pivots;
void __rcu **slots;
+ unsigned long *pivots;
+ unsigned long pivot;
+ enum maple_type type;
+ struct maple_node *node;
+ unsigned char data_end;
+ unsigned long save_point = mas->last;
void *entry;

- if (mas->last == mas->max) {
- mas->index = mas->max;
- return NULL;
- }
-
- slots = ma_slots(node, type);
+retry:
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
pivots = ma_pivots(node, type);
- count = ma_data_end(node, type, pivots, mas->max);
- if (unlikely(ma_dead_node(node)))
- return NULL;
-
- mas->index = mas_safe_min(mas, pivots, mas->offset);
- if (unlikely(ma_dead_node(node)))
- return NULL;
-
- if (mas->index > max)
- return NULL;
-
- if (mas->offset > count)
- return NULL;
+ data_end = ma_data_end(node, type, pivots, mas->max);
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;

- while (mas->offset < count) {
- pivot = pivots[mas->offset];
- entry = mas_slot(mas, slots, mas->offset);
- if (ma_dead_node(node))
- return NULL;
+again:
+ if (mas->max >= max) {
+ if (likely(mas->offset < data_end))
+ pivot = pivots[mas->offset];
+ else
+ return NULL; /* must be mas->max */

- mas->last = pivot;
- if (entry)
- return entry;
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;

if (pivot >= max)
return NULL;
+ }

- if (pivot >= mas->max)
+ if (likely(mas->offset < data_end)) {
+ mas->index = pivots[mas->offset] + 1;
+ mas->offset++;
+ if (likely(mas->offset < data_end))
+ mas->last = pivots[mas->offset];
+ else
+ mas->last = mas->max;
+ } else {
+ if (mas_next_node(mas, node, max)) {
+ mas_rewalk(mas, save_point);
+ goto retry;
+ }
+
+ if (mas_is_none(mas))
return NULL;

- mas->index = pivot + 1;
- mas->offset++;
+ mas->offset = 0;
+ mas->index = mas->min;
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
+ pivots = ma_pivots(node, type);
+ mas->last = pivots[0];
}

- pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
- entry = mas_slot(mas, slots, mas->offset);
- if (ma_dead_node(node))
- return NULL;
+ slots = ma_slots(node, type);
+ entry = mt_slot(mas->tree, slots, mas->offset);
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;

- mas->last = pivot;
- return entry;
-}
+ if (entry)
+ return entry;

-static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
-{
-retry:
- mas_set(mas, index);
- mas_state_walk(mas);
- if (mas_is_start(mas))
- goto retry;
+ if (!empty) {
+ if (!mas->offset)
+ data_end = 2;
+ goto again;
+ }
+
+ return entry;
}

/*
@@ -4773,47 +4793,12 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
{
void *entry = NULL;
- struct maple_node *node;
- unsigned long last;
- enum maple_type mt;

if (mas->last >= limit)
return NULL;

- last = mas->last;
-retry:
- node = mas_mn(mas);
- mt = mte_node_type(mas->node);
- mas->offset++;
- if (unlikely(mas->offset >= mt_slots[mt])) {
- mas->offset = mt_slots[mt] - 1;
- goto next_node;
- }
-
- while (!mas_is_none(mas)) {
- entry = mas_next_nentry(mas, node, limit, mt);
- if (unlikely(ma_dead_node(node))) {
- mas_rewalk(mas, last);
- goto retry;
- }
-
- if (likely(entry))
- return entry;
-
- if (unlikely((mas->last >= limit)))
- return NULL;
-
-next_node:
- if (unlikely(mas_next_node(mas, node, limit))) {
- mas_rewalk(mas, last);
- goto retry;
- }
- mas->offset = 0;
- node = mas_mn(mas);
- mt = mte_node_type(mas->node);
- }
-
- return NULL;
+ entry = mas_next_slot(mas, limit, false);
+ return entry;
}

/*
@@ -4844,10 +4829,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
slots = ma_slots(mn, mt);
pivots = ma_pivots(mn, mt);
count = ma_data_end(mn, mt, pivots, mas->max);
- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

offset = mas->offset - 1;
if (offset >= mt_slots[mt])
@@ -4860,10 +4843,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
pivot = pivots[offset];
}

- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

while (offset && !mas_slot(mas, slots, offset)) {
pivot = pivots[--offset];
@@ -4880,10 +4861,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,

min = mas_safe_min(mas, pivots, offset);
entry = mas_slot(mas, slots, offset);
- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

mas->offset = offset;
mas->last = pivot;
@@ -6108,8 +6087,8 @@ void *mas_find(struct ma_state *mas, unsigned long max)
if (mas->index == max)
return NULL;

- /* Retries on dead nodes handled by mas_next_entry */
- return mas_next_entry(mas, max);
+ /* Retries on dead nodes handled by mas_next_slot */
+ return mas_next_slot(mas, max, false);

ptr_out_of_range:
mas->node = MAS_NONE;
--
2.39.2

2023-05-05 19:12:48

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 20/36] maple_tree: Remove unnecessary check from mas_destroy()

mas_destroy() currently checks if mas->node is MAS_START prior to
calling mas_start(), but this is unnecessary as mas_start() will do
nothing if the node is anything but MAS_START.

Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
---
lib/maple_tree.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 9f3784f4a5b7c..b3e5ae43ff8ff 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5816,9 +5816,7 @@ void mas_destroy(struct ma_state *mas)
if (mas->mas_flags & MA_STATE_REBALANCE) {
unsigned char end;

- if (mas_is_start(mas))
- mas_start(mas);
-
+ mas_start(mas);
mtree_range_walk(mas);
end = mas_data_end(mas) + 1;
if (end < mt_min_slot_count(mas->node) - 1)
--
2.39.2

2023-05-05 19:31:00

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 34/36] mm: Add vma_iter_{next,prev}_range() to vma iterator
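
A usage sketch (hypothetical caller holding an initialised struct
vma_iterator vmi): unlike vma_next()/vma_prev(), these helpers also stop
on the gaps between VMAs, returning NULL with the iterator range
describing the hole:

        struct vm_area_struct *vma;

        vma = vma_iter_next_range(&vmi);
        if (!vma)
                /* The iterator range now describes the gap, not a VMA. */
                pr_info("hole starting at %lx\n", vma_iter_addr(&vmi));

        vma = vma_iter_prev_range(&vmi);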

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/mm.h | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7b1d4e7393efd..1bd731a2972b0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -866,11 +866,24 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
return mas_find(&vmi->mas, ULONG_MAX);
}

+static inline
+struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
+{
+ return mas_next_range(&vmi->mas, ULONG_MAX);
+}
+
+
static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
{
return mas_prev(&vmi->mas, 0);
}

+static inline
+struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
+{
+ return mas_prev_range(&vmi->mas, 0);
+}
+
static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
{
return vmi->mas.index;
--
2.39.2

2023-05-05 19:42:33

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 23/36] maple_tree: Try harder to keep active node after mas_next()

Clean up the mas_next() call to try to keep a node reference when
possible. This will avoid re-walking the tree in most cases.

Also clean up the single entry tree handling to ensure index/last are
consistent with what one would expect (returning NULL with a limit of
1-oo).
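
A sketch of the single entry tree case described above (a tree holding
only a 0-0 entry, locking elided):

        MA_STATE(mas, mt, 0, 0);
        void *entry;

        entry = mas_next(&mas, ULONG_MAX);
        /* entry == NULL, mas.index == 1, mas.last == ULONG_MAX */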

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 89 +++++++++++++++++++++++++-----------------------
1 file changed, 47 insertions(+), 42 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index e233f41ed4da2..09142af082148 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4726,33 +4726,25 @@ static inline void *mas_next_nentry(struct ma_state *mas,
if (ma_dead_node(node))
return NULL;

+ mas->last = pivot;
if (entry)
- goto found;
+ return entry;

if (pivot >= max)
return NULL;

+ if (pivot >= mas->max)
+ return NULL;
+
mas->index = pivot + 1;
mas->offset++;
}

- if (mas->index > mas->max) {
- mas->index = mas->last;
- return NULL;
- }
-
- pivot = mas_safe_pivot(mas, pivots, mas->offset, type);
+ pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
entry = mas_slot(mas, slots, mas->offset);
if (ma_dead_node(node))
return NULL;

- if (!pivot)
- return NULL;
-
- if (!entry)
- return NULL;
-
-found:
mas->last = pivot;
return entry;
}
@@ -4781,21 +4773,15 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
{
void *entry = NULL;
- struct maple_enode *prev_node;
struct maple_node *node;
- unsigned char offset;
unsigned long last;
enum maple_type mt;

- if (mas->index > limit) {
- mas->index = mas->last = limit;
- mas_pause(mas);
+ if (mas->last >= limit)
return NULL;
- }
+
last = mas->last;
retry:
- offset = mas->offset;
- prev_node = mas->node;
node = mas_mn(mas);
mt = mte_node_type(mas->node);
mas->offset++;
@@ -4814,12 +4800,10 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
if (likely(entry))
return entry;

- if (unlikely((mas->index > limit)))
- break;
+ if (unlikely((mas->last >= limit)))
+ return NULL;

next_node:
- prev_node = mas->node;
- offset = mas->offset;
if (unlikely(mas_next_node(mas, node, limit))) {
mas_rewalk(mas, last);
goto retry;
@@ -4829,9 +4813,6 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
mt = mte_node_type(mas->node);
}

- mas->index = mas->last = limit;
- mas->offset = offset;
- mas->node = prev_node;
return NULL;
}

@@ -5919,6 +5900,8 @@ EXPORT_SYMBOL_GPL(mas_expected_entries);
*/
void *mas_next(struct ma_state *mas, unsigned long max)
{
+ bool was_none = mas_is_none(mas);
+
if (mas_is_none(mas) || mas_is_paused(mas))
mas->node = MAS_START;

@@ -5926,16 +5909,16 @@ void *mas_next(struct ma_state *mas, unsigned long max)
mas_walk(mas); /* Retries on dead nodes handled by mas_walk */

if (mas_is_ptr(mas)) {
- if (!mas->index) {
- mas->index = 1;
- mas->last = ULONG_MAX;
+ if (was_none && mas->index == 0) {
+ mas->index = mas->last = 0;
+ return mas_root(mas);
}
+ mas->index = 1;
+ mas->last = ULONG_MAX;
+ mas->node = MAS_NONE;
return NULL;
}

- if (mas->last == ULONG_MAX)
- return NULL;
-
/* Retries on dead nodes handled by mas_next_entry */
return mas_next_entry(mas, max);
}
@@ -6059,17 +6042,25 @@ EXPORT_SYMBOL_GPL(mas_pause);
*/
void *mas_find(struct ma_state *mas, unsigned long max)
{
+ if (unlikely(mas_is_none(mas))) {
+ if (unlikely(mas->last >= max))
+ return NULL;
+
+ mas->index = mas->last;
+ mas->node = MAS_START;
+ }
+
if (unlikely(mas_is_paused(mas))) {
- if (unlikely(mas->last == ULONG_MAX)) {
- mas->node = MAS_NONE;
+ if (unlikely(mas->last >= max))
return NULL;
- }
+
mas->node = MAS_START;
mas->index = ++mas->last;
}

- if (unlikely(mas_is_none(mas)))
- mas->node = MAS_START;
+
+ if (unlikely(mas_is_ptr(mas)))
+ goto ptr_out_of_range;

if (unlikely(mas_is_start(mas))) {
/* First run or continue */
@@ -6081,13 +6072,27 @@ void *mas_find(struct ma_state *mas, unsigned long max)
entry = mas_walk(mas);
if (entry)
return entry;
+
}

- if (unlikely(!mas_searchable(mas)))
+ if (unlikely(!mas_searchable(mas))) {
+ if (unlikely(mas_is_ptr(mas)))
+ goto ptr_out_of_range;
+
+ return NULL;
+ }
+
+ if (mas->index == max)
return NULL;

/* Retries on dead nodes handled by mas_next_entry */
return mas_next_entry(mas, max);
+
+ptr_out_of_range:
+ mas->node = MAS_NONE;
+ mas->index = 1;
+ mas->last = ULONG_MAX;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_find);

@@ -6518,7 +6523,7 @@ void *mt_find(struct maple_tree *mt, unsigned long *index, unsigned long max)
if (entry)
goto unlock;

- while (mas_searchable(&mas) && (mas.index < max)) {
+ while (mas_searchable(&mas) && (mas.last < max)) {
entry = mas_next_entry(&mas, max);
if (likely(entry && !xa_is_zero(entry)))
break;
--
2.39.2

2023-05-05 19:52:22

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 29/36] maple_tree: Relocate mas_rewalk() and mas_rewalk_if_dead()

Move these functions earlier in the file so that upcoming patches can
use them.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 2f43afeae94e4..b0f185a8154b4 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4497,6 +4497,25 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)

}

+static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
+{
+retry:
+ mas_set(mas, index);
+ mas_state_walk(mas);
+ if (mas_is_start(mas))
+ goto retry;
+}
+
+static inline bool mas_rewalk_if_dead(struct ma_state *mas,
+ struct maple_node *node, const unsigned long index)
+{
+ if (unlikely(ma_dead_node(node))) {
+ mas_rewalk(mas, index);
+ return true;
+ }
+ return false;
+}
+
/*
* mas_prev_node() - Find the prev non-null entry at the same level in the
* tree. The prev value will be mas->node[mas->offset] or MAS_NONE.
@@ -4676,25 +4695,6 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
return 0;
}

-static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
-{
-retry:
- mas_set(mas, index);
- mas_state_walk(mas);
- if (mas_is_start(mas))
- goto retry;
-}
-
-static inline bool mas_rewalk_if_dead(struct ma_state *mas,
- struct maple_node *node, const unsigned long index)
-{
- if (unlikely(ma_dead_node(node))) {
- mas_rewalk(mas, index);
- return true;
- }
- return false;
-}
-
/*
* mas_next_slot() - Get the entry in the next slot
*
--
2.39.2

2023-05-05 20:32:36

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 25/36] maple_tree: Revise limit checks in mas_empty_area{_rev}()

Since the maple tree is inclusive in range, ensure that a window of 1
(min = max) works for searching for a gap in either direction, and make
sure the size is at least 1 and no larger than the span of the window
(max - min + 1).

This commit also updates the testing. Unfortunately there isn't a way
to update the tests and the code separately without a test failure, so
both are changed together here.
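
For example, with the revised checks (a sketch with made-up addresses,
locking elided):

        MA_STATE(mas, mt, 0, 0);

        /* min == max is now valid: look for a 1-unit gap exactly at 0x1000. */
        if (!mas_empty_area(&mas, 0x1000, 0x1000, 1))
                pr_info("gap found at %lu\n", mas.index);

        /* A request larger than the window is rejected with -EINVAL. */
        mas_reset(&mas);
        if (mas_empty_area(&mas, 0x1000, 0x1001, 3) == -EINVAL)
                pr_info("window too small\n");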

Suggested-by: Peng Zhang <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 20 +++++++++++++-------
lib/test_maple_tree.c | 27 ++++++++++++++++++++-------
2 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 425ad922bb2d6..580310741d892 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5282,7 +5282,10 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
unsigned long *pivots;
enum maple_type mt;

- if (min >= max)
+ if (min > max)
+ return -EINVAL;
+
+ if (size == 0 || max - min < size - 1)
return -EINVAL;

if (mas_is_start(mas))
@@ -5337,7 +5340,10 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
{
struct maple_enode *last = mas->node;

- if (min >= max)
+ if (min > max)
+ return -EINVAL;
+
+ if (size == 0 || max - min < size - 1)
return -EINVAL;

if (mas_is_start(mas)) {
@@ -5373,7 +5379,7 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
return -EBUSY;

/* Trim the upper limit to the max. */
- if (max <= mas->last)
+ if (max < mas->last)
mas->last = max;

mas->index = mas->last - size + 1;
@@ -6409,7 +6415,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
{
int ret = 0;

- MA_STATE(mas, mt, min, max - size);
+ MA_STATE(mas, mt, min, min);
if (!mt_is_alloc(mt))
return -EINVAL;

@@ -6429,7 +6435,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
retry:
mas.offset = 0;
mas.index = min;
- mas.last = max - size;
+ mas.last = max - size + 1;
ret = mas_alloc(&mas, entry, size, startp);
if (mas_nomem(&mas, gfp))
goto retry;
@@ -6445,14 +6451,14 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
{
int ret = 0;

- MA_STATE(mas, mt, min, max - size);
+ MA_STATE(mas, mt, min, max - size + 1);
if (!mt_is_alloc(mt))
return -EINVAL;

if (WARN_ON_ONCE(mt_is_reserved(entry)))
return -EINVAL;

- if (min >= max)
+ if (min > max)
return -EINVAL;

if (max < size - 1)
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 19b130c9dddea..fbf7b6f227f34 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -123,7 +123,7 @@ static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt,
unsigned long result = expected + 1;
int ret;

- ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end - 1,
+ ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end,
GFP_KERNEL);
MT_BUG_ON(mt, ret != eret);
if (ret)
@@ -701,7 +701,7 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0, /* Return value success. */

0x0, /* Min */
- 0x565234AF1 << 12, /* Max */
+ 0x565234AF0 << 12, /* Max */
0x3000, /* Size */
0x565234AEE << 12, /* max - 3. */
0, /* Return value success. */
@@ -713,14 +713,14 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0, /* Return value success. */

0x0, /* Min */
- 0x7F36D510A << 12, /* Max */
+ 0x7F36D5109 << 12, /* Max */
0x4000, /* Size */
0x7F36D5106 << 12, /* First rev hole of size 0x4000 */
0, /* Return value success. */

/* Ascend test. */
0x0,
- 34148798629 << 12,
+ 34148798628 << 12,
19 << 12,
34148797418 << 12,
0x0,
@@ -732,6 +732,12 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0x0,
-EBUSY,

+ /* Single space test. */
+ 34148798725 << 12,
+ 34148798725 << 12,
+ 1 << 12,
+ 34148798725 << 12,
+ 0,
};

int i, range_count = ARRAY_SIZE(range);
@@ -780,9 +786,9 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
mas_unlock(&mas);
for (i = 0; i < req_range_count; i += 5) {
#if DEBUG_REV_RANGE
- pr_debug("\tReverse request between %lu-%lu size %lu, should get %lu\n",
- req_range[i] >> 12,
- (req_range[i + 1] >> 12) - 1,
+ pr_debug("\tReverse request %d between %lu-%lu size %lu, should get %lu\n",
+ i, req_range[i] >> 12,
+ (req_range[i + 1] >> 12),
req_range[i+2] >> 12,
req_range[i+3] >> 12);
#endif
@@ -901,6 +907,13 @@ static noinline void __init check_alloc_range(struct maple_tree *mt)
4503599618982063UL << 12, /* Size */
34359052178 << 12, /* Expected location */
-EBUSY, /* Return failure. */
+
+ /* Test a single entry */
+ 34148798648 << 12, /* Min */
+ 34148798648 << 12, /* Max */
+ 4096, /* Size of 1 */
+ 34148798648 << 12, /* Location is the same as min/max */
+ 0, /* Success */
};
int i, range_count = ARRAY_SIZE(range);
int req_range_count = ARRAY_SIZE(req_range);
--
2.39.2

2023-05-05 20:33:36

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH v2 33/36] maple_tree: Update testing code for mas_{next,prev,walk}

Now that the functions handle the limits differently, update the maple
tree testing to check the new behaviour.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/test_maple_tree.c | 638 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 633 insertions(+), 5 deletions(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 48bc4ad984c86..aa3c4fb9175ed 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -1321,6 +1321,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
mas_lock(&mas);
mas_set(&mas, 3);
ptr = mas_walk(&mas);
+ MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, ptr != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
@@ -1390,7 +1391,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
mas_store_gfp(&mas, ptr, GFP_KERNEL);
ptr = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, ptr != NULL);
- MT_BUG_ON(mt, (mas.index != 1) && (mas.last != ULONG_MAX));
+ MT_BUG_ON(mt, (mas.index != ULONG_MAX) && (mas.last != ULONG_MAX));

mas_set(&mas, 1);
ptr = mas_prev(&mas, 0);
@@ -1799,7 +1800,6 @@ static noinline void __init check_iteration(struct maple_tree *mt)
mas.index = 760;
mas.last = 765;
mas_store(&mas, val);
- mas_next(&mas, ULONG_MAX);
}
i++;
}
@@ -2010,7 +2010,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)

val = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, val != NULL);
- MT_BUG_ON(mt, mas.index != ULONG_MAX);
+ MT_BUG_ON(mt, mas.index != 0x7d6);
MT_BUG_ON(mt, mas.last != ULONG_MAX);

val = mas_prev(&mas, 0);
@@ -2034,7 +2034,8 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
val = mas_prev(&mas, 0);
MT_BUG_ON(mt, val != NULL);
MT_BUG_ON(mt, mas.index != 0);
- MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.last != 5);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);

mas.index = 0;
mas.last = 5;
@@ -2046,7 +2047,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
val = mas_prev(&mas, 0);
MT_BUG_ON(mt, val != NULL);
MT_BUG_ON(mt, mas.index != 0);
- MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.last != 9);
mas_unlock(&mas);

mtree_destroy(mt);
@@ -2749,6 +2750,629 @@ static noinline void __init check_empty_area_fill(struct maple_tree *mt)
mt_set_non_kernel(0);
}

+/*
+ * Check MAS_START, MAS_PAUSE, active (implied), and MAS_NONE transitions.
+ *
+ * The table below shows the single entry tree (0-0 pointer) and normal tree
+ * with nodes.
+ *
+ * Function ENTRY Start Result index & last
+ * ┬ ┬ ┬ ┬ ┬
+ * │ │ │ │ └─ the final range
+ * │ │ │ └─ The node value after execution
+ * │ │ └─ The node value before execution
+ * │ └─ If the entry exists or does not exists (DNE)
+ * └─ The function name
+ *
+ * Function ENTRY Start Result index & last
+ * mas_next()
+ * - after last
+ * Single entry tree at 0-0
+ * ------------------------
+ * DNE MAS_START MAS_NONE 1 - oo
+ * DNE MAS_PAUSE MAS_NONE 1 - oo
+ * DNE MAS_ROOT MAS_NONE 1 - oo
+ * when index = 0
+ * DNE MAS_NONE MAS_ROOT 0
+ * when index > 0
+ * DNE MAS_NONE MAS_NONE 1 - oo
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to last range
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to last range
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active set to last range
+ *
+ * Function ENTRY Start Result index & last
+ * mas_prev()
+ * - before index
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ *
+ * if index == 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to min
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to min
+ * exists MAS_NONE active range
+ * DNE MAS_NONE MAS_NONE set to min
+ * any MAS_ROOT MAS_NONE 0
+ * exists active active range
+ * DNE active active last range
+ *
+ * Function ENTRY Start Result index & last
+ * mas_find()
+ * - at index or next
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * if index == 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to max
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to max
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active last range (max < last)
+ *
+ * Function ENTRY Start Result index & last
+ * mas_find_rev()
+ * - at index or before
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ * if index == 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to min
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to min
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active last range (min > index)
+ *
+ * Function ENTRY Start Result index & last
+ * mas_walk()
+ * - Look up index
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * DNE MAS_START MAS_ROOT 1 - oo
+ * DNE MAS_PAUSE MAS_ROOT 1 - oo
+ * DNE MAS_NONE MAS_ROOT 1 - oo
+ * DNE MAS_ROOT MAS_ROOT 1 - oo
+ * if index == 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ * exists MAS_ROOT MAS_ROOT 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active range of NULL
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active range of NULL
+ * exists MAS_NONE active range
+ * DNE MAS_NONE active range of NULL
+ * exists active active range
+ * DNE active active range of NULL
+ */
+
+#define mas_active(x) (((x).node != MAS_ROOT) && \
+ ((x).node != MAS_START) && \
+ ((x).node != MAS_PAUSE) && \
+ ((x).node != MAS_NONE))
+static noinline void __init check_state_handling(struct maple_tree *mt)
+{
+ MA_STATE(mas, mt, 0, 0);
+ void *entry, *ptr = (void *) 0x1234500;
+ void *ptr2 = &ptr;
+ void *ptr3 = &ptr2;
+
+ /* Check MAS_ROOT First */
+ mtree_store_range(mt, 0, 0, ptr, GFP_KERNEL);
+
+ mas_lock(&mas);
+ /* prev: Start -> none */
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* prev: Start -> root */
+ mas_set(&mas, 10);
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* prev: pause -> root */
+ mas_set(&mas, 10);
+ mas_pause(&mas);
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* next: start -> none */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* next: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* find: root -> none */
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find: none -> none */
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find_rev: none -> root */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* find_rev: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* find_rev: root -> none */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find_rev: none -> none */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* find_rev: start -> root */
+ mas_set(&mas, 10);
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* walk: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* walk: pause -> none*/
+ mas_set(&mas, 10);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* walk: none -> none */
+ mas.index = mas.last = 10;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* walk: none -> none */
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* walk: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* walk: pause -> root */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* walk: none -> root */
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* walk: root -> root */
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ /* walk: root -> none */
+ mas_set(&mas, 10);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 1);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);
+
+ /* walk: none -> root */
+ mas.index = mas.last = 0;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.node != MAS_ROOT);
+
+ mas_unlock(&mas);
+
+ /* Check when there is an actual node */
+ mtree_store_range(mt, 0, 0, NULL, GFP_KERNEL);
+ mtree_store_range(mt, 0x1000, 0x1500, ptr, GFP_KERNEL);
+ mtree_store_range(mt, 0x2000, 0x2500, ptr2, GFP_KERNEL);
+ mtree_store_range(mt, 0x3000, 0x3500, ptr3, GFP_KERNEL);
+
+ mas_lock(&mas);
+
+ /* next: start ->active */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next: pause ->active */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next: none ->active */
+ mas.index = mas.last = 0;
+ mas.offset = 0;
+ mas.node = MAS_NONE;
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next:active ->active */
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr2);
+ MT_BUG_ON(mt, mas.index != 0x2000);
+ MT_BUG_ON(mt, mas.last != 0x2500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next:active -> active out of range*/
+ entry = mas_next(&mas, 0x2999);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x2501);
+ MT_BUG_ON(mt, mas.last != 0x2fff);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* Continue after out of range*/
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr3);
+ MT_BUG_ON(mt, mas.index != 0x3000);
+ MT_BUG_ON(mt, mas.last != 0x3500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next:active -> active out of range*/
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x3501);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* next: none -> active, skip value at location */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ mas.node = MAS_NONE;
+ mas.offset = 0;
+ entry = mas_next(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr2);
+ MT_BUG_ON(mt, mas.index != 0x2000);
+ MT_BUG_ON(mt, mas.last != 0x2500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* prev:active ->active */
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* prev:active -> active out of range*/
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0x0FFF);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* prev: pause ->active */
+ mas_set(&mas, 0x3600);
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr3);
+ mas_pause(&mas);
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr2);
+ MT_BUG_ON(mt, mas.index != 0x2000);
+ MT_BUG_ON(mt, mas.last != 0x2500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* prev:active -> active out of range*/
+ entry = mas_prev(&mas, 0x1600);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1501);
+ MT_BUG_ON(mt, mas.last != 0x1FFF);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* prev: active ->active, continue*/
+ entry = mas_prev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find: start ->active */
+ mas_set(&mas, 0);
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find: pause ->active */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find: start ->active on value */
+ mas_set(&mas, 0x1200);
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find:active ->active */
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != ptr2);
+ MT_BUG_ON(mt, mas.index != 0x2000);
+ MT_BUG_ON(mt, mas.last != 0x2500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+
+ /* find:active -> active (NULL)*/
+ entry = mas_find(&mas, 0x2700);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x2501);
+ MT_BUG_ON(mt, mas.last != 0x2FFF);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find: none ->active */
+ entry = mas_find(&mas, 0x5000);
+ MT_BUG_ON(mt, entry != ptr3);
+ MT_BUG_ON(mt, mas.index != 0x3000);
+ MT_BUG_ON(mt, mas.last != 0x3500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find:active -> active (NULL) end*/
+ entry = mas_find(&mas, ULONG_MAX);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x3501);
+ MT_BUG_ON(mt, mas.last != ULONG_MAX);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find_rev: active (END) ->active */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr3);
+ MT_BUG_ON(mt, mas.index != 0x3000);
+ MT_BUG_ON(mt, mas.last != 0x3500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find_rev:active ->active */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr2);
+ MT_BUG_ON(mt, mas.index != 0x2000);
+ MT_BUG_ON(mt, mas.last != 0x2500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find_rev: pause ->active */
+ mas_pause(&mas);
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find_rev:active -> active */
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0);
+ MT_BUG_ON(mt, mas.last != 0x0FFF);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* find_rev: start ->active */
+ mas_set(&mas, 0x1200);
+ entry = mas_find_rev(&mas, 0);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk start ->active */
+ mas_set(&mas, 0x1200);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk start ->active */
+ mas_set(&mas, 0x1600);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1501);
+ MT_BUG_ON(mt, mas.last != 0x1fff);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk pause ->active */
+ mas_set(&mas, 0x1200);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk pause -> active */
+ mas_set(&mas, 0x1600);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1501);
+ MT_BUG_ON(mt, mas.last != 0x1fff);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk none -> active */
+ mas_set(&mas, 0x1200);
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk none -> active */
+ mas_set(&mas, 0x1600);
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1501);
+ MT_BUG_ON(mt, mas.last != 0x1fff);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk active -> active */
+ mas.index = 0x1200;
+ mas.last = 0x1200;
+ mas.offset = 0;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != ptr);
+ MT_BUG_ON(mt, mas.index != 0x1000);
+ MT_BUG_ON(mt, mas.last != 0x1500);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ /* mas_walk active -> active */
+ mas.index = 0x1600;
+ mas.last = 0x1600;
+ entry = mas_walk(&mas);
+ MT_BUG_ON(mt, entry != NULL);
+ MT_BUG_ON(mt, mas.index != 0x1501);
+ MT_BUG_ON(mt, mas.last != 0x1fff);
+ MT_BUG_ON(mt, !mas_active(mas));
+
+ mas_unlock(&mas);
+}
+
static DEFINE_MTREE(tree);
static int __init maple_tree_seed(void)
{
@@ -3010,6 +3634,10 @@ static int __init maple_tree_seed(void)
mtree_destroy(&tree);


+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_state_handling(&tree);
+ mtree_destroy(&tree);
+
#if defined(BENCH)
skip:
#endif
--
2.39.2

2023-05-06 02:58:24

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: [PATCH v2 18/36] mm: Update vma_iter_store() to use MAS_WARN_ON()

On (23/05/05 13:41), Liam R. Howlett wrote:
> MAS_WARN_ON() will provide more information on the maple state and can
> be more useful for debugging. Use this version of WARN_ON() in the
> debugging code when storing to the tree.
>
> Update the printk to a pr_warn(), but this will only be printed when
> maple tree debug is enabled anyways.
>
> Making all print statements into one will keep them together on a busy
> terminal.
>
> Cc: Sergey Senozhatsky <[email protected]>
> Signed-off-by: Liam R. Howlett <[email protected]>

Reviewed-by: Sergey Senozhatsky <[email protected]>

2023-05-06 03:19:18

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: [PATCH v2 18/36] mm: Update vma_iter_store() to use MAS_WARN_ON()

On (23/05/05 13:41), Liam R. Howlett wrote:
> #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
> - if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.index > vma->vm_start)) {
> - printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
> - printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
> - printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
> - vma_iter_dump_tree(vmi);
> + if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
> + vmi->mas.index > vma->vm_start)) {
> + pr_warn("%lx > %lx\n"
> + "store of vma %lx-%lx\n"
> + "into slot %lx-%lx\n",

A minor side note: we usually prefer to avoid breaking printk() format
strings because it makes grepping difficult. But in this particular case
it's fine, since all the format sub-strings end with a newline character.
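
For example (illustrative only, not taken from this patch), splitting a
single logical line across format strings:

        pr_warn("store of vma %lx-"
                "%lx\n", vma->vm_start, vma->vm_end);

means grepping the sources for the message seen in the log will not find
it, whereas the hunk above only breaks at '\n' boundaries, so each
printed line still maps to one complete format sub-string.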

2023-05-08 13:37:44

by Vernon Yang

[permalink] [raw]
Subject: Re: [PATCH v2 31/36] maple_tree: Add mas_prev_range() and mas_find_range_rev interface

On Fri, May 05, 2023 at 01:41:59PM -0400, Liam R. Howlett wrote:
> Some users of the maple tree may want to move to the previous range
> regardless of the value stored there. Add this interface as well as the
> 'find' variant to support walking to the first value, then iterating
> over the previous ranges.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> include/linux/maple_tree.h | 1 +
> lib/maple_tree.c | 160 +++++++++++++++++++++++++++----------
> 2 files changed, 121 insertions(+), 40 deletions(-)
>
> diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
> index a4cd8f891a090..542b09118a09f 100644
> --- a/include/linux/maple_tree.h
> +++ b/include/linux/maple_tree.h
> @@ -467,6 +467,7 @@ void mas_destroy(struct ma_state *mas);
> int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);
>
> void *mas_prev(struct ma_state *mas, unsigned long min);
> +void *mas_prev_range(struct ma_state *mas, unsigned long max);
> void *mas_next(struct ma_state *mas, unsigned long max);
> void *mas_next_range(struct ma_state *mas, unsigned long max);
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index e050fd1f7cce8..f060c71965c0d 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5924,18 +5924,8 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max)
> }
> EXPORT_SYMBOL_GPL(mt_next);
>
> -/**
> - * mas_prev() - Get the previous entry
> - * @mas: The maple state
> - * @min: The minimum value to check.
> - *
> - * Must hold rcu_read_lock or the write lock.
> - * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> - * searchable nodes.
> - *
> - * Return: the previous value or %NULL.
> - */
> -void *mas_prev(struct ma_state *mas, unsigned long min)
> +static inline bool mas_prev_setup(struct ma_state *mas, unsigned long min,
> + void **entry)
> {
> if (mas->index <= min)
> goto none;
> @@ -5953,7 +5943,8 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
> if (!mas->index)
> goto none;
> mas->index = mas->last = 0;
> - return mas_root(mas);
> + *entry = mas_root(mas);
> + return true;
> }
>
> if (mas_is_none(mas)) {
> @@ -5961,18 +5952,64 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
> /* Walked to out-of-range pointer? */
> mas->index = mas->last = 0;
> mas->node = MAS_ROOT;
> - return mas_root(mas);
> + *entry = mas_root(mas);
> + return true;
> }
> - return NULL;
> + return true;
> }
> - return mas_prev_slot(mas, min, false);
> +
> + return false;
>
> none:
> mas->node = MAS_NONE;
> - return NULL;
> + return true;
> +}
> +
> +/**
> + * mas_prev() - Get the previous entry
> + * @mas: The maple state
> + * @min: The minimum value to check.
> + *
> + * Must hold rcu_read_lock or the write lock.
> + * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> + * searchable nodes.
> + *
> + * Return: the previous value or %NULL.
> + */
> +void *mas_prev(struct ma_state *mas, unsigned long min)
> +{
> + void *entry = NULL;
> +
> + if (mas_prev_setup(mas, min, &entry))
> + return entry;
> +
> + return mas_prev_slot(mas, min, false);
> }
> EXPORT_SYMBOL_GPL(mas_prev);
>
> +/**
> + * mas_prev_range() - Advance to the previous range
> + * @mas: The maple state
> + * @min: The minimum value to check.
> + *
> + * Sets @mas->index and @mas->last to the range.
> + * Must hold rcu_read_lock or the write lock.
> + * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> + * searchable nodes.
> + *
> + * Return: the previous value or %NULL.
> + */
> +void *mas_prev_range(struct ma_state *mas, unsigned long min)
> +{
> + void *entry = NULL;
> +
> + if (mas_prev_setup(mas, min, &entry))
> + return entry;
> +
> + return mas_prev_slot(mas, min, true);
> +}
> +EXPORT_SYMBOL_GPL(mas_prev_slot);

Hi Liam,

I guess you want to export the mas_prev_range symbol instead of mas_prev_slot.

> +
> /**
> * mt_prev() - get the previous value in the maple tree
> * @mt: The maple tree
> @@ -6116,20 +6153,15 @@ void *mas_find_range(struct ma_state *mas, unsigned long max)
> EXPORT_SYMBOL_GPL(mas_find_range);
>
> /**
> - * mas_find_rev: On the first call, find the first non-null entry at or below
> - * mas->index down to %min. Otherwise find the first non-null entry below
> - * mas->index down to %min.
> - * @mas: The maple state
> - * @min: The minimum value to check.
> + * mas_find_rev_setup() - Internal function to set up mas_find_*_rev()
> *
> - * Must hold rcu_read_lock or the write lock.
> - * If an entry exists, last and index are updated accordingly.
> - * May set @mas->node to MAS_NONE.
> - *
> - * Return: The entry or %NULL.
> + * Returns: True if entry is the answer, false otherwise.
> */
> -void *mas_find_rev(struct ma_state *mas, unsigned long min)
> +static inline bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
> + void **entry)
> {
> + *entry = NULL;
> +
> if (unlikely(mas_is_none(mas))) {
> if (mas->index <= min)
> goto none;
> @@ -6141,7 +6173,7 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
> if (unlikely(mas_is_paused(mas))) {
> if (unlikely(mas->index <= min)) {
> mas->node = MAS_NONE;
> - return NULL;
> + return true;
> }
> mas->node = MAS_START;
> mas->last = --mas->index;
> @@ -6149,14 +6181,12 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
>
> if (unlikely(mas_is_start(mas))) {
> /* First run or continue */
> - void *entry;
> -
> if (mas->index < min)
> - return NULL;
> + return true;
>
> - entry = mas_walk(mas);
> - if (entry)
> - return entry;
> + *entry = mas_walk(mas);
> + if (*entry)
> + return true;
> }
>
> if (unlikely(!mas_searchable(mas))) {
> @@ -6170,22 +6200,72 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
> */
> mas->last = mas->index = 0;
> mas->node = MAS_ROOT;
> - return mas_root(mas);
> + *entry = mas_root(mas);
> + return true;
> }
> }
>
> if (mas->index < min)
> - return NULL;
> + return true;
>
> - /* Retries on dead nodes handled by mas_prev_slot */
> - return mas_prev_slot(mas, min, false);
> + return false;
>
> none:
> mas->node = MAS_NONE;
> - return NULL;
> + return true;
> +}
> +
> +/**
> + * mas_find_rev: On the first call, find the first non-null entry at or below
> + * mas->index down to %min. Otherwise find the first non-null entry below
> + * mas->index down to %min.
> + * @mas: The maple state
> + * @min: The minimum value to check.
> + *
> + * Must hold rcu_read_lock or the write lock.
> + * If an entry exists, last and index are updated accordingly.
> + * May set @mas->node to MAS_NONE.
> + *
> + * Return: The entry or %NULL.
> + */
> +void *mas_find_rev(struct ma_state *mas, unsigned long min)
> +{
> + void *entry;
> +
> + if (mas_find_rev_setup(mas, min, &entry))
> + return entry;
> +
> + /* Retries on dead nodes handled by mas_prev_slot */
> + return mas_prev_slot(mas, min, false);
> +
> }
> EXPORT_SYMBOL_GPL(mas_find_rev);
>
> +/**
> + * mas_find_range_rev: On the first call, find the first non-null entry at or
> + * below mas->index down to %min. Otherwise advance to the previous slot after
> + * mas->index down to %min.
> + * @mas: The maple state
> + * @min: The minimum value to check.
> + *
> + * Must hold rcu_read_lock or the write lock.
> + * If an entry exists, last and index are updated accordingly.
> + * May set @mas->node to MAS_NONE.
> + *
> + * Return: The entry or %NULL.
> + */
> +void *mas_find_range_rev(struct ma_state *mas, unsigned long min)
> +{
> + void *entry;
> +
> + if (mas_find_rev_setup(mas, min, &entry))
> + return entry;
> +
> + /* Retries on dead nodes handled by mas_prev_slot */
> + return mas_prev_slot(mas, min, true);
> +}
> +EXPORT_SYMBOL_GPL(mas_find_range_rev);
> +
> /**
> * mas_erase() - Find the range in which index resides and erase the entire
> * range.
>
> --
> 2.39.2
>

2023-05-08 16:29:47

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH v2 31/36] maple_tree: Add mas_prev_range() and mas_find_range_rev interface

* Vernon Yang <[email protected]> [230508 09:27]:
> On Fri, May 05, 2023 at 01:41:59PM -0400, Liam R. Howlett wrote:
> > Some users of the maple tree may want to move to the previous range
> > regardless of the value stored there. Add this interface as well as the
> > 'find' variant to support walking to the first value, then iterating
> > over the previous ranges.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > include/linux/maple_tree.h | 1 +
> > lib/maple_tree.c | 160 +++++++++++++++++++++++++++----------
> > 2 files changed, 121 insertions(+), 40 deletions(-)
> >
> > diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
> > index a4cd8f891a090..542b09118a09f 100644
> > --- a/include/linux/maple_tree.h
> > +++ b/include/linux/maple_tree.h
> > @@ -467,6 +467,7 @@ void mas_destroy(struct ma_state *mas);
> > int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);
> >
> > void *mas_prev(struct ma_state *mas, unsigned long min);
> > +void *mas_prev_range(struct ma_state *mas, unsigned long max);
> > void *mas_next(struct ma_state *mas, unsigned long max);
> > void *mas_next_range(struct ma_state *mas, unsigned long max);
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index e050fd1f7cce8..f060c71965c0d 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -5924,18 +5924,8 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max)
> > }
> > EXPORT_SYMBOL_GPL(mt_next);
> >
> > -/**
> > - * mas_prev() - Get the previous entry
> > - * @mas: The maple state
> > - * @min: The minimum value to check.
> > - *
> > - * Must hold rcu_read_lock or the write lock.
> > - * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> > - * searchable nodes.
> > - *
> > - * Return: the previous value or %NULL.
> > - */
> > -void *mas_prev(struct ma_state *mas, unsigned long min)
> > +static inline bool mas_prev_setup(struct ma_state *mas, unsigned long min,
> > + void **entry)
> > {
> > if (mas->index <= min)
> > goto none;
> > @@ -5953,7 +5943,8 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
> > if (!mas->index)
> > goto none;
> > mas->index = mas->last = 0;
> > - return mas_root(mas);
> > + *entry = mas_root(mas);
> > + return true;
> > }
> >
> > if (mas_is_none(mas)) {
> > @@ -5961,18 +5952,64 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
> > /* Walked to out-of-range pointer? */
> > mas->index = mas->last = 0;
> > mas->node = MAS_ROOT;
> > - return mas_root(mas);
> > + *entry = mas_root(mas);
> > + return true;
> > }
> > - return NULL;
> > + return true;
> > }
> > - return mas_prev_slot(mas, min, false);
> > +
> > + return false;
> >
> > none:
> > mas->node = MAS_NONE;
> > - return NULL;
> > + return true;
> > +}
> > +
> > +/**
> > + * mas_prev() - Get the previous entry
> > + * @mas: The maple state
> > + * @min: The minimum value to check.
> > + *
> > + * Must hold rcu_read_lock or the write lock.
> > + * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> > + * searchable nodes.
> > + *
> > + * Return: the previous value or %NULL.
> > + */
> > +void *mas_prev(struct ma_state *mas, unsigned long min)
> > +{
> > + void *entry = NULL;
> > +
> > + if (mas_prev_setup(mas, min, &entry))
> > + return entry;
> > +
> > + return mas_prev_slot(mas, min, false);
> > }
> > EXPORT_SYMBOL_GPL(mas_prev);
> >
> > +/**
> > + * mas_prev_range() - Advance to the previous range
> > + * @mas: The maple state
> > + * @min: The minimum value to check.
> > + *
> > + * Sets @mas->index and @mas->last to the range.
> > + * Must hold rcu_read_lock or the write lock.
> > + * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
> > + * searchable nodes.
> > + *
> > + * Return: the previous value or %NULL.
> > + */
> > +void *mas_prev_range(struct ma_state *mas, unsigned long min)
> > +{
> > + void *entry = NULL;
> > +
> > + if (mas_prev_setup(mas, min, &entry))
> > + return entry;
> > +
> > + return mas_prev_slot(mas, min, true);
> > +}
> > +EXPORT_SYMBOL_GPL(mas_prev_slot);
>
> Hi Liam,
>
> I guess you want to export the mas_prev_range symbol instead of mas_prev_slot.

Yes.. and mas_prev_slot should be static.

Thanks for catching this. I noticed this only after the bot complained
about non-static functions in this series.
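
The intended end state, roughly (a sketch only; the helper's third
parameter name is my guess from how it is called in this patch):

        /* keep the low-level helper internal */
        static void *mas_prev_slot(struct ma_state *mas, unsigned long min,
                                   bool empty);
        ...
        /* export the new user-facing interface instead */
        EXPORT_SYMBOL_GPL(mas_prev_range);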


Regards,
Liam

2023-05-09 12:43:06

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH v2 32/36] maple_tree: Clear up index and last setting in single entry tree



On 2023/5/6 01:42, Liam R. Howlett wrote:
> When there is a single entry tree (range of 0-0 pointing to an entry),
> then ensure the limit is either 0-0 or 1-oo, depending on where the user
> walks. Ensure the correct node setting as well; either MAS_ROOT or
> MAS_NONE.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/maple_tree.c | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index f060c71965c0d..914399519cf54 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5022,24 +5022,25 @@ void *mas_walk(struct ma_state *mas)
> {
> void *entry;
>
> + if (mas_is_none(mas) || mas_is_paused(mas))
if (mas_is_none(mas) || mas_is_paused(mas) || mas_is_ptr(mas))
> + mas->node = MAS_START;

Hi, Liam
There is an issue: the user space test program does not pass with this
patchset. I tested it based on 47cba14ce6fc4 (linux-next/master).

The reason is that mas_walk() does not handle the state that mas is
root. The root cause is that mas_start() only handles the start state,
and returns NULL for the root state. When encountering the root state,
we can reset to start so that it is handled in mas_start().

log:
BUG at check_state_handling:3076 (1)
maple_tree(0x55d6a9838ca0) flags 1, height 0 root 0x1234500
0: 0x1234500
Pass: 453406336 Run:453406337
maple: ../../../lib/test_maple_tree.c:3076: check_state_handling:
Assertion `0' failed.
Aborted (core dumped)
> retry:
> entry = mas_state_walk(mas);
> - if (mas_is_start(mas))
> + if (mas_is_start(mas)) {
> goto retry;
> -
> - if (mas_is_ptr(mas)) {
> + } else if (mas_is_none(mas)) {
> + mas->index = 0;
> + mas->last = ULONG_MAX;
> + } else if (mas_is_ptr(mas)) {
> if (!mas->index) {
> mas->last = 0;
> - } else {
> - mas->index = 1;
> - mas->last = ULONG_MAX;
> + return entry;
> }
> - return entry;
> - }
>
> - if (mas_is_none(mas)) {
> - mas->index = 0;
> + mas->index = 1;
> mas->last = ULONG_MAX;
> + mas->node = MAS_NONE;
> + return NULL;
> }
>
> return entry;

2023-05-12 16:13:48

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH v2 32/36] maple_tree: Clear up index and last setting in single entry tree

* Peng Zhang <[email protected]> [230509 08:39]:
>
>
> On 2023/5/6 01:42, Liam R. Howlett wrote:
> > When there is a single entry tree (range of 0-0 pointing to an entry),
> > then ensure the limit is either 0-0 or 1-oo, depending on where the user
> > walks. Ensure the correct node setting as well; either MAS_ROOT or
> > MAS_NONE.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > lib/maple_tree.c | 21 +++++++++++----------
> > 1 file changed, 11 insertions(+), 10 deletions(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index f060c71965c0d..914399519cf54 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -5022,24 +5022,25 @@ void *mas_walk(struct ma_state *mas)
> > {
> > void *entry;
> > + if (mas_is_none(mas) || mas_is_paused(mas))
> if (mas_is_none(mas) || mas_is_paused(mas) || mas_is_ptr(mas))
> > + mas->node = MAS_START;
>
> Hi, Liam
> There is an issue: the user space test program does not pass with
> this patchset. I tested it based on 47cba14ce6fc4 (linux-next/master).
>
> The reason is that mas_walk() does not handle the state that mas is
> root. The root cause is that mas_start() only handles the start state,
> and returns NULL for the root state. When encountering the root state,
> we can reset to start so that it is handled in mas_start().

Thanks. I'll have a look and adjust v3.

>
> log:
> BUG at check_state_handling:3076 (1)
> maple_tree(0x55d6a9838ca0) flags 1, height 0 root 0x1234500
> 0: 0x1234500
> Pass: 453406336 Run:453406337
> maple: ../../../lib/test_maple_tree.c:3076: check_state_handling: Assertion
> `0' failed.
> Aborted (core dumped)
> > retry:
> > entry = mas_state_walk(mas);
> > - if (mas_is_start(mas))
> > + if (mas_is_start(mas)) {
> > goto retry;
> > -
> > - if (mas_is_ptr(mas)) {
> > + } else if (mas_is_none(mas)) {
> > + mas->index = 0;
> > + mas->last = ULONG_MAX;
> > + } else if (mas_is_ptr(mas)) {
> > if (!mas->index) {
> > mas->last = 0;
> > - } else {
> > - mas->index = 1;
> > - mas->last = ULONG_MAX;
> > + return entry;
> > }
> > - return entry;
> > - }
> > - if (mas_is_none(mas)) {
> > - mas->index = 0;
> > + mas->index = 1;
> > mas->last = ULONG_MAX;
> > + mas->node = MAS_NONE;
> > + return NULL;
> > }
> > return entry;

2023-05-12 17:55:42

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH v2 32/36] maple_tree: Clear up index and last setting in single entry tree

* Liam R. Howlett <[email protected]> [230512 11:54]:
> * Peng Zhang <[email protected]> [230509 08:39]:
> >
> >
> > On 2023/5/6 01:42, Liam R. Howlett wrote:
> > > When there is a single entry tree (range of 0-0 pointing to an entry),
> > > then ensure the limit is either 0-0 or 1-oo, depending on where the user
> > > walks. Ensure the correct node setting as well; either MAS_ROOT or
> > > MAS_NONE.
> > >
> > > Signed-off-by: Liam R. Howlett <[email protected]>
> > > ---
> > > lib/maple_tree.c | 21 +++++++++++----------
> > > 1 file changed, 11 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > > index f060c71965c0d..914399519cf54 100644
> > > --- a/lib/maple_tree.c
> > > +++ b/lib/maple_tree.c
> > > @@ -5022,24 +5022,25 @@ void *mas_walk(struct ma_state *mas)
> > > {
> > > void *entry;
> > > + if (mas_is_none(mas) || mas_is_paused(mas))
> > if (mas_is_none(mas) || mas_is_paused(mas) || mas_is_ptr(mas))
> > > + mas->node = MAS_START;
> >
> > Hi, Liam
> > There is an issue: the user space test program does not pass with
> > this patchset. I tested it based on 47cba14ce6fc4 (linux-next/master).
> >
> > The reason is that mas_walk() does not handle the state that mas is
> > root. The root cause is that mas_start() only handles the start state,
> > and returns NULL for the root state. When encountering the root state,
> > we can reset to start so that it is handled in mas_start().
>
> Thanks. I'll have a look and adjust v3.

This was introduced with the fix for the race condition between v1 and
v2.

It is probably best to reset to MAS_START instead of relying on the
potentially stale data.
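
Something like your one-liner above, i.e. (a sketch on top of this patch):

        /* treat a stale root/none/paused state as a fresh walk */
        if (mas_is_none(mas) || mas_is_paused(mas) || mas_is_ptr(mas))
                mas->node = MAS_START;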


>
> >
> > log:
> > BUG at check_state_handling:3076 (1)
> > maple_tree(0x55d6a9838ca0) flags 1, height 0 root 0x1234500
> > 0: 0x1234500
> > Pass: 453406336 Run:453406337
> > maple: ../../../lib/test_maple_tree.c:3076: check_state_handling: Assertion
> > `0' failed.
> > Aborted (core dumped)
> > > retry:
> > > entry = mas_state_walk(mas);
> > > - if (mas_is_start(mas))
> > > + if (mas_is_start(mas)) {
> > > goto retry;
> > > -
> > > - if (mas_is_ptr(mas)) {
> > > + } else if (mas_is_none(mas)) {
> > > + mas->index = 0;
> > > + mas->last = ULONG_MAX;
> > > + } else if (mas_is_ptr(mas)) {
> > > if (!mas->index) {
> > > mas->last = 0;
> > > - } else {
> > > - mas->index = 1;
> > > - mas->last = ULONG_MAX;
> > > + return entry;
> > > }
> > > - return entry;
> > > - }
> > > - if (mas_is_none(mas)) {
> > > - mas->index = 0;
> > > + mas->index = 1;
> > > mas->last = ULONG_MAX;
> > > + mas->node = MAS_NONE;
> > > + return NULL;
> > > }
> > > return entry;