2023-04-25 14:11:52

by Liam R. Howlett

Subject: [PATCH 00/34] Maple tree mas_{next,prev}_range() and cleanup

This series contains a number of clean ups to make the code more
usable, the addition of debug output formatting, and the addition of
printing the maple state information in the WARN_ON/BUG_ON code.

There is also work done here to keep nodes active during iterations,
reducing the need to re-walk the tree.

Finally, a new interface is added to move to the next or previous
range regardless of whether there is anything in that range.
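
As a rough illustration only (assuming the final interfaces keep the
same shape as mas_next()/mas_prev(), i.e. mas_next_range(&mas, max)
and mas_prev_range(&mas, min), and with process_range() standing in
for whatever the caller does with each range), iterating over every
range, occupied or empty, would look roughly like:

    MA_STATE(mas, &tree, 0, 0);
    void *entry;

    rcu_read_lock();
    entry = mas_walk(&mas);        /* position on the first range */
    while (1) {
        /* mas.index .. mas.last is the current range; entry may be NULL */
        process_range(mas.index, mas.last, entry);
        if (mas.last >= ULONG_MAX)
            break;
        entry = mas_next_range(&mas, ULONG_MAX);
    }
    rcu_read_unlock();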

The organisation of the patches is as follows:

0001-0004 - Small clean ups
0005-0018 - Additional debug options and WARN_ON/BUG_ON changes
0019 - Test module __init and __exit addition
0020-0021 - More functional clean ups
0022-0026 - Changes to keep nodes active
0027-0034 - Add new mas_{prev,next}_range()

Liam R. Howlett (34):
maple_tree: Fix static analyser cppcheck issue
maple_tree: Clean up mas_parent_enum()
maple_tree: Avoid unnecessary ascending
maple_tree: Clean up mas_dfs_postorder()
maple_tree: Add format option to mt_dump()
maple_tree: Add debug BUG_ON and WARN_ON variants
maple_tree: Convert BUG_ON() to MT_BUG_ON()
maple_tree: Change RCU checks to WARN_ON() instead of BUG_ON()
maple_tree: Convert debug code to use MT_WARN_ON() and MAS_WARN_ON()
maple_tree: Use MAS_BUG_ON() when setting a leaf node as a parent
maple_tree: Use MAS_BUG_ON() in mas_set_height()
maple_tree: Use MAS_BUG_ON() from mas_topiary_range()
maple_tree: Use MAS_WR_BUG_ON() in mas_store_prealloc()
maple_tree: Use MAS_BUG_ON() prior to calling mas_meta_gap()
maple_tree: Return error on mte_pivots() out of range
maple_tree: Make test code work without debug enabled
mm: Update validate_mm() to use vma iterator
mm: Update vma_iter_store() to use MAS_WARN_ON()
maple_tree: Add __init and __exit to test module
maple_tree: Remove unnecessary check from mas_destroy()
maple_tree: mas_start() reset depth on dead node
mm/mmap: Change do_vmi_align_munmap() for maple tree iterator changes
maple_tree: Try harder to keep active node after mas_next()
maple_tree: Try harder to keep active node with mas_prev()
maple_tree: Clear up index and last setting in single entry tree
maple_tree: Update testing code for mas_{next,prev,walk}
maple_tree: Introduce mas_next_slot() interface
maple_tree: Revise limit checks in mas_empty_area{_rev}()
maple_tree: Introduce mas_prev_slot() interface
maple_tree: Fix comments for mas_next_entry() and mas_prev_entry()
maple_tree: Add mas_next_range() and mas_find_range() interfaces
maple_tree: Add mas_prev_range() and mas_find_range_rev interface
maple_tree: Add testing for mas_{prev,next}_range()
mm: Add vma_iter_{next,prev}_range() to vma iterator

include/linux/maple_tree.h | 128 ++-
include/linux/mm.h | 13 +
include/linux/mmdebug.h | 14 +
lib/Kconfig.debug | 10 +-
lib/maple_tree.c | 1042 +++++++++++++++----------
lib/test_maple_tree.c | 993 ++++++++++++++++++++---
mm/debug.c | 9 +
mm/internal.h | 20 +-
mm/mmap.c | 107 ++-
tools/testing/radix-tree/linux/init.h | 1 +
tools/testing/radix-tree/maple.c | 164 ++--
11 files changed, 1823 insertions(+), 678 deletions(-)

--
2.39.2


2023-04-25 14:11:57

by Liam R. Howlett

Subject: [PATCH 03/34] maple_tree: Avoid unnecessary ascending

The maple tree node limits are implied by the parent. When walking up
the tree, the limit may not be known until a slot that does not have
implied limits is encountered. However, if the node is the left-most
or right-most node, walking up to find that limit can be skipped.

This commit also fixes the debug/testing code, which was not setting
the limit on walking down the tree, as that optimization is not
compatible with this change.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 6 ++++++
tools/testing/radix-tree/maple.c | 4 ++++
2 files changed, 10 insertions(+)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index ac0245dd88dad..60bae5be008a6 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1132,6 +1132,12 @@ static int mas_ascend(struct ma_state *mas)
return 0;
}

+ if (!mas->min)
+ set_min = true;
+
+ if (mas->max == ULONG_MAX)
+ set_max = true;
+
min = 0;
max = ULONG_MAX;
do {
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 9286d3baa12d6..75df543e019c9 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -35259,6 +35259,7 @@ static void mas_dfs_preorder(struct ma_state *mas)

struct maple_enode *prev;
unsigned char end, slot = 0;
+ unsigned long *pivots;

if (mas->node == MAS_START) {
mas_start(mas);
@@ -35291,6 +35292,9 @@ static void mas_dfs_preorder(struct ma_state *mas)
mas_ascend(mas);
goto walk_up;
}
+ pivots = ma_pivots(mte_to_node(prev), mte_node_type(prev));
+ mas->max = mas_safe_pivot(mas, pivots, slot, mte_node_type(prev));
+ mas->min = mas_safe_min(mas, pivots, slot);

return;
done:
--
2.39.2

2023-04-25 14:12:05

by Liam R. Howlett

Subject: [PATCH 02/34] maple_tree: Clean up mas_parent_enum()

From: "Liam R. Howlett" <[email protected]>

mas_parent_enum() is a simple wrapper for mte_parent_enum(), which is
only called from that wrapper. Remove the indirection by inlining
mte_parent_enum() into mas_parent_enum().

At the same time, clean up the bit masking of the root pointer since it
cannot be set by the time the bit masking occurs. Change the check on
the root bit to a WARN_ON(), and fix the verification code to not
trigger the WARN_ON() before checking if the node is root.

Reported-by: Wei Yang <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 9cf4fca42310c..ac0245dd88dad 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -428,25 +428,23 @@ static inline unsigned long mte_parent_slot_mask(unsigned long parent)
* mas_parent_enum() - Return the maple_type of the parent from the stored
* parent type.
* @mas: The maple state
- * @node: The maple_enode to extract the parent's enum
+ * @enode: The maple_enode to extract the parent's enum
* Return: The node->parent maple_type
*/
static inline
-enum maple_type mte_parent_enum(struct maple_enode *p_enode,
- struct maple_tree *mt)
+enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
{
unsigned long p_type;

- p_type = (unsigned long)p_enode;
- if (p_type & MAPLE_PARENT_ROOT)
- return 0; /* Validated in the caller. */
+ p_type = (unsigned long)mte_to_node(enode)->parent;
+ if (WARN_ON(p_type & MAPLE_PARENT_ROOT))
+ return 0;

p_type &= MAPLE_NODE_MASK;
- p_type = p_type & ~(MAPLE_PARENT_ROOT | mte_parent_slot_mask(p_type));
-
+ p_type &= ~mte_parent_slot_mask(p_type);
switch (p_type) {
case MAPLE_PARENT_RANGE64: /* or MAPLE_PARENT_ARANGE64 */
- if (mt_is_alloc(mt))
+ if (mt_is_alloc(mas->tree))
return maple_arange_64;
return maple_range_64;
}
@@ -454,12 +452,6 @@ enum maple_type mte_parent_enum(struct maple_enode *p_enode,
return 0;
}

-static inline
-enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
-{
- return mte_parent_enum(ma_enode_ptr(mte_to_node(enode)->parent), mas->tree);
-}
-
/*
* mte_set_parent() - Set the parent node and encode the slot
* @enode: The encoded maple node.
@@ -7008,14 +7000,16 @@ static void mas_validate_parent_slot(struct ma_state *mas)
{
struct maple_node *parent;
struct maple_enode *node;
- enum maple_type p_type = mas_parent_enum(mas, mas->node);
- unsigned char p_slot = mte_parent_slot(mas->node);
+ enum maple_type p_type;
+ unsigned char p_slot;
void __rcu **slots;
int i;

if (mte_is_root(mas->node))
return;

+ p_slot = mte_parent_slot(mas->node);
+ p_type = mas_parent_enum(mas, mas->node);
parent = mte_parent(mas->node);
slots = ma_slots(parent, p_type);
MT_BUG_ON(mas->tree, mas_mn(mas) == parent);
--
2.39.2

2023-04-25 14:12:10

by Liam R. Howlett

Subject: [PATCH 04/34] maple_tree: Clean up mas_dfs_postorder()

Convert the loop type to ensure all variables are set, which keeps the
compiler happy, and use the mas_is_none() function instead of
explicitly checking the node in the maple state.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 60bae5be008a6..dcab027b73440 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6740,15 +6740,12 @@ static void mas_dfs_postorder(struct ma_state *mas, unsigned long max)

mas->node = mn;
mas_ascend(mas);
- while (mas->node != MAS_NONE) {
+ do {
p = mas->node;
p_min = mas->min;
p_max = mas->max;
mas_prev_node(mas, 0);
- }
-
- if (p == MAS_NONE)
- return;
+ } while (!mas_is_none(mas));

mas->node = p;
mas->max = p_max;
--
2.39.2

2023-04-25 14:12:16

by Liam R. Howlett

Subject: [PATCH 01/34] maple_tree: Fix static analyser cppcheck issue

A static analyser (cppcheck) of the maple tree code noticed that the
split variable is used to index into an array before the variable
itself is checked. Fix this issue by reordering the conditions so that
the variable is checked first.

Reported-by: David Binderman <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 110a36479dced..9cf4fca42310c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1943,8 +1943,9 @@ static inline int mab_calc_split(struct ma_state *mas,
* causes one node to be deficient.
* NOTE: mt_min_slots is 1 based, b_end and split are zero.
*/
- while (((bn->pivot[split] - min) < slot_count - 1) &&
- (split < slot_count - 1) && (b_end - split > slot_min))
+ while ((split < slot_count - 1) &&
+ ((bn->pivot[split] - min) < slot_count - 1) &&
+ (b_end - split > slot_min))
split++;
}

--
2.39.2

2023-04-25 14:12:18

by Liam R. Howlett

Subject: [PATCH 05/34] maple_tree: Add format option to mt_dump()

From: "Liam R. Howlett" <[email protected]>

Allow different formatting strings to be used when dumping the tree.
Hex and decimal formats are currently supported.
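
As a minimal illustration of the resulting API (the call sites below
are hypothetical), a debugging site picks the format that matches its
index space:

    /* VMA trees index by address, so hex output is the readable choice */
    mt_dump(&mm->mm_mt, mt_dump_hex);

    /* small test trees with counting indices read better in decimal */
    mt_dump(&tree, mt_dump_dec);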

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 9 +++-
lib/maple_tree.c | 87 +++++++++++++++++++++-----------
lib/test_maple_tree.c | 10 ++--
mm/internal.h | 4 +-
mm/mmap.c | 8 +--
tools/testing/radix-tree/maple.c | 12 ++---
6 files changed, 82 insertions(+), 48 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fadb5f5978b6..140fb271be4a4 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -670,10 +670,15 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max);


#ifdef CONFIG_DEBUG_MAPLE_TREE
+enum mt_dump_format {
+ mt_dump_dec,
+ mt_dump_hex,
+};
+
extern atomic_t maple_tree_tests_run;
extern atomic_t maple_tree_tests_passed;

-void mt_dump(const struct maple_tree *mt);
+void mt_dump(const struct maple_tree *mt, enum mt_dump_format format);
void mt_validate(struct maple_tree *mt);
void mt_cache_shrink(void);
#define MT_BUG_ON(__tree, __x) do { \
@@ -681,7 +686,7 @@ void mt_cache_shrink(void);
if (__x) { \
pr_info("BUG at %s:%d (%u)\n", \
__func__, __LINE__, __x); \
- mt_dump(__tree); \
+ mt_dump(__tree, mt_dump_hex); \
pr_info("Pass: %u Run:%u\n", \
atomic_read(&maple_tree_tests_passed), \
atomic_read(&maple_tree_tests_run)); \
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index dcab027b73440..535efc39f7758 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5700,7 +5700,7 @@ void *mas_store(struct ma_state *mas, void *entry)
trace_ma_write(__func__, mas, 0, entry);
#ifdef CONFIG_DEBUG_MAPLE_TREE
if (mas->index > mas->last)
- pr_err("Error %lu > %lu %p\n", mas->index, mas->last, entry);
+ pr_err("Error %lX > %lX %p\n", mas->index, mas->last, entry);
MT_BUG_ON(mas->tree, mas->index > mas->last);
if (mas->index > mas->last) {
mas_set_err(mas, -EINVAL);
@@ -6754,22 +6754,33 @@ static void mas_dfs_postorder(struct ma_state *mas, unsigned long max)

/* Tree validations */
static void mt_dump_node(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth);
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format);
static void mt_dump_range(unsigned long min, unsigned long max,
- unsigned int depth)
+ unsigned int depth, enum mt_dump_format format)
{
static const char spaces[] = " ";

- if (min == max)
- pr_info("%.*s%lu: ", depth * 2, spaces, min);
- else
- pr_info("%.*s%lu-%lu: ", depth * 2, spaces, min, max);
+ switch(format) {
+ case mt_dump_hex:
+ if (min == max)
+ pr_info("%.*s%lx: ", depth * 2, spaces, min);
+ else
+ pr_info("%.*s%lx-%lx: ", depth * 2, spaces, min, max);
+ break;
+ default:
+ case mt_dump_dec:
+ if (min == max)
+ pr_info("%.*s%lu: ", depth * 2, spaces, min);
+ else
+ pr_info("%.*s%lu-%lu: ", depth * 2, spaces, min, max);
+ }
}

static void mt_dump_entry(void *entry, unsigned long min, unsigned long max,
- unsigned int depth)
+ unsigned int depth, enum mt_dump_format format)
{
- mt_dump_range(min, max, depth);
+ mt_dump_range(min, max, depth, format);

if (xa_is_value(entry))
pr_cont("value %ld (0x%lx) [%p]\n", xa_to_value(entry),
@@ -6783,7 +6794,8 @@ static void mt_dump_entry(void *entry, unsigned long min, unsigned long max,
}

static void mt_dump_range64(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_range_64 *node = &mte_to_node(entry)->mr64;
bool leaf = mte_is_leaf(entry);
@@ -6791,8 +6803,16 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
int i;

pr_cont(" contents: ");
- for (i = 0; i < MAPLE_RANGE64_SLOTS - 1; i++)
- pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
+ for (i = 0; i < MAPLE_RANGE64_SLOTS - 1; i++) {
+ switch(format) {
+ case mt_dump_hex:
+ pr_cont("%p %lX ", node->slot[i], node->pivot[i]);
+ break;
+ default:
+ case mt_dump_dec:
+ pr_cont("%p %lu ", node->slot[i], node->pivot[i]);
+ }
+ }
pr_cont("%p\n", node->slot[i]);
for (i = 0; i < MAPLE_RANGE64_SLOTS; i++) {
unsigned long last = max;
@@ -6805,24 +6825,32 @@ static void mt_dump_range64(const struct maple_tree *mt, void *entry,
break;
if (leaf)
mt_dump_entry(mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);
else if (node->slot[i])
mt_dump_node(mt, mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);

if (last == max)
break;
if (last > max) {
- pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
+ switch(format) {
+ case mt_dump_hex:
+ pr_err("node %p last (%lx) > max (%lx) at pivot %d!\n",
node, last, max, i);
- break;
+ break;
+ default:
+ case mt_dump_dec:
+ pr_err("node %p last (%lu) > max (%lu) at pivot %d!\n",
+ node, last, max, i);
+ }
}
first = last + 1;
}
}

static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_arange_64 *node = &mte_to_node(entry)->ma64;
bool leaf = mte_is_leaf(entry);
@@ -6847,10 +6875,10 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
break;
if (leaf)
mt_dump_entry(mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);
else if (node->slot[i])
mt_dump_node(mt, mt_slot(mt, node->slot, i),
- first, last, depth + 1);
+ first, last, depth + 1, format);

if (last == max)
break;
@@ -6864,13 +6892,14 @@ static void mt_dump_arange64(const struct maple_tree *mt, void *entry,
}

static void mt_dump_node(const struct maple_tree *mt, void *entry,
- unsigned long min, unsigned long max, unsigned int depth)
+ unsigned long min, unsigned long max, unsigned int depth,
+ enum mt_dump_format format)
{
struct maple_node *node = mte_to_node(entry);
unsigned int type = mte_node_type(entry);
unsigned int i;

- mt_dump_range(min, max, depth);
+ mt_dump_range(min, max, depth, format);

pr_cont("node %p depth %d type %d parent %p", node, depth, type,
node ? node->parent : NULL);
@@ -6881,15 +6910,15 @@ static void mt_dump_node(const struct maple_tree *mt, void *entry,
if (min + i > max)
pr_cont("OUT OF RANGE: ");
mt_dump_entry(mt_slot(mt, node->slot, i),
- min + i, min + i, depth);
+ min + i, min + i, depth, format);
}
break;
case maple_leaf_64:
case maple_range_64:
- mt_dump_range64(mt, entry, min, max, depth);
+ mt_dump_range64(mt, entry, min, max, depth, format);
break;
case maple_arange_64:
- mt_dump_arange64(mt, entry, min, max, depth);
+ mt_dump_arange64(mt, entry, min, max, depth, format);
break;

default:
@@ -6897,16 +6926,16 @@ static void mt_dump_node(const struct maple_tree *mt, void *entry,
}
}

-void mt_dump(const struct maple_tree *mt)
+void mt_dump(const struct maple_tree *mt, enum mt_dump_format format)
{
void *entry = rcu_dereference_check(mt->ma_root, mt_locked(mt));

pr_info("maple_tree(%p) flags %X, height %u root %p\n",
mt, mt->ma_flags, mt_height(mt), entry);
if (!xa_is_node(entry))
- mt_dump_entry(entry, 0, 0, 0);
+ mt_dump_entry(entry, 0, 0, 0, format);
else if (entry)
- mt_dump_node(mt, entry, 0, mt_node_max(entry), 0);
+ mt_dump_node(mt, entry, 0, mt_node_max(entry), 0, format);
}
EXPORT_SYMBOL_GPL(mt_dump);

@@ -6959,7 +6988,7 @@ static void mas_validate_gaps(struct ma_state *mas)
mas_mn(mas), i,
mas_get_slot(mas, i), gap,
p_end, p_start);
- mt_dump(mas->tree);
+ mt_dump(mas->tree, mt_dump_hex);

MT_BUG_ON(mas->tree,
gap != p_end - p_start + 1);
@@ -6992,7 +7021,7 @@ static void mas_validate_gaps(struct ma_state *mas)
MT_BUG_ON(mas->tree, max_gap > mas->max);
if (ma_gaps(p_mn, mas_parent_enum(mas, mte))[p_slot] != max_gap) {
pr_err("gap %p[%u] != %lu\n", p_mn, p_slot, max_gap);
- mt_dump(mas->tree);
+ mt_dump(mas->tree, mt_dump_hex);
}

MT_BUG_ON(mas->tree,
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index f1db333270e9f..d6929270dd36a 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -219,7 +219,7 @@ static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
#ifndef __KERNEL__
if (verbose) {
rcu_barrier();
- mt_dump(mt);
+ mt_dump(mt, mt_dump_dec);
pr_info(" %s test of 0-%lu %luK in %d active (%d total)\n",
__func__, max, mt_get_alloc_size()/1024, mt_nr_allocated(),
mt_nr_tallocated());
@@ -248,7 +248,7 @@ static noinline void check_seq(struct maple_tree *mt, unsigned long max,
#ifndef __KERNEL__
if (verbose) {
rcu_barrier();
- mt_dump(mt);
+ mt_dump(mt, mt_dump_dec);
pr_info(" seq test of 0-%lu %luK in %d active (%d total)\n",
max, mt_get_alloc_size()/1024, mt_nr_allocated(),
mt_nr_tallocated());
@@ -893,7 +893,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
#if DEBUG_ALLOC_RANGE
pr_debug("\tInsert %lu-%lu\n", range[i] >> 12,
(range[i + 1] >> 12) - 1);
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
check_insert_range(mt, range[i] >> 12, (range[i + 1] >> 12) - 1,
xa_mk_value(range[i] >> 12), 0);
@@ -934,7 +934,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
xa_mk_value(req_range[i] >> 12)); /* pointer */
mt_validate(mt);
#if DEBUG_ALLOC_RANGE
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
}

@@ -1572,7 +1572,7 @@ static noinline void check_node_overwrite(struct maple_tree *mt)
mtree_test_store_range(mt, i*100, i*100 + 50, xa_mk_value(i*100));

mtree_test_store_range(mt, 319951, 367950, NULL);
- /*mt_dump(mt); */
+ /*mt_dump(mt, mt_dump_dec); */
mt_validate(mt);
}

diff --git a/mm/internal.h b/mm/internal.h
index 68410c6d97aca..4c195920f5656 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1051,13 +1051,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree);
+ mt_dump(vmi->mas.tree, mt_dump_hex);
}
if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.last < vma->vm_start)) {
printk("%lu < %lu\n", vmi->mas.last, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree);
+ mt_dump(vmi->mas.tree, mt_dump_hex);
}
#endif

diff --git a/mm/mmap.c b/mm/mmap.c
index 536bbb8fa0aef..1554f90d497ef 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -301,7 +301,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)

#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
extern void mt_validate(struct maple_tree *mt);
-extern void mt_dump(const struct maple_tree *mt);
+extern void mt_dump(const struct maple_tree *mt, enum mt_dump_format fmt);

/* Validate the maple tree */
static void validate_mm_mt(struct mm_struct *mm)
@@ -323,18 +323,18 @@ static void validate_mm_mt(struct mm_struct *mm)
pr_emerg("mt vma: %p %lu - %lu\n", vma_mt,
vma_mt->vm_start, vma_mt->vm_end);

- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
if (vma_mt->vm_end != mas.last + 1) {
pr_err("vma: %p vma_mt %lu-%lu\tmt %lu-%lu\n",
mm, vma_mt->vm_start, vma_mt->vm_end,
mas.index, mas.last);
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
}
VM_BUG_ON_MM(vma_mt->vm_end != mas.last + 1, mm);
if (vma_mt->vm_start != mas.index) {
pr_err("vma: %p vma_mt %p %lu - %lu doesn't match\n",
mm, vma_mt, vma_mt->vm_start, vma_mt->vm_end);
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
}
VM_BUG_ON_MM(vma_mt->vm_start != mas.index, mm);
}
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 75df543e019c9..ebcb3faf85ea9 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -1054,7 +1054,7 @@ static noinline void check_erase2_testset(struct maple_tree *mt,
if (entry_count)
MT_BUG_ON(mt, !mt_height(mt));
#if check_erase2_debug > 1
- mt_dump(mt);
+ mt_dump(mt, mt_dump_hex);
#endif
#if check_erase2_debug
pr_err("Done\n");
@@ -1085,7 +1085,7 @@ static noinline void check_erase2_testset(struct maple_tree *mt,
mas_for_each(&mas, foo, ULONG_MAX) {
if (xa_is_zero(foo)) {
if (addr == mas.index) {
- mt_dump(mas.tree);
+ mt_dump(mas.tree, mt_dump_hex);
pr_err("retry failed %lu - %lu\n",
mas.index, mas.last);
MT_BUG_ON(mt, 1);
@@ -34513,7 +34513,7 @@ static void *rcu_reader_rev(void *ptr)
if (mas.index != r_start) {
alt = xa_mk_value(index + i * 2 + 1 +
RCU_RANGE_COUNT);
- mt_dump(test->mt);
+ mt_dump(test->mt, mt_dump_dec);
printk("Error: %lu-%lu %p != %lu-%lu %p %p line %d i %d\n",
mas.index, mas.last, entry,
r_start, r_end, expected, alt,
@@ -35784,10 +35784,10 @@ void farmer_tests(void)
struct maple_node *node;
DEFINE_MTREE(tree);

- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

tree.ma_root = xa_mk_value(0);
- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

node = mt_alloc_one(GFP_KERNEL);
node->parent = (void *)((unsigned long)(&tree) | 1);
@@ -35797,7 +35797,7 @@ void farmer_tests(void)
node->mr64.pivot[1] = 1;
node->mr64.pivot[2] = 0;
tree.ma_root = mt_mk_node(node, maple_leaf_64);
- mt_dump(&tree);
+ mt_dump(&tree, mt_dump_dec);

node->parent = ma_parent_ptr(node);
ma_free_rcu(node);
--
2.39.2

2023-04-25 14:12:23

by Liam R. Howlett

Subject: [PATCH 06/34] maple_tree: Add debug BUG_ON and WARN_ON variants

Add debug macros that dump the maple state and/or the tree for both
the WARN_ON() and BUG_ON() variants.
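
As a sketch of the intended usage (the example condition mirrors a
later patch in this series rather than any one final call site), the
WARN variants return the result of the check so they can gate the
error path directly:

    /* Before: open-coded check, log, then a BUG with only the tree dumped */
    if (mas->index > mas->last)
        pr_err("Error %lX > %lX %p\n", mas->index, mas->last, entry);
    MT_BUG_ON(mas->tree, mas->index > mas->last);

    /* After: the macro logs, dumps the maple state and the tree, and
     * returns the check result.
     */
    if (MAS_WARN_ON(mas, mas->index > mas->last)) {
        mas_set_err(mas, -EINVAL);
        return NULL;
    }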

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 100 +++++++++++++++++++++++++++++++++++--
lib/maple_tree.c | 34 ++++++++++++-
2 files changed, 129 insertions(+), 5 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 140fb271be4a4..204d7941a39ec 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -482,13 +482,13 @@ static inline void mas_init(struct ma_state *mas, struct maple_tree *tree,
}

/* Checks if a mas has not found anything */
-static inline bool mas_is_none(struct ma_state *mas)
+static inline bool mas_is_none(const struct ma_state *mas)
{
return mas->node == MAS_NONE;
}

/* Checks if a mas has been paused */
-static inline bool mas_is_paused(struct ma_state *mas)
+static inline bool mas_is_paused(const struct ma_state *mas)
{
return mas->node == MAS_PAUSE;
}
@@ -679,6 +679,8 @@ extern atomic_t maple_tree_tests_run;
extern atomic_t maple_tree_tests_passed;

void mt_dump(const struct maple_tree *mt, enum mt_dump_format format);
+void mas_dump(const struct ma_state *mas);
+void mas_wr_dump(const struct ma_wr_state *wr_mas);
void mt_validate(struct maple_tree *mt);
void mt_cache_shrink(void);
#define MT_BUG_ON(__tree, __x) do { \
@@ -695,8 +697,100 @@ void mt_cache_shrink(void);
atomic_inc(&maple_tree_tests_passed); \
} \
} while (0)
+
+#define MAS_BUG_ON(__mas, __x) do { \
+ atomic_inc(&maple_tree_tests_run); \
+ if (__x) { \
+ pr_info("BUG at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_dump(__mas); \
+ mt_dump((__mas)->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+} while (0)
+
+#define MAS_WR_BUG_ON(__wrmas, __x) do { \
+ atomic_inc(&maple_tree_tests_run); \
+ if (__x) { \
+ pr_info("BUG at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_wr_dump(__wrmas); \
+ mas_dump((__wrmas)->mas); \
+ mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+} while (0)
+
+#define MT_WARN_ON(__tree, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mt_dump(__tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
+
+#define MAS_WARN_ON(__mas, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_dump(__mas); \
+ mt_dump((__mas)->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
+
+#define MAS_WR_WARN_ON(__wrmas, __x) ({ \
+ int ret = !!(__x); \
+ atomic_inc(&maple_tree_tests_run); \
+ if (ret) { \
+ pr_info("WARN at %s:%d (%u)\n", \
+ __func__, __LINE__, __x); \
+ mas_wr_dump(__wrmas); \
+ mas_dump((__wrmas)->mas); \
+ mt_dump((__wrmas)->mas->tree, mt_dump_hex); \
+ pr_info("Pass: %u Run:%u\n", \
+ atomic_read(&maple_tree_tests_passed), \
+ atomic_read(&maple_tree_tests_run)); \
+ dump_stack(); \
+ } else { \
+ atomic_inc(&maple_tree_tests_passed); \
+ } \
+ unlikely(ret); \
+})
#else
-#define MT_BUG_ON(__tree, __x) BUG_ON(__x)
+#define MT_BUG_ON(__tree, __x) BUG_ON(__x)
+#define MAS_BUG_ON(__mas, __x) BUG_ON(__x)
+#define MAS_WR_BUG_ON(__mas, __x) BUG_ON(__x)
+#define MT_WARN_ON(__tree, __x) WARN_ON(__x)
+#define MAS_WARN_ON(__mas, __x) WARN_ON(__x)
+#define MAS_WR_WARN_ON(__mas, __x) WARN_ON(__x)
#endif /* CONFIG_DEBUG_MAPLE_TREE */

#endif /*_LINUX_MAPLE_TREE_H */
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 535efc39f7758..a4c880192333e 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -240,12 +240,12 @@ static inline void mas_set_err(struct ma_state *mas, long err)
mas->node = MA_ERROR(err);
}

-static inline bool mas_is_ptr(struct ma_state *mas)
+static inline bool mas_is_ptr(const struct ma_state *mas)
{
return mas->node == MAS_ROOT;
}

-static inline bool mas_is_start(struct ma_state *mas)
+static inline bool mas_is_start(const struct ma_state *mas)
{
return mas->node == MAS_START;
}
@@ -7252,4 +7252,34 @@ void mt_validate(struct maple_tree *mt)
}
EXPORT_SYMBOL_GPL(mt_validate);

+void mas_dump(const struct ma_state *mas)
+{
+ pr_err("MAS: tree=%p enode=%p ", mas->tree, mas->node);
+ if (mas_is_none(mas))
+ pr_err("(MAS_NONE) ");
+ else if (mas_is_ptr(mas))
+ pr_err("(MAS_ROOT) ");
+ else if (mas_is_start(mas))
+ pr_err("(MAS_START) ");
+ else if (mas_is_paused(mas))
+ pr_err("(MAS_PAUSED) ");
+
+ pr_err("[%u] index=%lx last=%lx\n", mas->offset, mas->index, mas->last);
+ pr_err(" min=%lx max=%lx alloc=%p, depth=%u, flags=%x\n",
+ mas->min, mas->max, mas->alloc, mas->depth, mas->mas_flags);
+ if (mas->index > mas->last)
+ pr_err("Check index & last\n");
+}
+EXPORT_SYMBOL_GPL(mas_dump);
+
+void mas_wr_dump(const struct ma_wr_state *wr_mas)
+{
+ pr_err("WR_MAS: node=%p r_min=%lx r_max=%lx\n",
+ wr_mas->node, wr_mas->r_min, wr_mas->r_max);
+ pr_err(" type=%u off_end=%u, node_end=%u, end_piv=%lx\n",
+ wr_mas->type, wr_mas->offset_end, wr_mas->node_end,
+ wr_mas->end_piv);
+}
+EXPORT_SYMBOL_GPL(mas_wr_dump);
+
#endif /* CONFIG_DEBUG_MAPLE_TREE */
--
2.39.2

2023-04-25 14:12:33

by Liam R. Howlett

Subject: [PATCH 08/34] maple_tree: Change RCU checks to WARN_ON() instead of BUG_ON()

From: "Liam R. Howlett" <[email protected]>

If RCU is enabled and the tree isn't locked, just warn the user and
avoid crashing the kernel.

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 204d7941a39ec..ed92abf4c1fb5 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -616,7 +616,7 @@ static inline void mt_clear_in_rcu(struct maple_tree *mt)
return;

if (mt_external_lock(mt)) {
- BUG_ON(!mt_lock_is_held(mt));
+ WARN_ON(!mt_lock_is_held(mt));
mt->ma_flags &= ~MT_FLAGS_USE_RCU;
} else {
mtree_lock(mt);
@@ -635,7 +635,7 @@ static inline void mt_set_in_rcu(struct maple_tree *mt)
return;

if (mt_external_lock(mt)) {
- BUG_ON(!mt_lock_is_held(mt));
+ WARN_ON(!mt_lock_is_held(mt));
mt->ma_flags |= MT_FLAGS_USE_RCU;
} else {
mtree_lock(mt);
--
2.39.2

2023-04-25 14:12:34

by Liam R. Howlett

Subject: [PATCH 07/34] maple_tree: Convert BUG_ON() to MT_BUG_ON()

From: "Liam R. Howlett" <[email protected]>

Use MT_BUG_ON() to get more information when running with
CONFIG_DEBUG_MAPLE_TREE enabled.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a4c880192333e..662a9ecccecbf 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -194,7 +194,7 @@ static void mas_set_height(struct ma_state *mas)
unsigned int new_flags = mas->tree->ma_flags;

new_flags &= ~MT_FLAGS_HEIGHT_MASK;
- BUG_ON(mas->depth > MAPLE_HEIGHT_MAX);
+ MT_BUG_ON(mas->tree, mas->depth > MAPLE_HEIGHT_MAX);
new_flags |= mas->depth << MT_FLAGS_HEIGHT_OFFSET;
mas->tree->ma_flags = new_flags;
}
--
2.39.2

2023-04-25 14:12:36

by Liam R. Howlett

Subject: [PATCH 09/34] maple_tree: Convert debug code to use MT_WARN_ON() and MAS_WARN_ON()

From: "Liam R. Howlett" <[email protected]>

Using MT_WARN_ON() allows for the removal of if statements before
logging. Using MAS_WARN_ON() will provide more information when issues
are encountered.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 662a9ecccecbf..d22a337e9cb6b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5699,9 +5699,9 @@ void *mas_store(struct ma_state *mas, void *entry)

trace_ma_write(__func__, mas, 0, entry);
#ifdef CONFIG_DEBUG_MAPLE_TREE
- if (mas->index > mas->last)
+ if (MAS_WARN_ON(mas, mas->index > mas->last))
pr_err("Error %lX > %lX %p\n", mas->index, mas->last, entry);
- MT_BUG_ON(mas->tree, mas->index > mas->last);
+
if (mas->index > mas->last) {
mas_set_err(mas, -EINVAL);
return NULL;
@@ -6530,10 +6530,9 @@ void *mt_find(struct maple_tree *mt, unsigned long *index, unsigned long max)
if (likely(entry)) {
*index = mas.last + 1;
#ifdef CONFIG_DEBUG_MAPLE_TREE
- if ((*index) && (*index) <= copy)
+ if (MT_WARN_ON(mt, (*index) && ((*index) <= copy)))
pr_err("index not increased! %lx <= %lx\n",
*index, copy);
- MT_BUG_ON(mt, (*index) && ((*index) <= copy));
#endif
}

@@ -6679,7 +6678,7 @@ static inline void *mas_first_entry(struct ma_state *mas, struct maple_node *mn,
max = mas->max;
mas->offset = 0;
while (likely(!ma_is_leaf(mt))) {
- MT_BUG_ON(mas->tree, mte_dead_node(mas->node));
+ MAS_WARN_ON(mas, mte_dead_node(mas->node));
slots = ma_slots(mn, mt);
entry = mas_slot(mas, slots, 0);
pivots = ma_pivots(mn, mt);
@@ -6690,7 +6689,7 @@ static inline void *mas_first_entry(struct ma_state *mas, struct maple_node *mn,
mn = mas_mn(mas);
mt = mte_node_type(mas->node);
}
- MT_BUG_ON(mas->tree, mte_dead_node(mas->node));
+ MAS_WARN_ON(mas, mte_dead_node(mas->node));

mas->max = max;
slots = ma_slots(mn, mt);
@@ -7134,18 +7133,18 @@ static void mas_validate_limits(struct ma_state *mas)
if (prev_piv > piv) {
pr_err("%p[%u] piv %lu < prev_piv %lu\n",
mas_mn(mas), i, piv, prev_piv);
- MT_BUG_ON(mas->tree, piv < prev_piv);
+ MAS_WARN_ON(mas, piv < prev_piv);
}

if (piv < mas->min) {
pr_err("%p[%u] %lu < %lu\n", mas_mn(mas), i,
piv, mas->min);
- MT_BUG_ON(mas->tree, piv < mas->min);
+ MAS_WARN_ON(mas, piv < mas->min);
}
if (piv > mas->max) {
pr_err("%p[%u] %lu > %lu\n", mas_mn(mas), i,
piv, mas->max);
- MT_BUG_ON(mas->tree, piv > mas->max);
+ MAS_WARN_ON(mas, piv > mas->max);
}
prev_piv = piv;
if (piv == mas->max)
@@ -7168,7 +7167,7 @@ static void mas_validate_limits(struct ma_state *mas)

pr_err("%p[%u] should not have piv %lu\n",
mas_mn(mas), i, piv);
- MT_BUG_ON(mas->tree, i < mt_pivots[type] - 1);
+ MAS_WARN_ON(mas, i < mt_pivots[type] - 1);
}
}
}
@@ -7227,16 +7226,15 @@ void mt_validate(struct maple_tree *mt)

mas_first_entry(&mas, mas_mn(&mas), ULONG_MAX, mte_node_type(mas.node));
while (!mas_is_none(&mas)) {
- MT_BUG_ON(mas.tree, mte_dead_node(mas.node));
+ MAS_WARN_ON(&mas, mte_dead_node(mas.node));
if (!mte_is_root(mas.node)) {
end = mas_data_end(&mas);
- if ((end < mt_min_slot_count(mas.node)) &&
- (mas.max != ULONG_MAX)) {
+ if (MAS_WARN_ON(&mas,
+ (end < mt_min_slot_count(mas.node)) &&
+ (mas.max != ULONG_MAX))) {
pr_err("Invalid size %u of %p\n", end,
- mas_mn(&mas));
- MT_BUG_ON(mas.tree, 1);
+ mas_mn(&mas));
}
-
}
mas_validate_parent_slot(&mas);
mas_validate_child_slot(&mas);
--
2.39.2

2023-04-25 14:12:52

by Liam R. Howlett

Subject: [PATCH 10/34] maple_tree: Use MAS_BUG_ON() when setting a leaf node as a parent

Use MAS_BUG_ON() to dump the maple state and tree in the unlikely
event of an issue.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d22a337e9cb6b..441592be039a2 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -453,7 +453,7 @@ enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
}

/*
- * mte_set_parent() - Set the parent node and encode the slot
+ * mas_set_parent() - Set the parent node and encode the slot
* @enode: The encoded maple node.
* @parent: The encoded maple node that is the parent of @enode.
* @slot: The slot that @enode resides in @parent.
@@ -462,16 +462,16 @@ enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
* parent type.
*/
static inline
-void mte_set_parent(struct maple_enode *enode, const struct maple_enode *parent,
- unsigned char slot)
+void mas_set_parent(struct ma_state *mas, struct maple_enode *enode,
+ const struct maple_enode *parent, unsigned char slot)
{
unsigned long val = (unsigned long)parent;
unsigned long shift;
unsigned long type;
enum maple_type p_type = mte_node_type(parent);

- BUG_ON(p_type == maple_dense);
- BUG_ON(p_type == maple_leaf_64);
+ MAS_BUG_ON(mas, p_type == maple_dense);
+ MAS_BUG_ON(mas, p_type == maple_leaf_64);

switch (p_type) {
case maple_range_64:
@@ -1741,7 +1741,7 @@ static inline void mas_adopt_children(struct ma_state *mas,
offset = ma_data_end(node, type, pivots, mas->max);
do {
child = mas_slot_locked(mas, slots, offset);
- mte_set_parent(child, parent, offset);
+ mas_set_parent(mas, child, parent, offset);
} while (offset--);
}

@@ -2706,9 +2706,9 @@ static inline void mas_set_split_parent(struct ma_state *mas,
return;

if ((*slot) <= split)
- mte_set_parent(mas->node, left, *slot);
+ mas_set_parent(mas, mas->node, left, *slot);
else if (right)
- mte_set_parent(mas->node, right, (*slot) - split - 1);
+ mas_set_parent(mas, mas->node, right, (*slot) - split - 1);

(*slot)++;
}
@@ -3105,12 +3105,12 @@ static int mas_spanning_rebalance(struct ma_state *mas,
mte_node_type(mast->orig_l->node));
mast->orig_l->depth++;
mab_mas_cp(mast->bn, 0, mt_slots[mast->bn->type] - 1, &l_mas, true);
- mte_set_parent(left, l_mas.node, slot);
+ mas_set_parent(mas, left, l_mas.node, slot);
if (middle)
- mte_set_parent(middle, l_mas.node, ++slot);
+ mas_set_parent(mas, middle, l_mas.node, ++slot);

if (right)
- mte_set_parent(right, l_mas.node, ++slot);
+ mas_set_parent(mas, right, l_mas.node, ++slot);

if (mas_is_root_limits(mast->l)) {
new_root:
@@ -3337,8 +3337,8 @@ static inline bool mas_split_final_node(struct maple_subtree_state *mast,
* The Big_node data should just fit in a single node.
*/
ancestor = mas_new_ma_node(mas, mast->bn);
- mte_set_parent(mast->l->node, ancestor, mast->l->offset);
- mte_set_parent(mast->r->node, ancestor, mast->r->offset);
+ mas_set_parent(mas, mast->l->node, ancestor, mast->l->offset);
+ mas_set_parent(mas, mast->r->node, ancestor, mast->r->offset);
mte_to_node(ancestor)->parent = mas_mn(mas)->parent;

mast->l->node = ancestor;
--
2.39.2

2023-04-25 14:12:56

by Liam R. Howlett

Subject: [PATCH 11/34] maple_tree: Use MAS_BUG_ON() in mas_set_height()

Use MAS_BUG_ON() instead of MT_BUG_ON() to get the maple state
information. In the unlikely event of a tree height of > 31, try to increase
the probability of useful information being logged.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 441592be039a2..f1ce3852712db 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -194,7 +194,7 @@ static void mas_set_height(struct ma_state *mas)
unsigned int new_flags = mas->tree->ma_flags;

new_flags &= ~MT_FLAGS_HEIGHT_MASK;
- MT_BUG_ON(mas->tree, mas->depth > MAPLE_HEIGHT_MAX);
+ MAS_BUG_ON(mas, mas->depth > MAPLE_HEIGHT_MAX);
new_flags |= mas->depth << MT_FLAGS_HEIGHT_OFFSET;
mas->tree->ma_flags = new_flags;
}
--
2.39.2

2023-04-25 14:12:56

by Liam R. Howlett

Subject: [PATCH 12/34] maple_tree: Use MAS_BUG_ON() from mas_topiary_range()

In the event that mas_topiary_range() is asked to remove data from a
leaf node, log the maple state.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index f1ce3852712db..b8b8e5d9ed7e5 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2346,7 +2346,8 @@ static inline void mas_topiary_range(struct ma_state *mas,
void __rcu **slots;
unsigned char offset;

- MT_BUG_ON(mas->tree, mte_is_leaf(mas->node));
+ MAS_BUG_ON(mas, mte_is_leaf(mas->node));
+
slots = ma_slots(mas_mn(mas), mte_node_type(mas->node));
for (offset = start; offset <= end; offset++) {
struct maple_enode *enode = mas_slot_locked(mas, slots, offset);
--
2.39.2

2023-04-25 14:13:13

by Liam R. Howlett

Subject: [PATCH 13/34] maple_tree: Use MAS_WR_BUG_ON() in mas_store_prealloc()

mas_store_prealloc() should never fail, but if it does due to internal
tree issues, then get as much debug information as possible prior to
crashing the kernel.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index b8b8e5d9ed7e5..28853ed23fe8a 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5762,7 +5762,7 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
mas_wr_store_setup(&wr_mas);
trace_ma_write(__func__, mas, 0, entry);
mas_wr_store_entry(&wr_mas);
- BUG_ON(mas_is_err(mas));
+ MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
mas_destroy(mas);
}
EXPORT_SYMBOL_GPL(mas_store_prealloc);
--
2.39.2

2023-04-25 14:13:14

by Liam R. Howlett

Subject: [PATCH 14/34] maple_tree: Use MAS_BUG_ON() prior to calling mas_meta_gap()

Replace the BUG_ON() call inside mas_meta_gap() with MAS_BUG_ON()
checks at the call sites to get more information on the error
condition.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 28853ed23fe8a..41873d935cfa3 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -963,8 +963,6 @@ static inline unsigned char ma_meta_end(struct maple_node *mn,
static inline unsigned char ma_meta_gap(struct maple_node *mn,
enum maple_type mt)
{
- BUG_ON(mt != maple_arange_64);
-
return mn->ma64.meta.gap;
}

@@ -1629,6 +1627,7 @@ static inline unsigned long mas_max_gap(struct ma_state *mas)
return mas_leaf_max_gap(mas);

node = mas_mn(mas);
+ MAS_BUG_ON(mas, mt != maple_arange_64);
offset = ma_meta_gap(node, mt);
if (offset == MAPLE_ARANGE64_META_MAX)
return 0;
@@ -1662,6 +1661,7 @@ static inline void mas_parent_gap(struct ma_state *mas, unsigned char offset,
pgaps = ma_gaps(pnode, pmt);

ascend:
+ MAS_BUG_ON(mas, pmt != maple_arange_64);
meta_offset = ma_meta_gap(pnode, pmt);
if (meta_offset == MAPLE_ARANGE64_META_MAX)
meta_gap = 0;
--
2.39.2

2023-04-25 14:13:27

by Liam R. Howlett

Subject: [PATCH 15/34] maple_tree: Return error on mte_pivots() out of range

Rename mte_pivots() to mas_pivots() and pass in the ma_state so that
the error code can be set to -EIO when the offset is out of range for
the node type. Change the WARN_ON() to MAS_WARN_ON() to log the maple
state.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 41873d935cfa3..89e30462f8b62 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -663,22 +663,22 @@ static inline unsigned long *ma_gaps(struct maple_node *node,
}

/*
- * mte_pivot() - Get the pivot at @piv of the maple encoded node.
- * @mn: The maple encoded node.
+ * mas_pivot() - Get the pivot at @piv of the maple encoded node.
+ * @mas: The maple state.
* @piv: The pivot.
*
* Return: the pivot at @piv of @mn.
*/
-static inline unsigned long mte_pivot(const struct maple_enode *mn,
- unsigned char piv)
+static inline unsigned long mas_pivot(struct ma_state *mas, unsigned char piv)
{
- struct maple_node *node = mte_to_node(mn);
- enum maple_type type = mte_node_type(mn);
+ struct maple_node *node = mas_mn(mas);
+ enum maple_type type = mte_node_type(mas->node);

- if (piv >= mt_pivots[type]) {
- WARN_ON(1);
+ if (MAS_WARN_ON(mas, piv >= mt_pivots[type])) {
+ mas_set_err(mas, -EIO);
return 0;
}
+
switch (type) {
case maple_arange_64:
return node->ma64.pivot[piv];
@@ -5400,8 +5400,8 @@ static inline int mas_alloc(struct ma_state *mas, void *entry,
return xa_err(mas->node);

if (!mas->index)
- return mte_pivot(mas->node, 0);
- return mte_pivot(mas->node, 1);
+ return mas_pivot(mas, 0);
+ return mas_pivot(mas, 1);
}

/* Must be walking a tree. */
@@ -5418,7 +5418,10 @@ static inline int mas_alloc(struct ma_state *mas, void *entry,
*/
min = mas->min;
if (mas->offset)
- min = mte_pivot(mas->node, mas->offset - 1) + 1;
+ min = mas_pivot(mas, mas->offset - 1) + 1;
+
+ if (mas_is_err(mas))
+ return xa_err(mas->node);

if (mas->index < min)
mas->index = min;
--
2.39.2

2023-04-25 14:13:29

by Liam R. Howlett

Subject: [PATCH 16/34] maple_tree: Make test code work without debug enabled

The test code is less useful without debug, but can still do general
validations. Define mt_dump(), mas_dump() and mas_wr_dump() as no-ops
when debug is not enabled, and document in the test module Kconfig help
text that more information can be obtained by also enabling the debug
config option.

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/Kconfig.debug | 10 +++++++---
lib/test_maple_tree.c | 9 ++++++---
tools/testing/radix-tree/maple.c | 1 -
3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5cd8183bb4c13..11736e17a62d8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2281,9 +2281,13 @@ config TEST_XARRAY
tristate "Test the XArray code at runtime"

config TEST_MAPLE_TREE
- depends on DEBUG_KERNEL
- select DEBUG_MAPLE_TREE
- tristate "Test the Maple Tree code at runtime"
+ tristate "Test the Maple Tree code at runtime or module load"
+ help
+ Enable this option to test the maple tree code functions at boot, or
+ when the module is loaded. Enable "Debug Maple Trees" will enable
+ more verbose output on failures.
+
+ If unsure, say N.

config TEST_RHASHTABLE
tristate "Perform selftest on resizable hash table"
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index d6929270dd36a..89383eedb70af 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -11,12 +11,15 @@
#include <linux/module.h>

#define MTREE_ALLOC_MAX 0x2000000000000Ul
-#ifndef CONFIG_DEBUG_MAPLE_TREE
-#define CONFIG_DEBUG_MAPLE_TREE
-#endif
#define CONFIG_MAPLE_SEARCH
#define MAPLE_32BIT (MAPLE_NODE_SLOTS > 31)

+#ifndef CONFIG_DEBUG_MAPLE_TREE
+#define mt_dump(mt, fmt) do {} while (0)
+#define mas_dump(mas) do {} while (0)
+#define mas_wr_dump(mas) do {} while (0)
+#endif
+
/* #define BENCH_SLOT_STORE */
/* #define BENCH_NODE_STORE */
/* #define BENCH_AWALK */
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index ebcb3faf85ea9..cf37ed9ab6c4d 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -22,7 +22,6 @@
#define dump_stack() assert(0)

#include "../../../lib/maple_tree.c"
-#undef CONFIG_DEBUG_MAPLE_TREE
#include "../../../lib/test_maple_tree.c"

#define RCU_RANGE_COUNT 1000
--
2.39.2

2023-04-25 14:13:47

by Liam R. Howlett

Subject: [PATCH 18/34] mm: Update vma_iter_store() to use MAS_WARN_ON()

MAS_WARN_ON() will provide more information on the maple state and can
be more useful for debugging. Use this version of WARN_ON() in the
debugging code when storing to the tree.

Signed-off-by: Liam R. Howlett <[email protected]>
---
mm/internal.h | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8d1a8bd001247..76612a860e58e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1047,18 +1047,17 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
{

#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
- if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.index > vma->vm_start)) {
- printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
- printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
- printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- vma_iter_dump_tree(vmi);
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ vmi->mas.index > vma->vm_start)) {
+ printk("%lx > %lx\n", vmi->mas.index, vma->vm_start);
+ printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
+ printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
}
- if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.last < vma->vm_start)) {
- printk("%lu < %lu\n", vmi->mas.last, vma->vm_start);
- printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
- printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree, mt_dump_hex);
- vma_iter_dump_tree(vmi);
+ if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
+ vmi->mas.last < vma->vm_start)) {
+ printk("%lx < %lx\n", vmi->mas.last, vma->vm_start);
+ printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
+ printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
}
#endif

--
2.39.2

2023-04-25 14:13:59

by Liam R. Howlett

Subject: [PATCH 17/34] mm: Update validate_mm() to use vma iterator

Use the vma iterator in the validation code and fold the maple tree
checks into the main validate_mm() function.

Introduce a new function vma_iter_dump_tree() to dump the maple tree in
hex layout.

Replace all calls to validate_mm_mt() with validate_mm().

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/mmdebug.h | 14 ++++++
mm/debug.c | 9 ++++
mm/internal.h | 3 +-
mm/mmap.c | 101 ++++++++++++++++------------------------
4 files changed, 66 insertions(+), 61 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index b8728d11c9490..7c3e7b0b0e8fd 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -8,10 +8,12 @@
struct page;
struct vm_area_struct;
struct mm_struct;
+struct vma_iterator;

void dump_page(struct page *page, const char *reason);
void dump_vma(const struct vm_area_struct *vma);
void dump_mm(const struct mm_struct *mm);
+void vma_iter_dump_tree(const struct vma_iterator *vmi);

#ifdef CONFIG_DEBUG_VM
#define VM_BUG_ON(cond) BUG_ON(cond)
@@ -74,6 +76,17 @@ void dump_mm(const struct mm_struct *mm);
} \
unlikely(__ret_warn_once); \
})
+#define VM_WARN_ON_ONCE_MM(cond, mm) ({ \
+ static bool __section(".data.once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+ dump_mm(mm); \
+ __warned = true; \
+ WARN_ON(1); \
+ } \
+ unlikely(__ret_warn_once); \
+})

#define VM_WARN_ON(cond) (void)WARN_ON(cond)
#define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
@@ -90,6 +103,7 @@ void dump_mm(const struct mm_struct *mm);
#define VM_WARN_ON_ONCE_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ON_ONCE_FOLIO(cond, folio) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_MM(cond, mm) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
#define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
#endif
diff --git a/mm/debug.c b/mm/debug.c
index c7b228097bd98..ee533a5ceb79d 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -268,4 +268,13 @@ void page_init_poison(struct page *page, size_t size)
if (page_init_poisoning)
memset(page, PAGE_POISON_PATTERN, size);
}
+
+void vma_iter_dump_tree(const struct vma_iterator *vmi)
+{
+#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
+ mas_dump(&vmi->mas);
+ mt_dump(vmi->mas.tree, mt_dump_hex);
+#endif /* CONFIG_DEBUG_VM_MAPLE_TREE */
+}
+
#endif /* CONFIG_DEBUG_VM */
diff --git a/mm/internal.h b/mm/internal.h
index 4c195920f5656..8d1a8bd001247 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1051,13 +1051,14 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
printk("%lu > %lu\n", vmi->mas.index, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
- mt_dump(vmi->mas.tree, mt_dump_hex);
+ vma_iter_dump_tree(vmi);
}
if (WARN_ON(vmi->mas.node != MAS_START && vmi->mas.last < vma->vm_start)) {
printk("%lu < %lu\n", vmi->mas.last, vma->vm_start);
printk("store of vma %lu-%lu", vma->vm_start, vma->vm_end);
printk("into slot %lu-%lu", vmi->mas.index, vmi->mas.last);
mt_dump(vmi->mas.tree, mt_dump_hex);
+ vma_iter_dump_tree(vmi);
}
#endif

diff --git a/mm/mmap.c b/mm/mmap.c
index 1554f90d497ef..d34a41791ddb2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -299,62 +299,44 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
return origbrk;
}

-#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
-extern void mt_validate(struct maple_tree *mt);
-extern void mt_dump(const struct maple_tree *mt, enum mt_dump_format fmt);
-
-/* Validate the maple tree */
-static void validate_mm_mt(struct mm_struct *mm)
-{
- struct maple_tree *mt = &mm->mm_mt;
- struct vm_area_struct *vma_mt;
-
- MA_STATE(mas, mt, 0, 0);
-
- mt_validate(&mm->mm_mt);
- mas_for_each(&mas, vma_mt, ULONG_MAX) {
- if ((vma_mt->vm_start != mas.index) ||
- (vma_mt->vm_end - 1 != mas.last)) {
- pr_emerg("issue in %s\n", current->comm);
- dump_stack();
- dump_vma(vma_mt);
- pr_emerg("mt piv: %p %lu - %lu\n", vma_mt,
- mas.index, mas.last);
- pr_emerg("mt vma: %p %lu - %lu\n", vma_mt,
- vma_mt->vm_start, vma_mt->vm_end);
-
- mt_dump(mas.tree, mt_dump_hex);
- if (vma_mt->vm_end != mas.last + 1) {
- pr_err("vma: %p vma_mt %lu-%lu\tmt %lu-%lu\n",
- mm, vma_mt->vm_start, vma_mt->vm_end,
- mas.index, mas.last);
- mt_dump(mas.tree, mt_dump_hex);
- }
- VM_BUG_ON_MM(vma_mt->vm_end != mas.last + 1, mm);
- if (vma_mt->vm_start != mas.index) {
- pr_err("vma: %p vma_mt %p %lu - %lu doesn't match\n",
- mm, vma_mt, vma_mt->vm_start, vma_mt->vm_end);
- mt_dump(mas.tree, mt_dump_hex);
- }
- VM_BUG_ON_MM(vma_mt->vm_start != mas.index, mm);
- }
- }
-}
-
+#if defined(CONFIG_DEBUG_VM)
static void validate_mm(struct mm_struct *mm)
{
int bug = 0;
int i = 0;
struct vm_area_struct *vma;
- MA_STATE(mas, &mm->mm_mt, 0, 0);
+ VMA_ITERATOR(vmi, mm, 0);

- validate_mm_mt(mm);
+#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
+ mt_validate(&mm->mm_mt);
+#endif

- mas_for_each(&mas, vma, ULONG_MAX) {
+ for_each_vma(vmi, vma) {
#ifdef CONFIG_DEBUG_VM_RB
struct anon_vma *anon_vma = vma->anon_vma;
struct anon_vma_chain *avc;
+#endif
+ unsigned long vmi_start, vmi_end;
+ bool warn = 0;
+
+ vmi_start = vma_iter_addr(&vmi);
+ vmi_end = vma_iter_end(&vmi);
+ if (VM_WARN_ON_ONCE_MM(vma->vm_end != vmi_end, mm))
+ warn = 1;
+
+ if (VM_WARN_ON_ONCE_MM(vma->vm_start != vmi_start, mm))
+ warn = 1;
+
+ if (warn) {
+ pr_emerg("issue in %s\n", current->comm);
+ dump_stack();
+ dump_vma(vma);
+ pr_emerg("tree range: %px start %lx end %lx\n", vma,
+ vmi_start, vmi_end - 1);
+ vma_iter_dump_tree(&vmi);
+ }

+#ifdef CONFIG_DEBUG_VM_RB
if (anon_vma) {
anon_vma_lock_read(anon_vma);
list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
@@ -365,16 +347,15 @@ static void validate_mm(struct mm_struct *mm)
i++;
}
if (i != mm->map_count) {
- pr_emerg("map_count %d mas_for_each %d\n", mm->map_count, i);
+ pr_emerg("map_count %d vma iterator %d\n", mm->map_count, i);
bug = 1;
}
VM_BUG_ON_MM(bug, mm);
}

-#else /* !CONFIG_DEBUG_VM_MAPLE_TREE */
-#define validate_mm_mt(root) do { } while (0)
+#else /* !CONFIG_DEBUG_VM */
#define validate_mm(mm) do { } while (0)
-#endif /* CONFIG_DEBUG_VM_MAPLE_TREE */
+#endif /* CONFIG_DEBUG_VM */

/*
* vma has some anon_vma assigned, and is already inserted on that
@@ -2234,7 +2215,7 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
struct vm_area_struct *new;
int err;

- validate_mm_mt(vma->vm_mm);
+ validate_mm(vma->vm_mm);

WARN_ON(vma->vm_start >= addr);
WARN_ON(vma->vm_end <= addr);
@@ -2292,7 +2273,7 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
/* Success. */
if (new_below)
vma_next(vmi);
- validate_mm_mt(vma->vm_mm);
+ validate_mm(vma->vm_mm);
return 0;

out_free_mpol:
@@ -2301,7 +2282,7 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
vma_iter_free(vmi);
out_free_vma:
vm_area_free(new);
- validate_mm_mt(vma->vm_mm);
+ validate_mm(vma->vm_mm);
return err;
}

@@ -2936,7 +2917,7 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,

arch_unmap(mm, start, end);
ret = do_vmi_align_munmap(vmi, vma, mm, start, end, uf, downgrade);
- validate_mm_mt(mm);
+ validate_mm(mm);
return ret;
}

@@ -2958,7 +2939,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
struct mm_struct *mm = current->mm;
struct vma_prepare vp;

- validate_mm_mt(mm);
+ validate_mm(mm);
/*
* Check against address space limits by the changed size
* Note: This happens *after* clearing old mappings in some code paths.
@@ -3199,7 +3180,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
bool faulted_in_anon_vma = true;
VMA_ITERATOR(vmi, mm, addr);

- validate_mm_mt(mm);
+ validate_mm(mm);
/*
* If anonymous vma has not yet been faulted, update new pgoff
* to match new location, to increase its chance of merging.
@@ -3258,7 +3239,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
goto out_vma_link;
*need_rmap_locks = false;
}
- validate_mm_mt(mm);
+ validate_mm(mm);
return new_vma;

out_vma_link:
@@ -3274,7 +3255,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
out_free_vma:
vm_area_free(new_vma);
out:
- validate_mm_mt(mm);
+ validate_mm(mm);
return NULL;
}

@@ -3411,7 +3392,7 @@ static struct vm_area_struct *__install_special_mapping(
int ret;
struct vm_area_struct *vma;

- validate_mm_mt(mm);
+ validate_mm(mm);
vma = vm_area_alloc(mm);
if (unlikely(vma == NULL))
return ERR_PTR(-ENOMEM);
@@ -3434,12 +3415,12 @@ static struct vm_area_struct *__install_special_mapping(

perf_event_mmap(vma);

- validate_mm_mt(mm);
+ validate_mm(mm);
return vma;

out:
vm_area_free(vma);
- validate_mm_mt(mm);
+ validate_mm(mm);
return ERR_PTR(ret);
}

--
2.39.2

2023-04-25 14:14:18

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 20/34] maple_tree: Remove unnecessary check from mas_destroy()

mas_destroy() currently checks if mas->node is MAS_START prior to calling
mas_start(), but this is unnecessary as mas_start() will do nothing if
the node is anything but MAS_START.
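
As a toy illustration of why the guard is redundant (hypothetical
stand-in names, not kernel code): the callee already checks the state
itself, so guarding the call at the call site buys nothing.

#include <stdio.h>

/* Stand-in for mas_start(): only acts when the state is "start". */
static void start(int *state)
{
        if (*state != 0)        /* 0 == "start" */
                return;
        *state = 1;             /* started */
}

int main(void)
{
        int state = 1;          /* already past "start" */

        /* Redundant:  if (state == 0) start(&state);  */
        start(&state);          /* the unconditional call is equivalent */
        printf("state = %d\n", state);
        return 0;
}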

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 89e30462f8b62..35c6e12ca9482 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5817,9 +5817,7 @@ void mas_destroy(struct ma_state *mas)
if (mas->mas_flags & MA_STATE_REBALANCE) {
unsigned char end;

- if (mas_is_start(mas))
- mas_start(mas);
-
+ mas_start(mas);
mtree_range_walk(mas);
end = mas_data_end(mas) + 1;
if (end < mt_min_slot_count(mas->node) - 1)
--
2.39.2

2023-04-25 14:14:20

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 19/34] maple_tree: Add __init and __exit to test module

The test functions are only needed while the module is initialising, so
mark them as __init; they are discarded once initialisation completes.
Add __exit to the module removal function. Some other variables have
been marked as static const as well.
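
As a sketch of the pattern being applied (hypothetical module names,
not part of the patch): functions only reachable from the init path are
__init and are discarded once initialisation completes; the removal
hook is __exit.

#include <linux/init.h>
#include <linux/module.h>

static int __init demo_check(void)
{
        return 0;       /* test helper, only called from the init path */
}

static int __init demo_init(void)
{
        return demo_check();
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");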

Suggested-by: Andrew Morton <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/test_maple_tree.c | 158 +++++++++++++-------------
tools/testing/radix-tree/linux/init.h | 1 +
tools/testing/radix-tree/maple.c | 147 ++++++++++++------------
3 files changed, 155 insertions(+), 151 deletions(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 89383eedb70af..ae08d34d1d3c4 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -33,54 +33,54 @@
#else
#define cond_resched() do {} while (0)
#endif
-static
-int mtree_insert_index(struct maple_tree *mt, unsigned long index, gfp_t gfp)
+static int __init mtree_insert_index(struct maple_tree *mt,
+ unsigned long index, gfp_t gfp)
{
return mtree_insert(mt, index, xa_mk_value(index & LONG_MAX), gfp);
}

-static void mtree_erase_index(struct maple_tree *mt, unsigned long index)
+static void __init mtree_erase_index(struct maple_tree *mt, unsigned long index)
{
MT_BUG_ON(mt, mtree_erase(mt, index) != xa_mk_value(index & LONG_MAX));
MT_BUG_ON(mt, mtree_load(mt, index) != NULL);
}

-static int mtree_test_insert(struct maple_tree *mt, unsigned long index,
+static int __init mtree_test_insert(struct maple_tree *mt, unsigned long index,
void *ptr)
{
return mtree_insert(mt, index, ptr, GFP_KERNEL);
}

-static int mtree_test_store_range(struct maple_tree *mt, unsigned long start,
- unsigned long end, void *ptr)
+static int __init mtree_test_store_range(struct maple_tree *mt,
+ unsigned long start, unsigned long end, void *ptr)
{
return mtree_store_range(mt, start, end, ptr, GFP_KERNEL);
}

-static int mtree_test_store(struct maple_tree *mt, unsigned long start,
+static int __init mtree_test_store(struct maple_tree *mt, unsigned long start,
void *ptr)
{
return mtree_test_store_range(mt, start, start, ptr);
}

-static int mtree_test_insert_range(struct maple_tree *mt, unsigned long start,
- unsigned long end, void *ptr)
+static int __init mtree_test_insert_range(struct maple_tree *mt,
+ unsigned long start, unsigned long end, void *ptr)
{
return mtree_insert_range(mt, start, end, ptr, GFP_KERNEL);
}

-static void *mtree_test_load(struct maple_tree *mt, unsigned long index)
+static void __init *mtree_test_load(struct maple_tree *mt, unsigned long index)
{
return mtree_load(mt, index);
}

-static void *mtree_test_erase(struct maple_tree *mt, unsigned long index)
+static void __init *mtree_test_erase(struct maple_tree *mt, unsigned long index)
{
return mtree_erase(mt, index);
}

#if defined(CONFIG_64BIT)
-static noinline void check_mtree_alloc_range(struct maple_tree *mt,
+static noinline void __init check_mtree_alloc_range(struct maple_tree *mt,
unsigned long start, unsigned long end, unsigned long size,
unsigned long expected, int eret, void *ptr)
{
@@ -97,7 +97,7 @@ static noinline void check_mtree_alloc_range(struct maple_tree *mt,
MT_BUG_ON(mt, result != expected);
}

-static noinline void check_mtree_alloc_rrange(struct maple_tree *mt,
+static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt,
unsigned long start, unsigned long end, unsigned long size,
unsigned long expected, int eret, void *ptr)
{
@@ -115,8 +115,8 @@ static noinline void check_mtree_alloc_rrange(struct maple_tree *mt,
}
#endif

-static noinline void check_load(struct maple_tree *mt, unsigned long index,
- void *ptr)
+static noinline void __init check_load(struct maple_tree *mt,
+ unsigned long index, void *ptr)
{
void *ret = mtree_test_load(mt, index);

@@ -125,7 +125,7 @@ static noinline void check_load(struct maple_tree *mt, unsigned long index,
MT_BUG_ON(mt, ret != ptr);
}

-static noinline void check_store_range(struct maple_tree *mt,
+static noinline void __init check_store_range(struct maple_tree *mt,
unsigned long start, unsigned long end, void *ptr, int expected)
{
int ret = -EINVAL;
@@ -141,7 +141,7 @@ static noinline void check_store_range(struct maple_tree *mt,
check_load(mt, i, ptr);
}

-static noinline void check_insert_range(struct maple_tree *mt,
+static noinline void __init check_insert_range(struct maple_tree *mt,
unsigned long start, unsigned long end, void *ptr, int expected)
{
int ret = -EINVAL;
@@ -157,8 +157,8 @@ static noinline void check_insert_range(struct maple_tree *mt,
check_load(mt, i, ptr);
}

-static noinline void check_insert(struct maple_tree *mt, unsigned long index,
- void *ptr)
+static noinline void __init check_insert(struct maple_tree *mt,
+ unsigned long index, void *ptr)
{
int ret = -EINVAL;

@@ -166,7 +166,7 @@ static noinline void check_insert(struct maple_tree *mt, unsigned long index,
MT_BUG_ON(mt, ret != 0);
}

-static noinline void check_dup_insert(struct maple_tree *mt,
+static noinline void __init check_dup_insert(struct maple_tree *mt,
unsigned long index, void *ptr)
{
int ret = -EINVAL;
@@ -176,13 +176,13 @@ static noinline void check_dup_insert(struct maple_tree *mt,
}


-static noinline
-void check_index_load(struct maple_tree *mt, unsigned long index)
+static noinline void __init check_index_load(struct maple_tree *mt,
+ unsigned long index)
{
return check_load(mt, index, xa_mk_value(index & LONG_MAX));
}

-static inline int not_empty(struct maple_node *node)
+static inline __init int not_empty(struct maple_node *node)
{
int i;

@@ -197,8 +197,8 @@ static inline int not_empty(struct maple_node *node)
}


-static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
- bool verbose)
+static noinline void __init check_rev_seq(struct maple_tree *mt,
+ unsigned long max, bool verbose)
{
unsigned long i = max, j;

@@ -230,7 +230,7 @@ static noinline void check_rev_seq(struct maple_tree *mt, unsigned long max,
#endif
}

-static noinline void check_seq(struct maple_tree *mt, unsigned long max,
+static noinline void __init check_seq(struct maple_tree *mt, unsigned long max,
bool verbose)
{
unsigned long i, j;
@@ -259,7 +259,7 @@ static noinline void check_seq(struct maple_tree *mt, unsigned long max,
#endif
}

-static noinline void check_lb_not_empty(struct maple_tree *mt)
+static noinline void __init check_lb_not_empty(struct maple_tree *mt)
{
unsigned long i, j;
unsigned long huge = 4000UL * 1000 * 1000;
@@ -278,13 +278,13 @@ static noinline void check_lb_not_empty(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_lower_bound_split(struct maple_tree *mt)
+static noinline void __init check_lower_bound_split(struct maple_tree *mt)
{
MT_BUG_ON(mt, !mtree_empty(mt));
check_lb_not_empty(mt);
}

-static noinline void check_upper_bound_split(struct maple_tree *mt)
+static noinline void __init check_upper_bound_split(struct maple_tree *mt)
{
unsigned long i, j;
unsigned long huge;
@@ -309,7 +309,7 @@ static noinline void check_upper_bound_split(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_mid_split(struct maple_tree *mt)
+static noinline void __init check_mid_split(struct maple_tree *mt)
{
unsigned long huge = 8000UL * 1000 * 1000;

@@ -318,7 +318,7 @@ static noinline void check_mid_split(struct maple_tree *mt)
check_lb_not_empty(mt);
}

-static noinline void check_rev_find(struct maple_tree *mt)
+static noinline void __init check_rev_find(struct maple_tree *mt)
{
int i, nr_entries = 200;
void *val;
@@ -357,7 +357,7 @@ static noinline void check_rev_find(struct maple_tree *mt)
rcu_read_unlock();
}

-static noinline void check_find(struct maple_tree *mt)
+static noinline void __init check_find(struct maple_tree *mt)
{
unsigned long val = 0;
unsigned long count;
@@ -574,7 +574,7 @@ static noinline void check_find(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_find_2(struct maple_tree *mt)
+static noinline void __init check_find_2(struct maple_tree *mt)
{
unsigned long i, j;
void *entry;
@@ -619,7 +619,7 @@ static noinline void check_find_2(struct maple_tree *mt)


#if defined(CONFIG_64BIT)
-static noinline void check_alloc_rev_range(struct maple_tree *mt)
+static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
{
/*
* Generated by:
@@ -627,7 +627,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
* awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}'
*/

- unsigned long range[] = {
+ static const unsigned long range[] = {
/* Inclusive , Exclusive. */
0x565234af2000, 0x565234af4000,
0x565234af4000, 0x565234af9000,
@@ -655,7 +655,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
0x7fff58791000, 0x7fff58793000,
};

- unsigned long holes[] = {
+ static const unsigned long holes[] = {
/*
* Note: start of hole is INCLUSIVE
* end of hole is EXCLUSIVE
@@ -675,7 +675,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
* 4. number that should be returned.
* 5. return value
*/
- unsigned long req_range[] = {
+ static const unsigned long req_range[] = {
0x565234af9000, /* Min */
0x7fff58791000, /* Max */
0x1000, /* Size */
@@ -786,7 +786,7 @@ static noinline void check_alloc_rev_range(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_alloc_range(struct maple_tree *mt)
+static noinline void __init check_alloc_range(struct maple_tree *mt)
{
/*
* Generated by:
@@ -794,7 +794,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
* awk -F "-" '{printf "0x%s, 0x%s, ", $1, $2}'
*/

- unsigned long range[] = {
+ static const unsigned long range[] = {
/* Inclusive , Exclusive. */
0x565234af2000, 0x565234af4000,
0x565234af4000, 0x565234af9000,
@@ -821,7 +821,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
0x7fff5878e000, 0x7fff58791000,
0x7fff58791000, 0x7fff58793000,
};
- unsigned long holes[] = {
+ static const unsigned long holes[] = {
/* Start of hole, end of hole, size of hole (+1) */
0x565234afb000, 0x565234afc000, 0x1000,
0x565234afe000, 0x565235def000, 0x12F1000,
@@ -836,7 +836,7 @@ static noinline void check_alloc_range(struct maple_tree *mt)
* 4. number that should be returned.
* 5. return value
*/
- unsigned long req_range[] = {
+ static const unsigned long req_range[] = {
0x565234af9000, /* Min */
0x7fff58791000, /* Max */
0x1000, /* Size */
@@ -945,10 +945,10 @@ static noinline void check_alloc_range(struct maple_tree *mt)
}
#endif

-static noinline void check_ranges(struct maple_tree *mt)
+static noinline void __init check_ranges(struct maple_tree *mt)
{
int i, val, val2;
- unsigned long r[] = {
+ static const unsigned long r[] = {
10, 15,
20, 25,
17, 22, /* Overlaps previous range. */
@@ -1213,7 +1213,7 @@ static noinline void check_ranges(struct maple_tree *mt)
MT_BUG_ON(mt, mt_height(mt) != 4);
}

-static noinline void check_next_entry(struct maple_tree *mt)
+static noinline void __init check_next_entry(struct maple_tree *mt)
{
void *entry = NULL;
unsigned long limit = 30, i = 0;
@@ -1237,7 +1237,7 @@ static noinline void check_next_entry(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_prev_entry(struct maple_tree *mt)
+static noinline void __init check_prev_entry(struct maple_tree *mt)
{
unsigned long index = 16;
void *value;
@@ -1281,7 +1281,7 @@ static noinline void check_prev_entry(struct maple_tree *mt)
mas_unlock(&mas);
}

-static noinline void check_root_expand(struct maple_tree *mt)
+static noinline void __init check_root_expand(struct maple_tree *mt)
{
MA_STATE(mas, mt, 0, 0);
void *ptr;
@@ -1370,13 +1370,13 @@ static noinline void check_root_expand(struct maple_tree *mt)
mas_unlock(&mas);
}

-static noinline void check_gap_combining(struct maple_tree *mt)
+static noinline void __init check_gap_combining(struct maple_tree *mt)
{
struct maple_enode *mn1, *mn2;
void *entry;
unsigned long singletons = 100;
- unsigned long *seq100;
- unsigned long seq100_64[] = {
+ static const unsigned long *seq100;
+ static const unsigned long seq100_64[] = {
/* 0-5 */
74, 75, 76,
50, 100, 2,
@@ -1390,7 +1390,7 @@ static noinline void check_gap_combining(struct maple_tree *mt)
76, 2, 79, 85, 4,
};

- unsigned long seq100_32[] = {
+ static const unsigned long seq100_32[] = {
/* 0-5 */
61, 62, 63,
50, 100, 2,
@@ -1404,11 +1404,11 @@ static noinline void check_gap_combining(struct maple_tree *mt)
76, 2, 79, 85, 4,
};

- unsigned long seq2000[] = {
+ static const unsigned long seq2000[] = {
1152, 1151,
1100, 1200, 2,
};
- unsigned long seq400[] = {
+ static const unsigned long seq400[] = {
286, 318,
256, 260, 266, 270, 275, 280, 290, 398,
286, 310,
@@ -1567,7 +1567,7 @@ static noinline void check_gap_combining(struct maple_tree *mt)
mt_set_non_kernel(0);
mtree_destroy(mt);
}
-static noinline void check_node_overwrite(struct maple_tree *mt)
+static noinline void __init check_node_overwrite(struct maple_tree *mt)
{
int i, max = 4000;

@@ -1580,7 +1580,7 @@ static noinline void check_node_overwrite(struct maple_tree *mt)
}

#if defined(BENCH_SLOT_STORE)
-static noinline void bench_slot_store(struct maple_tree *mt)
+static noinline void __init bench_slot_store(struct maple_tree *mt)
{
int i, brk = 105, max = 1040, brk_start = 100, count = 20000000;

@@ -1596,7 +1596,7 @@ static noinline void bench_slot_store(struct maple_tree *mt)
#endif

#if defined(BENCH_NODE_STORE)
-static noinline void bench_node_store(struct maple_tree *mt)
+static noinline void __init bench_node_store(struct maple_tree *mt)
{
int i, overwrite = 76, max = 240, count = 20000000;

@@ -1615,7 +1615,7 @@ static noinline void bench_node_store(struct maple_tree *mt)
#endif

#if defined(BENCH_AWALK)
-static noinline void bench_awalk(struct maple_tree *mt)
+static noinline void __init bench_awalk(struct maple_tree *mt)
{
int i, max = 2500, count = 50000000;
MA_STATE(mas, mt, 1470, 1470);
@@ -1632,7 +1632,7 @@ static noinline void bench_awalk(struct maple_tree *mt)
}
#endif
#if defined(BENCH_WALK)
-static noinline void bench_walk(struct maple_tree *mt)
+static noinline void __init bench_walk(struct maple_tree *mt)
{
int i, max = 2500, count = 550000000;
MA_STATE(mas, mt, 1470, 1470);
@@ -1649,7 +1649,7 @@ static noinline void bench_walk(struct maple_tree *mt)
#endif

#if defined(BENCH_MT_FOR_EACH)
-static noinline void bench_mt_for_each(struct maple_tree *mt)
+static noinline void __init bench_mt_for_each(struct maple_tree *mt)
{
int i, count = 1000000;
unsigned long max = 2500, index = 0;
@@ -1673,7 +1673,7 @@ static noinline void bench_mt_for_each(struct maple_tree *mt)
#endif

/* check_forking - simulate the kernel forking sequence with the tree. */
-static noinline void check_forking(struct maple_tree *mt)
+static noinline void __init check_forking(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1712,7 +1712,7 @@ static noinline void check_forking(struct maple_tree *mt)
mtree_destroy(&newmt);
}

-static noinline void check_iteration(struct maple_tree *mt)
+static noinline void __init check_iteration(struct maple_tree *mt)
{
int i, nr_entries = 125;
void *val;
@@ -1780,7 +1780,7 @@ static noinline void check_iteration(struct maple_tree *mt)
mt_set_non_kernel(0);
}

-static noinline void check_mas_store_gfp(struct maple_tree *mt)
+static noinline void __init check_mas_store_gfp(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1813,7 +1813,7 @@ static noinline void check_mas_store_gfp(struct maple_tree *mt)
}

#if defined(BENCH_FORK)
-static noinline void bench_forking(struct maple_tree *mt)
+static noinline void __init bench_forking(struct maple_tree *mt)
{

struct maple_tree newmt;
@@ -1855,15 +1855,17 @@ static noinline void bench_forking(struct maple_tree *mt)
}
#endif

-static noinline void next_prev_test(struct maple_tree *mt)
+static noinline void __init next_prev_test(struct maple_tree *mt)
{
int i, nr_entries;
void *val;
MA_STATE(mas, mt, 0, 0);
struct maple_enode *mn;
- unsigned long *level2;
- unsigned long level2_64[] = {707, 1000, 710, 715, 720, 725};
- unsigned long level2_32[] = {1747, 2000, 1750, 1755, 1760, 1765};
+ static const unsigned long *level2;
+ static const unsigned long level2_64[] = { 707, 1000, 710, 715, 720,
+ 725};
+ static const unsigned long level2_32[] = { 1747, 2000, 1750, 1755,
+ 1760, 1765};

if (MAPLE_32BIT) {
nr_entries = 500;
@@ -2031,7 +2033,7 @@ static noinline void next_prev_test(struct maple_tree *mt)


/* Test spanning writes that require balancing right sibling or right cousin */
-static noinline void check_spanning_relatives(struct maple_tree *mt)
+static noinline void __init check_spanning_relatives(struct maple_tree *mt)
{

unsigned long i, nr_entries = 1000;
@@ -2044,7 +2046,7 @@ static noinline void check_spanning_relatives(struct maple_tree *mt)
mtree_store_range(mt, 9365, 9955, NULL, GFP_KERNEL);
}

-static noinline void check_fuzzer(struct maple_tree *mt)
+static noinline void __init check_fuzzer(struct maple_tree *mt)
{
/*
* 1. Causes a spanning rebalance of a single root node.
@@ -2441,7 +2443,7 @@ static noinline void check_fuzzer(struct maple_tree *mt)
}

/* duplicate the tree with a specific gap */
-static noinline void check_dup_gaps(struct maple_tree *mt,
+static noinline void __init check_dup_gaps(struct maple_tree *mt,
unsigned long nr_entries, bool zero_start,
unsigned long gap)
{
@@ -2481,7 +2483,7 @@ static noinline void check_dup_gaps(struct maple_tree *mt,
}

/* Duplicate many sizes of trees. Mainly to test expected entry values */
-static noinline void check_dup(struct maple_tree *mt)
+static noinline void __init check_dup(struct maple_tree *mt)
{
int i;
int big_start = 100010;
@@ -2569,7 +2571,7 @@ static noinline void check_dup(struct maple_tree *mt)
}
}

-static noinline void check_bnode_min_spanning(struct maple_tree *mt)
+static noinline void __init check_bnode_min_spanning(struct maple_tree *mt)
{
int i = 50;
MA_STATE(mas, mt, 0, 0);
@@ -2588,7 +2590,7 @@ static noinline void check_bnode_min_spanning(struct maple_tree *mt)
mt_set_non_kernel(0);
}

-static noinline void check_empty_area_window(struct maple_tree *mt)
+static noinline void __init check_empty_area_window(struct maple_tree *mt)
{
unsigned long i, nr_entries = 20;
MA_STATE(mas, mt, 0, 0);
@@ -2673,7 +2675,7 @@ static noinline void check_empty_area_window(struct maple_tree *mt)
rcu_read_unlock();
}

-static noinline void check_empty_area_fill(struct maple_tree *mt)
+static noinline void __init check_empty_area_fill(struct maple_tree *mt)
{
const unsigned long max = 0x25D78000;
unsigned long size;
@@ -2717,11 +2719,11 @@ static noinline void check_empty_area_fill(struct maple_tree *mt)
}

static DEFINE_MTREE(tree);
-static int maple_tree_seed(void)
+static int __init maple_tree_seed(void)
{
- unsigned long set[] = {5015, 5014, 5017, 25, 1000,
- 1001, 1002, 1003, 1005, 0,
- 5003, 5002};
+ unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
+ 1001, 1002, 1003, 1005, 0,
+ 5003, 5002};
void *ptr = &set;

pr_info("\nTEST STARTING\n\n");
@@ -2991,7 +2993,7 @@ static int maple_tree_seed(void)
return -EINVAL;
}

-static void maple_tree_harvest(void)
+static void __exit maple_tree_harvest(void)
{

}
diff --git a/tools/testing/radix-tree/linux/init.h b/tools/testing/radix-tree/linux/init.h
index 1bb0afc213099..81563c3dfce79 100644
--- a/tools/testing/radix-tree/linux/init.h
+++ b/tools/testing/radix-tree/linux/init.h
@@ -1 +1,2 @@
#define __init
+#define __exit
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index cf37ed9ab6c4d..03539d86cdf0f 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -14,6 +14,7 @@
#include "test.h"
#include <stdlib.h>
#include <time.h>
+#include "linux/init.h"

#define module_init(x)
#define module_exit(x)
@@ -80,7 +81,7 @@ static void check_mas_alloc_node_count(struct ma_state *mas)
* check_new_node() - Check the creation of new nodes and error path
* verification.
*/
-static noinline void check_new_node(struct maple_tree *mt)
+static noinline void __init check_new_node(struct maple_tree *mt)
{

struct maple_node *mn, *mn2, *mn3;
@@ -454,7 +455,7 @@ static noinline void check_new_node(struct maple_tree *mt)
/*
* Check erasing including RCU.
*/
-static noinline void check_erase(struct maple_tree *mt, unsigned long index,
+static noinline void __init check_erase(struct maple_tree *mt, unsigned long index,
void *ptr)
{
MT_BUG_ON(mt, mtree_test_erase(mt, index) != ptr);
@@ -464,24 +465,24 @@ static noinline void check_erase(struct maple_tree *mt, unsigned long index,
#define erase_check_insert(mt, i) check_insert(mt, set[i], entry[i%2])
#define erase_check_erase(mt, i) check_erase(mt, set[i], entry[i%2])

-static noinline void check_erase_testset(struct maple_tree *mt)
+static noinline void __init check_erase_testset(struct maple_tree *mt)
{
- unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
- 1001, 1002, 1003, 1005, 0,
- 6003, 6002, 6008, 6012, 6015,
- 7003, 7002, 7008, 7012, 7015,
- 8003, 8002, 8008, 8012, 8015,
- 9003, 9002, 9008, 9012, 9015,
- 10003, 10002, 10008, 10012, 10015,
- 11003, 11002, 11008, 11012, 11015,
- 12003, 12002, 12008, 12012, 12015,
- 13003, 13002, 13008, 13012, 13015,
- 14003, 14002, 14008, 14012, 14015,
- 15003, 15002, 15008, 15012, 15015,
- };
-
-
- void *ptr = &set;
+ static const unsigned long set[] = { 5015, 5014, 5017, 25, 1000,
+ 1001, 1002, 1003, 1005, 0,
+ 6003, 6002, 6008, 6012, 6015,
+ 7003, 7002, 7008, 7012, 7015,
+ 8003, 8002, 8008, 8012, 8015,
+ 9003, 9002, 9008, 9012, 9015,
+ 10003, 10002, 10008, 10012, 10015,
+ 11003, 11002, 11008, 11012, 11015,
+ 12003, 12002, 12008, 12012, 12015,
+ 13003, 13002, 13008, 13012, 13015,
+ 14003, 14002, 14008, 14012, 14015,
+ 15003, 15002, 15008, 15012, 15015,
+ };
+
+
+ void *ptr = &check_erase_testset;
void *entry[2] = { ptr, mt };
void *root_node;

@@ -738,7 +739,7 @@ static noinline void check_erase_testset(struct maple_tree *mt)
int mas_ce2_over_count(struct ma_state *mas_start, struct ma_state *mas_end,
void *s_entry, unsigned long s_min,
void *e_entry, unsigned long e_max,
- unsigned long *set, int i, bool null_entry)
+ const unsigned long *set, int i, bool null_entry)
{
int count = 0, span = 0;
unsigned long retry = 0;
@@ -968,8 +969,8 @@ static inline void *mas_range_load(struct ma_state *mas,
}

#if defined(CONFIG_64BIT)
-static noinline void check_erase2_testset(struct maple_tree *mt,
- unsigned long *set, unsigned long size)
+static noinline void __init check_erase2_testset(struct maple_tree *mt,
+ const unsigned long *set, unsigned long size)
{
int entry_count = 0;
int check = 0;
@@ -1113,11 +1114,11 @@ static noinline void check_erase2_testset(struct maple_tree *mt,


/* These tests were pulled from KVM tree modifications which failed. */
-static noinline void check_erase2_sets(struct maple_tree *mt)
+static noinline void __init check_erase2_sets(struct maple_tree *mt)
{
void *entry;
unsigned long start = 0;
- unsigned long set[] = {
+ static const unsigned long set[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140721266458624, 140737488351231,
ERASE, 140721266458624, 140737488351231,
@@ -1135,7 +1136,7 @@ ERASE, 140253902692352, 140253902864383,
STORE, 140253902692352, 140253902696447,
STORE, 140253902696448, 140253902864383,
};
- unsigned long set2[] = {
+ static const unsigned long set2[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735933583360, 140737488351231,
ERASE, 140735933583360, 140737488351231,
@@ -1159,7 +1160,7 @@ STORE, 140277094813696, 140277094821887,
STORE, 140277094821888, 140277094825983,
STORE, 140735933906944, 140735933911039,
};
- unsigned long set3[] = {
+ static const unsigned long set3[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735790264320, 140737488351231,
ERASE, 140735790264320, 140737488351231,
@@ -1202,7 +1203,7 @@ STORE, 47135835840512, 47135835885567,
STORE, 47135835885568, 47135835893759,
};

- unsigned long set4[] = {
+ static const unsigned long set4[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140728251703296, 140737488351231,
ERASE, 140728251703296, 140737488351231,
@@ -1223,7 +1224,7 @@ ERASE, 47646523277312, 47646523445247,
STORE, 47646523277312, 47646523400191,
};

- unsigned long set5[] = {
+ static const unsigned long set5[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140726874062848, 140737488351231,
ERASE, 140726874062848, 140737488351231,
@@ -1356,7 +1357,7 @@ STORE, 47884791619584, 47884791623679,
STORE, 47884791623680, 47884791627775,
};

- unsigned long set6[] = {
+ static const unsigned long set6[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722999021568, 140737488351231,
ERASE, 140722999021568, 140737488351231,
@@ -1488,7 +1489,7 @@ ERASE, 47430432014336, 47430432022527,
STORE, 47430432014336, 47430432018431,
STORE, 47430432018432, 47430432022527,
};
- unsigned long set7[] = {
+ static const unsigned long set7[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140729808330752, 140737488351231,
ERASE, 140729808330752, 140737488351231,
@@ -1620,7 +1621,7 @@ ERASE, 47439987130368, 47439987138559,
STORE, 47439987130368, 47439987134463,
STORE, 47439987134464, 47439987138559,
};
- unsigned long set8[] = {
+ static const unsigned long set8[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722482974720, 140737488351231,
ERASE, 140722482974720, 140737488351231,
@@ -1753,7 +1754,7 @@ STORE, 47708488638464, 47708488642559,
STORE, 47708488642560, 47708488646655,
};

- unsigned long set9[] = {
+ static const unsigned long set9[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736427839488, 140737488351231,
ERASE, 140736427839488, 140736427839488,
@@ -5619,7 +5620,7 @@ ERASE, 47906195480576, 47906195480576,
STORE, 94641242615808, 94641242750975,
};

- unsigned long set10[] = {
+ static const unsigned long set10[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736427839488, 140737488351231,
ERASE, 140736427839488, 140736427839488,
@@ -9483,7 +9484,7 @@ STORE, 139726599680000, 139726599684095,
ERASE, 47906195480576, 47906195480576,
STORE, 94641242615808, 94641242750975,
};
- unsigned long set11[] = {
+ static const unsigned long set11[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140732658499584, 140737488351231,
ERASE, 140732658499584, 140732658499584,
@@ -9509,7 +9510,7 @@ STORE, 140732658565120, 140732658569215,
STORE, 140732658552832, 140732658565119,
};

- unsigned long set12[] = { /* contains 12 values. */
+ static const unsigned long set12[] = { /* contains 12 values. */
STORE, 140737488347136, 140737488351231,
STORE, 140732658499584, 140737488351231,
ERASE, 140732658499584, 140732658499584,
@@ -9536,7 +9537,7 @@ STORE, 140732658552832, 140732658565119,
STORE, 140014592741375, 140014592741375, /* contrived */
STORE, 140014592733184, 140014592741376, /* creates first entry retry. */
};
- unsigned long set13[] = {
+ static const unsigned long set13[] = {
STORE, 140373516247040, 140373516251135,/*: ffffa2e7b0e10d80 */
STORE, 140373516251136, 140373516255231,/*: ffffa2e7b1195d80 */
STORE, 140373516255232, 140373516443647,/*: ffffa2e7b0e109c0 */
@@ -9549,7 +9550,7 @@ STORE, 140373518684160, 140373518688254,/*: ffffa2e7b05fec00 */
STORE, 140373518688256, 140373518692351,/*: ffffa2e7bfbdcd80 */
STORE, 140373518692352, 140373518696447,/*: ffffa2e7b0749e40 */
};
- unsigned long set14[] = {
+ static const unsigned long set14[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731667996672, 140737488351231,
SNULL, 140731668000767, 140737488351231,
@@ -9833,7 +9834,7 @@ SNULL, 139826136543232, 139826136809471,
STORE, 139826136809472, 139826136842239,
STORE, 139826136543232, 139826136809471,
};
- unsigned long set15[] = {
+ static const unsigned long set15[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722061451264, 140737488351231,
SNULL, 140722061455359, 140737488351231,
@@ -10118,7 +10119,7 @@ STORE, 139906808958976, 139906808991743,
STORE, 139906808692736, 139906808958975,
};

- unsigned long set16[] = {
+ static const unsigned long set16[] = {
STORE, 94174808662016, 94174809321471,
STORE, 94174811414528, 94174811426815,
STORE, 94174811426816, 94174811430911,
@@ -10329,7 +10330,7 @@ STORE, 139921865613312, 139921865617407,
STORE, 139921865547776, 139921865564159,
};

- unsigned long set17[] = {
+ static const unsigned long set17[] = {
STORE, 94397057224704, 94397057646591,
STORE, 94397057650688, 94397057691647,
STORE, 94397057691648, 94397057695743,
@@ -10391,7 +10392,7 @@ STORE, 140720477511680, 140720477646847,
STORE, 140720478302208, 140720478314495,
STORE, 140720478314496, 140720478318591,
};
- unsigned long set18[] = {
+ static const unsigned long set18[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140724953673728, 140737488351231,
SNULL, 140724953677823, 140737488351231,
@@ -10424,7 +10425,7 @@ STORE, 140222970597376, 140222970605567,
ERASE, 140222970597376, 140222970605567,
STORE, 140222970597376, 140222970605567,
};
- unsigned long set19[] = {
+ static const unsigned long set19[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140725182459904, 140737488351231,
SNULL, 140725182463999, 140737488351231,
@@ -10693,7 +10694,7 @@ STORE, 140656836775936, 140656836780031,
STORE, 140656787476480, 140656791920639,
ERASE, 140656774639616, 140656779083775,
};
- unsigned long set20[] = {
+ static const unsigned long set20[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735952392192, 140737488351231,
SNULL, 140735952396287, 140737488351231,
@@ -10849,7 +10850,7 @@ STORE, 140590386819072, 140590386823167,
STORE, 140590386823168, 140590386827263,
SNULL, 140590376591359, 140590376595455,
};
- unsigned long set21[] = {
+ static const unsigned long set21[] = {
STORE, 93874710941696, 93874711363583,
STORE, 93874711367680, 93874711408639,
STORE, 93874711408640, 93874711412735,
@@ -10919,7 +10920,7 @@ ERASE, 140708393312256, 140708393316351,
ERASE, 140708393308160, 140708393312255,
ERASE, 140708393291776, 140708393308159,
};
- unsigned long set22[] = {
+ static const unsigned long set22[] = {
STORE, 93951397134336, 93951397183487,
STORE, 93951397183488, 93951397728255,
STORE, 93951397728256, 93951397826559,
@@ -11046,7 +11047,7 @@ STORE, 140551361253376, 140551361519615,
ERASE, 140551361253376, 140551361519615,
};

- unsigned long set23[] = {
+ static const unsigned long set23[] = {
STORE, 94014447943680, 94014448156671,
STORE, 94014450253824, 94014450257919,
STORE, 94014450257920, 94014450266111,
@@ -14370,7 +14371,7 @@ SNULL, 140175956627455, 140175985139711,
STORE, 140175927242752, 140175956627455,
STORE, 140175956627456, 140175985139711,
};
- unsigned long set24[] = {
+ static const unsigned long set24[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140735281639424, 140737488351231,
SNULL, 140735281643519, 140737488351231,
@@ -15532,7 +15533,7 @@ ERASE, 139635393024000, 139635401412607,
ERASE, 139635384627200, 139635384631295,
ERASE, 139635384631296, 139635393019903,
};
- unsigned long set25[] = {
+ static const unsigned long set25[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140737488343040, 140737488351231,
STORE, 140722547441664, 140737488351231,
@@ -22320,7 +22321,7 @@ STORE, 140249652703232, 140249682087935,
STORE, 140249682087936, 140249710600191,
};

- unsigned long set26[] = {
+ static const unsigned long set26[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140729464770560, 140737488351231,
SNULL, 140729464774655, 140737488351231,
@@ -22344,7 +22345,7 @@ ERASE, 140109040951296, 140109040959487,
STORE, 140109040955392, 140109040959487,
ERASE, 140109040955392, 140109040959487,
};
- unsigned long set27[] = {
+ static const unsigned long set27[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140726128070656, 140737488351231,
SNULL, 140726128074751, 140737488351231,
@@ -22740,7 +22741,7 @@ STORE, 140415509696512, 140415535910911,
ERASE, 140415537422336, 140415562588159,
STORE, 140415482433536, 140415509696511,
};
- unsigned long set28[] = {
+ static const unsigned long set28[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722475622400, 140737488351231,
SNULL, 140722475626495, 140737488351231,
@@ -22808,7 +22809,7 @@ STORE, 139918413348864, 139918413352959,
ERASE, 139918413316096, 139918413344767,
STORE, 93865848528896, 93865848664063,
};
- unsigned long set29[] = {
+ static const unsigned long set29[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734467944448, 140737488351231,
SNULL, 140734467948543, 140737488351231,
@@ -23683,7 +23684,7 @@ ERASE, 140143079972864, 140143088361471,
ERASE, 140143205793792, 140143205797887,
ERASE, 140143205797888, 140143214186495,
};
- unsigned long set30[] = {
+ static const unsigned long set30[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140733436743680, 140737488351231,
SNULL, 140733436747775, 140737488351231,
@@ -24565,7 +24566,7 @@ ERASE, 140165225893888, 140165225897983,
ERASE, 140165225897984, 140165234286591,
ERASE, 140165058105344, 140165058109439,
};
- unsigned long set31[] = {
+ static const unsigned long set31[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730890784768, 140737488351231,
SNULL, 140730890788863, 140737488351231,
@@ -25378,7 +25379,7 @@ ERASE, 140623906590720, 140623914979327,
ERASE, 140622950277120, 140622950281215,
ERASE, 140622950281216, 140622958669823,
};
- unsigned long set32[] = {
+ static const unsigned long set32[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731244212224, 140737488351231,
SNULL, 140731244216319, 140737488351231,
@@ -26174,7 +26175,7 @@ ERASE, 140400417288192, 140400425676799,
ERASE, 140400283066368, 140400283070463,
ERASE, 140400283070464, 140400291459071,
};
- unsigned long set33[] = {
+ static const unsigned long set33[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734562918400, 140737488351231,
SNULL, 140734562922495, 140737488351231,
@@ -26316,7 +26317,7 @@ STORE, 140582961786880, 140583003750399,
ERASE, 140582961786880, 140583003750399,
};

- unsigned long set34[] = {
+ static const unsigned long set34[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140731327180800, 140737488351231,
SNULL, 140731327184895, 140737488351231,
@@ -27197,7 +27198,7 @@ ERASE, 140012522094592, 140012530483199,
ERASE, 140012033142784, 140012033146879,
ERASE, 140012033146880, 140012041535487,
};
- unsigned long set35[] = {
+ static const unsigned long set35[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730536939520, 140737488351231,
SNULL, 140730536943615, 140737488351231,
@@ -27954,7 +27955,7 @@ ERASE, 140474471936000, 140474480324607,
ERASE, 140474396430336, 140474396434431,
ERASE, 140474396434432, 140474404823039,
};
- unsigned long set36[] = {
+ static const unsigned long set36[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140723893125120, 140737488351231,
SNULL, 140723893129215, 140737488351231,
@@ -28815,7 +28816,7 @@ ERASE, 140121890357248, 140121898745855,
ERASE, 140121269587968, 140121269592063,
ERASE, 140121269592064, 140121277980671,
};
- unsigned long set37[] = {
+ static const unsigned long set37[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140722404016128, 140737488351231,
SNULL, 140722404020223, 140737488351231,
@@ -28941,7 +28942,7 @@ STORE, 139759821246464, 139759888355327,
ERASE, 139759821246464, 139759888355327,
ERASE, 139759888355328, 139759955464191,
};
- unsigned long set38[] = {
+ static const unsigned long set38[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140730666221568, 140737488351231,
SNULL, 140730666225663, 140737488351231,
@@ -29751,7 +29752,7 @@ ERASE, 140613504712704, 140613504716799,
ERASE, 140613504716800, 140613513105407,
};

- unsigned long set39[] = {
+ static const unsigned long set39[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140736271417344, 140737488351231,
SNULL, 140736271421439, 140737488351231,
@@ -30123,7 +30124,7 @@ STORE, 140325364428800, 140325372821503,
STORE, 140325356036096, 140325364428799,
SNULL, 140325364432895, 140325372821503,
};
- unsigned long set40[] = {
+ static const unsigned long set40[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734309167104, 140737488351231,
SNULL, 140734309171199, 140737488351231,
@@ -30874,7 +30875,7 @@ ERASE, 140320289300480, 140320289304575,
ERASE, 140320289304576, 140320297693183,
ERASE, 140320163409920, 140320163414015,
};
- unsigned long set41[] = {
+ static const unsigned long set41[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140728157171712, 140737488351231,
SNULL, 140728157175807, 140737488351231,
@@ -31184,7 +31185,7 @@ STORE, 94376135090176, 94376135094271,
STORE, 94376135094272, 94376135098367,
SNULL, 94376135094272, 94377208836095,
};
- unsigned long set42[] = {
+ static const unsigned long set42[] = {
STORE, 314572800, 1388314623,
STORE, 1462157312, 1462169599,
STORE, 1462169600, 1462185983,
@@ -33861,7 +33862,7 @@ SNULL, 3798999040, 3799101439,
*/
};

- unsigned long set43[] = {
+ static const unsigned long set43[] = {
STORE, 140737488347136, 140737488351231,
STORE, 140734187720704, 140737488351231,
SNULL, 140734187724800, 140737488351231,
@@ -34995,7 +34996,7 @@ void run_check_rcu_slowread(struct maple_tree *mt, struct rcu_test_struct *vals)
MT_BUG_ON(mt, !vals->seen_entry3);
MT_BUG_ON(mt, !vals->seen_both);
}
-static noinline void check_rcu_simulated(struct maple_tree *mt)
+static noinline void __init check_rcu_simulated(struct maple_tree *mt)
{
unsigned long i, nr_entries = 1000;
unsigned long target = 4320;
@@ -35156,7 +35157,7 @@ static noinline void check_rcu_simulated(struct maple_tree *mt)
rcu_unregister_thread();
}

-static noinline void check_rcu_threaded(struct maple_tree *mt)
+static noinline void __init check_rcu_threaded(struct maple_tree *mt)
{
unsigned long i, nr_entries = 1000;
struct rcu_test_struct vals;
@@ -35369,7 +35370,7 @@ static void check_dfs_preorder(struct maple_tree *mt)
/* End of depth first search tests */

/* Preallocation testing */
-static noinline void check_prealloc(struct maple_tree *mt)
+static noinline void __init check_prealloc(struct maple_tree *mt)
{
unsigned long i, max = 100;
unsigned long allocated;
@@ -35497,7 +35498,7 @@ static noinline void check_prealloc(struct maple_tree *mt)
/* End of preallocation testing */

/* Spanning writes, writes that span nodes and layers of the tree */
-static noinline void check_spanning_write(struct maple_tree *mt)
+static noinline void __init check_spanning_write(struct maple_tree *mt)
{
unsigned long i, max = 5000;
MA_STATE(mas, mt, 1200, 2380);
@@ -35665,7 +35666,7 @@ static noinline void check_spanning_write(struct maple_tree *mt)
/* End of spanning write testing */

/* Writes to a NULL area that are adjacent to other NULLs */
-static noinline void check_null_expand(struct maple_tree *mt)
+static noinline void __init check_null_expand(struct maple_tree *mt)
{
unsigned long i, max = 100;
unsigned char data_end;
@@ -35726,7 +35727,7 @@ static noinline void check_null_expand(struct maple_tree *mt)
/* End of NULL area expansions */

/* Checking for no memory is best done outside the kernel */
-static noinline void check_nomem(struct maple_tree *mt)
+static noinline void __init check_nomem(struct maple_tree *mt)
{
MA_STATE(ms, mt, 1, 1);

@@ -35761,7 +35762,7 @@ static noinline void check_nomem(struct maple_tree *mt)
mtree_destroy(mt);
}

-static noinline void check_locky(struct maple_tree *mt)
+static noinline void __init check_locky(struct maple_tree *mt)
{
MA_STATE(ms, mt, 2, 2);
MA_STATE(reader, mt, 2, 2);
--
2.39.2

2023-04-25 14:14:40

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 22/34] mm/mmap: Change do_vmi_align_munmap() for maple tree iterator changes

The maple tree iterator clean up is incompatible with the way
do_vmi_align_munmap() expects it to behave. Update the expected
behaviour in mm now, since the new check also works with the current
iterator behaviour.
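
As a sketch of the resulting logic with the reasoning spelled out
(the comments are an interpretation, mirroring the hunk below):

        /*
         * After splitting at 'end', the iterator may already sit on the
         * VMA following the unmapped area.  If its current range extends
         * past 'end', reload that VMA rather than stepping forward, which
         * would skip it once the iterator changes land.
         */
        if (vma_iter_end(vmi) > end)
                next = vma_iter_load(vmi);
        else
                next = vma_next(vmi);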

Signed-off-by: Liam R. Howlett <[email protected]>
---
mm/mmap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index d34a41791ddb2..c0140a556870a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
#endif
}

- next = vma_next(vmi);
+ if (vma_iter_end(vmi) > end)
+ next = vma_iter_load(vmi);
+ else
+ next = vma_next(vmi);
+
if (unlikely(uf)) {
/*
* If userfaultfd_unmap_prep returns an error the vmas
--
2.39.2

2023-04-25 14:14:40

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 21/34] maple_tree: mas_start() reset depth on dead node

When a dead node is detected, the depth has already been set to 1, so
reset it to 0 on each retry.
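
A toy illustration of the pattern (hypothetical names, not kernel
code): per-attempt state has to be re-initialised under the retry
label, otherwise a failed first walk leaks into the next attempt.

#include <stdio.h>
#include <stdbool.h>

/* Stand-in walk: bumps 'depth' and fails the first two attempts. */
static bool attempt(int *depth)
{
        static int calls;

        (*depth)++;
        return ++calls > 2;     /* fail the first two attempts */
}

int main(void)
{
        int depth;

retry:
        depth = 0;              /* reset on every attempt */
        if (!attempt(&depth))
                goto retry;

        printf("depth = %d\n", depth);  /* 1, not 3 */
        return 0;
}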

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 35c6e12ca9482..1542274dc2b7f 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -1397,9 +1397,9 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)

mas->min = 0;
mas->max = ULONG_MAX;
- mas->depth = 0;

retry:
+ mas->depth = 0;
root = mas_root(mas);
/* Tree with nodes */
if (likely(xa_is_node(root))) {
--
2.39.2

2023-04-25 14:19:14

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 25/34] maple_tree: Clear up index and last setting in single entry tree

When the tree contains a single entry (a range of 0-0 pointing to an
entry), ensure the limits are set to either 0-0 or 1-ULONG_MAX,
depending on where the user walks, and ensure the node is set
correctly to either MAS_ROOT or MAS_NONE.
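
A usage sketch of the behaviour this enforces (hypothetical helper, not
part of the patch):

#include <linux/maple_tree.h>
#include <linux/xarray.h>
#include <linux/printk.h>

/* A tree holding a single entry at index 0: walking index 0 must land
 * on the range 0-0 (MAS_ROOT) and return the entry, while walking any
 * other index must report the empty range 1-ULONG_MAX (MAS_NONE) and
 * return NULL.
 */
static void single_entry_walk(struct maple_tree *mt)
{
        MA_STATE(mas, mt, 0, 0);
        void *entry;

        mtree_store_range(mt, 0, 0, xa_mk_value(42), GFP_KERNEL);

        rcu_read_lock();
        entry = mas_walk(&mas);         /* entry, index/last == 0/0 */
        pr_info("walk 0: %p [%lu-%lu]\n", entry, mas.index, mas.last);

        mas_set(&mas, 5);
        entry = mas_walk(&mas);         /* NULL, index/last == 1/ULONG_MAX */
        pr_info("walk 5: %p [%lu-%lu]\n", entry, mas.index, mas.last);
        rcu_read_unlock();
}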

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 20f0a10dc5608..31cbfd7b44728 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5099,24 +5099,25 @@ void *mas_walk(struct ma_state *mas)
{
void *entry;

+ if (mas_is_none(mas) || mas_is_paused(mas))
+ mas->node = MAS_START;
retry:
entry = mas_state_walk(mas);
- if (mas_is_start(mas))
+ if (mas_is_start(mas)) {
goto retry;
-
- if (mas_is_ptr(mas)) {
+ } else if (mas_is_none(mas)) {
+ mas->index = 0;
+ mas->last = ULONG_MAX;
+ } else if (mas_is_ptr(mas)) {
if (!mas->index) {
mas->last = 0;
- } else {
- mas->index = 1;
- mas->last = ULONG_MAX;
+ return mas_root(mas);
}
- return entry;
- }

- if (mas_is_none(mas)) {
- mas->index = 0;
+ mas->index = 1;
mas->last = ULONG_MAX;
+ mas->node = MAS_NONE;
+ return NULL;
}

return entry;
--
2.39.2

2023-04-25 14:20:24

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 31/34] maple_tree: Add mas_next_range() and mas_find_range() interfaces

Some users of the maple tree may want to move to the next range in the
tree, even if it stores a NULL. This family of functions provides that
functionality by advancing one slot at a time and returning the result,
while mas_contiguous() will iterate over the range and stop on
encountering the first NULL.
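
A usage sketch contrasting the existing and the new iterators
(hypothetical helper, not part of the patch):

#include <linux/maple_tree.h>
#include <linux/printk.h>

static void dump_ranges(struct maple_tree *mt)
{
        MA_STATE(mas, mt, 0, 0);
        void *entry;

        rcu_read_lock();
        /* mas_find()/mas_for_each() skip NULL ranges entirely. */
        mas_for_each(&mas, entry, ULONG_MAX)
                pr_info("entry %lu-%lu\n", mas.index, mas.last);

        /* mas_find_range() advances one slot at a time, so this loop
         * stops on the first NULL range it meets.
         */
        mas_set(&mas, 0);
        mas_contiguous(&mas, entry, ULONG_MAX)
                pr_info("contiguous %lu-%lu\n", mas.index, mas.last);
        rcu_read_unlock();
}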

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 14 ++++
lib/maple_tree.c | 148 +++++++++++++++++++++++++++----------
2 files changed, 125 insertions(+), 37 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index ed92abf4c1fb5..1fe19a9097462 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -467,6 +467,7 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);

void *mas_prev(struct ma_state *mas, unsigned long min);
void *mas_next(struct ma_state *mas, unsigned long max);
+void *mas_next_range(struct ma_state *mas, unsigned long max);

int mas_empty_area(struct ma_state *mas, unsigned long min, unsigned long max,
unsigned long size);
@@ -528,6 +529,19 @@ static inline void mas_reset(struct ma_state *mas)
#define mas_for_each(__mas, __entry, __max) \
while (((__entry) = mas_find((__mas), (__max))) != NULL)

+/**
+ * mas_contiguous() - Iterate over a contiguous range of the maple tree.
+ * @__mas: Maple Tree operation state (maple_state)
+ * @__entry: Entry retrieved from the tree
+ * @__max: maximum index to retrieve from the tree
+ *
+ * When returned, mas->index and mas->last will hold the entire range of the
+ * entry. The loop will terminate on the first NULL encountered.
+ *
+ * Note: may return the zero entry.
+ */
+#define mas_contiguous(__mas, __entry, __max) \
+ while (((__entry) = mas_find_range((__mas), (__max))) != NULL)

/**
* mas_set_range() - Set up Maple Tree operation state for a different index.
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 377b57bbe6b9b..137638cd95fc2 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5847,18 +5847,8 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries)
}
EXPORT_SYMBOL_GPL(mas_expected_entries);

-/**
- * mas_next() - Get the next entry.
- * @mas: The maple state
- * @max: The maximum index to check.
- *
- * Returns the next entry after @mas->index.
- * Must hold rcu_read_lock or the write lock.
- * Can return the zero entry.
- *
- * Return: The next entry or %NULL
- */
-void *mas_next(struct ma_state *mas, unsigned long max)
+static inline bool mas_next_setup(struct ma_state *mas, unsigned long max,
+ void **entry)
{
bool was_none = mas_is_none(mas);

@@ -5871,19 +5861,63 @@ void *mas_next(struct ma_state *mas, unsigned long max)
if (mas_is_ptr(mas)) {
if (was_none && mas->index == 0) {
mas->index = mas->last = 0;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}
mas->index = 1;
mas->last = ULONG_MAX;
mas->node = MAS_NONE;
- return NULL;
+ return true;
}
+ return false;
+}
+
+/**
+ * mas_next() - Get the next entry.
+ * @mas: The maple state
+ * @max: The maximum index to check.
+ *
+ * Returns the next entry after @mas->index.
+ * Must hold rcu_read_lock or the write lock.
+ * Can return the zero entry.
+ *
+ * Return: The next entry or %NULL
+ */
+void *mas_next(struct ma_state *mas, unsigned long max)
+{
+ void *entry = NULL;
+
+ if (mas_next_setup(mas, max, &entry))
+ return entry;

/* Retries on dead nodes handled by mas_next_entry */
return mas_next_entry(mas, max);
}
EXPORT_SYMBOL_GPL(mas_next);

+/**
+ * mas_next_range() - Advance the maple state to the next range
+ * @mas: The maple state
+ * @max: The maximum index to check.
+ *
+ * Sets @mas->index and @mas->last to the range.
+ * Must hold rcu_read_lock or the write lock.
+ * Can return the zero entry.
+ *
+ * Return: The next entry or %NULL
+ */
+void *mas_next_range(struct ma_state *mas, unsigned long max)
+{
+ void *entry = NULL;
+
+ if (mas_next_setup(mas, max, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_next_slot */
+ return mas_next_slot(mas, max);
+}
+EXPORT_SYMBOL_GPL(mas_next_range);
+
/**
* mt_next() - get the next value in the maple tree
* @mt: The maple tree
@@ -5993,22 +6027,18 @@ void mas_pause(struct ma_state *mas)
EXPORT_SYMBOL_GPL(mas_pause);

/**
- * mas_find() - On the first call, find the entry at or after mas->index up to
- * %max. Otherwise, find the entry after mas->index.
- * @mas: The maple state
- * @max: The maximum value to check.
+ * mas_find_setup() - Internal function to set up mas_find*().
*
- * Must hold rcu_read_lock or the write lock.
- * If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
- *
- * Return: The entry or %NULL.
+ * Returns: True if entry is the answer, false otherwise.
*/
-void *mas_find(struct ma_state *mas, unsigned long max)
+static inline bool mas_find_setup(struct ma_state *mas, unsigned long max,
+ void **entry)
{
+ *entry = NULL;
+
if (unlikely(mas_is_none(mas))) {
if (unlikely(mas->last >= max))
- return NULL;
+ return true;

mas->index = mas->last;
mas->node = MAS_START;
@@ -6016,7 +6046,7 @@ void *mas_find(struct ma_state *mas, unsigned long max)

if (unlikely(mas_is_paused(mas))) {
if (unlikely(mas->last >= max))
- return NULL;
+ return true;

mas->node = MAS_START;
mas->index = ++mas->last;
@@ -6028,14 +6058,12 @@ void *mas_find(struct ma_state *mas, unsigned long max)

if (unlikely(mas_is_start(mas))) {
/* First run or continue */
- void *entry;
-
if (mas->index > max)
- return NULL;
+ return true;

- entry = mas_walk(mas);
- if (entry)
- return entry;
+ *entry = mas_walk(mas);
+ if (*entry)
+ return true;

}

@@ -6043,23 +6071,69 @@ void *mas_find(struct ma_state *mas, unsigned long max)
if (unlikely(mas_is_ptr(mas)))
goto ptr_out_of_range;

- return NULL;
+ return true;
}

if (mas->index == max)
- return NULL;
+ return true;

- /* Retries on dead nodes handled by mas_next_entry */
- return mas_next_entry(mas, max);
+ return false;

ptr_out_of_range:
mas->node = MAS_NONE;
mas->index = 1;
mas->last = ULONG_MAX;
- return NULL;
+ return true;
+}
+
+/**
+ * mas_find() - On the first call, find the entry at or after mas->index up to
+ * %max. Otherwise, find the entry after mas->index.
+ * @mas: The maple state
+ * @max: The maximum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find(struct ma_state *mas, unsigned long max)
+{
+ void *entry = NULL;
+
+ if (mas_find_setup(mas, max, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_next_entry */
+ return mas_next_entry(mas, max);
}
EXPORT_SYMBOL_GPL(mas_find);

+/**
+ * mas_find_range() - On the first call, find the entry at or after
+ * mas->index up to %max. Otherwise, advance to the next slot mas->index.
+ * @mas: The maple state
+ * @max: The maximum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find_range(struct ma_state *mas, unsigned long max)
+{
+ void *entry;
+
+ if (mas_find_setup(mas, max, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_next_slot */
+ return mas_next_slot(mas, max);
+}
+EXPORT_SYMBOL_GPL(mas_find_range);
+
/**
* mas_find_rev: On the first call, find the first non-null entry at or below
* mas->index down to %min. Otherwise find the first non-null entry below
--
2.39.2

2023-04-25 14:25:28

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 30/34] maple_tree: Fix comments for mas_next_entry() and mas_prev_entry()

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 297d936321347..377b57bbe6b9b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4814,7 +4814,7 @@ void *mas_next_slot(struct ma_state *mas, unsigned long max)
*
* Set the @mas->node to the next entry and the range_start to
* the beginning value for the entry. Does not check beyond @limit.
- * Sets @mas->index and @mas->last to the limit if it is hit.
+ * Sets @mas->index and @mas->last to the limit if it is within the limit.
* Restarts on dead nodes.
*
* Return: the next entry or %NULL.
@@ -4836,21 +4836,33 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
return mas_next_slot(mas, limit);
}

-static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
+/*
+ * mas_prev_entry() - Internal function to get the previous entry.
+ * @mas: The maple state
+ * @limit: The minimum range start.
+ *
+ * Set the @mas->node to the previous entry and the range_start to
+ * the beginning value for the entry. Does not check beyond @limit.
+ * Sets @mas->index and @mas->last to the limit if it is within the limit.
+ * Restarts on dead nodes.
+ *
+ * Return: the previous entry or %NULL.
+ */
+static inline void *mas_prev_entry(struct ma_state *mas, unsigned long limit)
{
void *entry;

- if (mas->index < min)
+ if (mas->index < limit)
return NULL;

- entry = mas_prev_slot(mas, min);
+ entry = mas_prev_slot(mas, limit);
if (entry)
return entry;

if (mas_is_none(mas))
return NULL;

- return mas_prev_slot(mas, min);
+ return mas_prev_slot(mas, limit);
}

/*
--
2.39.2

2023-04-25 14:29:10

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 32/34] maple_tree: Add mas_prev_range() and mas_find_range_rev interface

Some users of the maple tree may want to move to the previous range
regardless of the value stored there. Add this interface as well as the
'find' variant to support walking to the first value, then iterating
over the previous ranges.
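
A usage sketch (hypothetical helper, not part of the patch, assuming
the tree is populated at or below 'index'): find the entry at or below
an index, then step backwards one range at a time, whether or not the
ranges are occupied.

#include <linux/maple_tree.h>
#include <linux/printk.h>

static void dump_prev_ranges(struct maple_tree *mt, unsigned long index)
{
        MA_STATE(mas, mt, index, index);
        void *entry;
        int i;

        rcu_read_lock();
        /* First call lands on the entry at or below 'index'. */
        entry = mas_find_range_rev(&mas, 0);
        pr_info("start %lu-%lu %s\n", mas.index, mas.last,
                entry ? "occupied" : "empty");

        /* Each further call steps back exactly one range, NULL or not. */
        for (i = 0; i < 4; i++) {
                entry = mas_prev_range(&mas, 0);
                pr_info("prev  %lu-%lu %s\n", mas.index, mas.last,
                        entry ? "occupied" : "empty");
        }
        rcu_read_unlock();
}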

Signed-off-by: Liam R. Howlett <[email protected]>
---
include/linux/maple_tree.h | 1 +
lib/maple_tree.c | 163 ++++++++++++++++++++++++++++---------
2 files changed, 124 insertions(+), 40 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fe19a9097462..bddced4ec7f2c 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -466,6 +466,7 @@ void mas_destroy(struct ma_state *mas);
int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries);

void *mas_prev(struct ma_state *mas, unsigned long min);
+void *mas_prev_range(struct ma_state *mas, unsigned long max);
void *mas_next(struct ma_state *mas, unsigned long max);
void *mas_next_range(struct ma_state *mas, unsigned long max);

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 137638cd95fc2..04b73499baffa 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4852,7 +4852,7 @@ static inline void *mas_prev_entry(struct ma_state *mas, unsigned long limit)
{
void *entry;

- if (mas->index < limit)
+ if (mas->index <= limit)
return NULL;

entry = mas_prev_slot(mas, limit);
@@ -5938,18 +5938,8 @@ void *mt_next(struct maple_tree *mt, unsigned long index, unsigned long max)
}
EXPORT_SYMBOL_GPL(mt_next);

-/**
- * mas_prev() - Get the previous entry
- * @mas: The maple state
- * @min: The minimum value to check.
- *
- * Must hold rcu_read_lock or the write lock.
- * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
- * searchable nodes.
- *
- * Return: the previous value or %NULL.
- */
-void *mas_prev(struct ma_state *mas, unsigned long min)
+static inline bool mas_prev_setup(struct ma_state *mas, unsigned long min,
+ void **entry)
{
if (mas->index <= min)
goto none;
@@ -5967,7 +5957,8 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
if (!mas->index)
goto none;
mas->index = mas->last = 0;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}

if (mas_is_none(mas)) {
@@ -5975,18 +5966,65 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
/* Walked to out-of-range pointer? */
mas->index = mas->last = 0;
mas->node = MAS_ROOT;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}
- return NULL;
+ return true;
}
- return mas_prev_entry(mas, min);
+
+ return false;

none:
mas->node = MAS_NONE;
- return NULL;
+ return true;
+}
+
+/**
+ * mas_prev() - Get the previous entry
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * searchable nodes.
+ *
+ * Return: the previous value or %NULL.
+ */
+void *mas_prev(struct ma_state *mas, unsigned long min)
+{
+ void *entry = NULL;
+
+ if (mas_prev_setup(mas, min, &entry))
+ return entry;
+
+ return mas_prev_entry(mas, min);
}
EXPORT_SYMBOL_GPL(mas_prev);

+/**
+ * mas_prev_range() - Advance to the previous range
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Sets @mas->index and @mas->last to the range.
+ * Must hold rcu_read_lock or the write lock.
+ * Will reset mas to MAS_START if the node is MAS_NONE. Will stop on not
+ * searchable nodes.
+ *
+ * Return: the previous value or %NULL.
+ */
+void *mas_prev_range(struct ma_state *mas, unsigned long min)
+{
+ void *entry = NULL;
+
+ if (mas_prev_setup(mas, min, &entry))
+ return entry;
+
+ return mas_prev_slot(mas, min);
+}
+EXPORT_SYMBOL_GPL(mas_prev_range);
+
+
/**
* mt_prev() - get the previous value in the maple tree
* @mt: The maple tree
@@ -6134,21 +6172,17 @@ void *mas_find_range(struct ma_state *mas, unsigned long max)
}
EXPORT_SYMBOL_GPL(mas_find_range);

+
/**
- * mas_find_rev: On the first call, find the first non-null entry at or below
- * mas->index down to %min. Otherwise find the first non-null entry below
- * mas->index down to %min.
- * @mas: The maple state
- * @min: The minimum value to check.
+ * mas_find_rev_setup() - Internal function to set up mas_find_*_rev()
*
- * Must hold rcu_read_lock or the write lock.
- * If an entry exists, last and index are updated accordingly.
- * May set @mas->node to MAS_NONE.
- *
- * Return: The entry or %NULL.
+ * Return: True if entry is the answer, false otherwise.
*/
-void *mas_find_rev(struct ma_state *mas, unsigned long min)
+static inline bool mas_find_rev_setup(struct ma_state *mas, unsigned long min,
+ void **entry)
{
+ *entry = NULL;
+
if (unlikely(mas_is_none(mas))) {
if (mas->index <= min)
goto none;
@@ -6160,7 +6194,7 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
if (unlikely(mas_is_paused(mas))) {
if (unlikely(mas->index <= min)) {
mas->node = MAS_NONE;
- return NULL;
+ return true;
}
mas->node = MAS_START;
mas->last = --mas->index;
@@ -6168,14 +6202,12 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)

if (unlikely(mas_is_start(mas))) {
/* First run or continue */
- void *entry;
-
if (mas->index < min)
- return NULL;
+ return true;

- entry = mas_walk(mas);
- if (entry)
- return entry;
+ *entry = mas_walk(mas);
+ if (*entry)
+ return true;
}

if (unlikely(!mas_searchable(mas))) {
@@ -6189,22 +6221,73 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
*/
mas->last = mas->index = 0;
mas->node = MAS_ROOT;
- return mas_root(mas);
+ *entry = mas_root(mas);
+ return true;
}
}

if (mas->index < min)
- return NULL;
+ return true;

/* Retries on dead nodes handled by mas_prev_entry */
- return mas_prev_entry(mas, min);
+ return false;

none:
mas->node = MAS_NONE;
- return NULL;
+ return true;
+}
+
+/**
+ * mas_find_rev: On the first call, find the first non-null entry at or below
+ * mas->index down to %min. Otherwise find the first non-null entry below
+ * mas->index down to %min.
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find_rev(struct ma_state *mas, unsigned long min)
+{
+ void *entry;
+
+ if (mas_find_rev_setup(mas, min, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_prev_entry */
+ return mas_prev_entry(mas, min);
+
}
EXPORT_SYMBOL_GPL(mas_find_rev);

+/**
+ * mas_find_range_rev: On the first call, find the first non-null entry at or
+ * below mas->index down to %min. Otherwise advance to the previous slot
+ * before mas->index, down to %min.
+ * @mas: The maple state
+ * @min: The minimum value to check.
+ *
+ * Must hold rcu_read_lock or the write lock.
+ * If an entry exists, last and index are updated accordingly.
+ * May set @mas->node to MAS_NONE.
+ *
+ * Return: The entry or %NULL.
+ */
+void *mas_find_range_rev(struct ma_state *mas, unsigned long min)
+{
+ void *entry;
+
+ if (mas_find_rev_setup(mas, min, &entry))
+ return entry;
+
+ /* Retries on dead nodes handled by mas_prev_slot */
+ return mas_prev_slot(mas, min);
+}
+EXPORT_SYMBOL_GPL(mas_find_range_rev);
+
/**
* mas_erase() - Find the range in which index resides and erase the entire
* range.
--
2.39.2

2023-04-25 15:01:22

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 26/34] maple_tree: Update testing code for mas_{next,prev,walk}

Now that the functions have changed how the limits are handled, update
the maple tree test code to check the new behaviour.
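
For illustration (a sketch mirroring the new check_state_handling()
below), the single entry tree transitions for mas_prev() look roughly
like this; the tree and the stored value are hypothetical:

static void example_single_entry_prev(void)
{
        DEFINE_MTREE(mt);
        MA_STATE(mas, &mt, 0, 0);
        void *entry;

        mtree_store_range(&mt, 0, 0, xa_mk_value(0), GFP_KERNEL);

        mas_lock(&mas);
        mas_set(&mas, 10);
        entry = mas_prev(&mas, 0); /* the entry: MAS_ROOT, range 0-0 */
        entry = mas_prev(&mas, 0); /* NULL: nothing before 0, MAS_NONE */
        mas_unlock(&mas);
        mtree_destroy(&mt);
}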

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/test_maple_tree.c | 641 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 635 insertions(+), 6 deletions(-)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index ae08d34d1d3c4..345eef526d8b0 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -1290,6 +1290,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
mas_lock(&mas);
mas_set(&mas, 3);
ptr = mas_walk(&mas);
+ MAS_BUG_ON(&mas, mas.index != 0);
MT_BUG_ON(mt, ptr != NULL);
MT_BUG_ON(mt, mas.index != 0);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
@@ -1300,7 +1301,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)

mas_set(&mas, 0);
ptr = mas_walk(&mas);
- MT_BUG_ON(mt, ptr != NULL);
+ MAS_BUG_ON(&mas, ptr != NULL);

mas_set(&mas, 1);
ptr = mas_walk(&mas);
@@ -1359,7 +1360,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
mas_store_gfp(&mas, ptr, GFP_KERNEL);
ptr = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, ptr != NULL);
- MT_BUG_ON(mt, (mas.index != 1) && (mas.last != ULONG_MAX));
+ MAS_BUG_ON(&mas, (mas.index != ULONG_MAX) && (mas.last != ULONG_MAX));

mas_set(&mas, 1);
ptr = mas_prev(&mas, 0);
@@ -1768,12 +1769,12 @@ static noinline void __init check_iteration(struct maple_tree *mt)
mas.index = 760;
mas.last = 765;
mas_store(&mas, val);
- mas_next(&mas, ULONG_MAX);
}
i++;
}
/* Make sure the next find returns the one after 765, 766-769 */
val = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, val != xa_mk_value(76));
MT_BUG_ON(mt, val != xa_mk_value(76));
mas_unlock(&mas);
mas_destroy(&mas);
@@ -1979,7 +1980,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)

val = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, val != NULL);
- MT_BUG_ON(mt, mas.index != ULONG_MAX);
+ MT_BUG_ON(mt, mas.index != 0x7d6);
MT_BUG_ON(mt, mas.last != ULONG_MAX);

val = mas_prev(&mas, 0);
@@ -2003,7 +2004,8 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
val = mas_prev(&mas, 0);
MT_BUG_ON(mt, val != NULL);
MT_BUG_ON(mt, mas.index != 0);
- MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.last != 5);
+ MT_BUG_ON(mt, mas.node != MAS_NONE);

mas.index = 0;
mas.last = 5;
@@ -2015,7 +2017,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
val = mas_prev(&mas, 0);
MT_BUG_ON(mt, val != NULL);
MT_BUG_ON(mt, mas.index != 0);
- MT_BUG_ON(mt, mas.last != 0);
+ MT_BUG_ON(mt, mas.last != 9);
mas_unlock(&mas);

mtree_destroy(mt);
@@ -2718,6 +2720,629 @@ static noinline void __init check_empty_area_fill(struct maple_tree *mt)
mt_set_non_kernel(0);
}

+/*
+ * Check MAS_START, MAS_PAUSE, active (implied), and MAS_NONE transitions.
+ *
+ * The table below shows the single entry tree (0-0 pointer) and normal tree
+ * with nodes.
+ *
+ * Function ENTRY Start Result index & last
+ * ┬ ┬ ┬ ┬ ┬
+ * │ │ │ │ └─ the final range
+ * │ │ │ └─ The node value after execution
+ * │ │ └─ The node value before execution
+ * │ └─ If the entry exists or does not exist (DNE)
+ * └─ The function name
+ *
+ * Function ENTRY Start Result index & last
+ * mas_next()
+ * - after last
+ * Single entry tree at 0-0
+ * ------------------------
+ * DNE MAS_START MAS_NONE 1 - oo
+ * DNE MAS_PAUSE MAS_NONE 1 - oo
+ * DNE MAS_ROOT MAS_NONE 1 - oo
+ * when index = 0
+ * DNE MAS_NONE MAS_ROOT 0
+ * when index > 0
+ * DNE MAS_NONE MAS_NONE 1 - oo
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to last range
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to last range
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active set to last range
+ *
+ * Function ENTRY Start Result index & last
+ * mas_prev()
+ * - before index
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ *
+ * if index == 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to min
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to min
+ * exists MAS_NONE active range
+ * DNE MAS_NONE MAS_NONE set to min
+ * any MAS_ROOT MAS_NONE 0
+ * exists active active range
+ * DNE active active last range
+ *
+ * Function ENTRY Start Result index & last
+ * mas_find()
+ * - at index or next
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * if index == 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to max
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to max
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active last range (max < last)
+ *
+ * Function ENTRY Start Result index & last
+ * mas_find_rev()
+ * - at index or before
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ * if index == 0
+ * DNE MAS_START MAS_NONE 0
+ * DNE MAS_PAUSE MAS_NONE 0
+ * DNE MAS_NONE MAS_NONE 0
+ * DNE MAS_ROOT MAS_NONE 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active set to min
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active set to min
+ * exists MAS_NONE active range
+ * exists active active range
+ * DNE active active last range (min > index)
+ *
+ * Function ENTRY Start Result index & last
+ * mas_walk()
+ * - Look up index
+ * Single entry tree at 0-0
+ * ------------------------
+ * if index > 0
+ * DNE MAS_START MAS_ROOT 1 - oo
+ * DNE MAS_PAUSE MAS_ROOT 1 - oo
+ * DNE MAS_NONE MAS_ROOT 1 - oo
+ * DNE MAS_ROOT MAS_ROOT 1 - oo
+ * if index == 0
+ * exists MAS_START MAS_ROOT 0
+ * exists MAS_PAUSE MAS_ROOT 0
+ * exists MAS_NONE MAS_ROOT 0
+ * exists MAS_ROOT MAS_ROOT 0
+ *
+ * Normal tree
+ * -----------
+ * exists MAS_START active range
+ * DNE MAS_START active range of NULL
+ * exists MAS_PAUSE active range
+ * DNE MAS_PAUSE active range of NULL
+ * exists MAS_NONE active range
+ * DNE MAS_NONE active range of NULL
+ * exists active active range
+ * DNE active active range of NULL
+ */
+
+#define mas_active(x) (((x).node != MAS_ROOT) && \
+ ((x).node != MAS_START) && \
+ ((x).node != MAS_PAUSE) && \
+ ((x).node != MAS_NONE))
+static noinline void __init check_state_handling(struct maple_tree *mt)
+{
+ MA_STATE(mas, mt, 0, 0);
+ void *entry, *ptr = (void *) 0x1234500;
+ void *ptr2 = &ptr;
+ void *ptr3 = &ptr2;
+
+ /* Check MAS_ROOT First */
+ mtree_store_range(mt, 0, 0, ptr, GFP_KERNEL);
+
+ mas_lock(&mas);
+ /* prev: Start -> none */
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* prev: Start -> root */
+ mas_set(&mas, 10);
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* prev: pause -> root */
+ mas_set(&mas, 10);
+ mas_pause(&mas);
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* next: start -> none */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* next: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* find: root -> none */
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find: none -> none */
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find_rev: none -> root */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* find_rev: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* find_rev: root -> none */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find_rev: none -> none */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* find_rev: start -> root */
+ mas_set(&mas, 10);
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* walk: start -> none */
+ mas_set(&mas, 10);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* walk: pause -> none*/
+ mas_set(&mas, 10);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* walk: none -> none */
+ mas.index = mas.last = 10;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* walk: none -> none */
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* walk: start -> root */
+ mas_set(&mas, 0);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* walk: pause -> root */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* walk: none -> root */
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* walk: root -> root */
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ /* walk: root -> none */
+ mas_set(&mas, 10);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 1);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, mas.node != MAS_NONE);
+
+ /* walk: none -> root */
+ mas.index = mas.last = 0;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0);
+ MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
+
+ mas_unlock(&mas);
+
+ /* Check when there is an actual node */
+ mtree_store_range(mt, 0, 0, NULL, GFP_KERNEL);
+ mtree_store_range(mt, 0x1000, 0x1500, ptr, GFP_KERNEL);
+ mtree_store_range(mt, 0x2000, 0x2500, ptr2, GFP_KERNEL);
+ mtree_store_range(mt, 0x3000, 0x3500, ptr3, GFP_KERNEL);
+
+ mas_lock(&mas);
+
+ /* next: start ->active */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next: pause ->active */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next: none ->active */
+ mas.index = mas.last = 0;
+ mas.offset = 0;
+ mas.node = MAS_NONE;
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next:active ->active */
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr2);
+ MAS_BUG_ON(&mas, mas.index != 0x2000);
+ MAS_BUG_ON(&mas, mas.last != 0x2500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next:active -> active out of range*/
+ entry = mas_next(&mas, 0x2999);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x2501);
+ MAS_BUG_ON(&mas, mas.last != 0x2fff);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* Continue after out of range*/
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr3);
+ MAS_BUG_ON(&mas, mas.index != 0x3000);
+ MAS_BUG_ON(&mas, mas.last != 0x3500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next:active -> active out of range*/
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x3501);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* next: none -> active, skip value at location */
+ mas_set(&mas, 0);
+ entry = mas_next(&mas, ULONG_MAX);
+ mas.node = MAS_NONE;
+ mas.offset = 0;
+ entry = mas_next(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr2);
+ MAS_BUG_ON(&mas, mas.index != 0x2000);
+ MAS_BUG_ON(&mas, mas.last != 0x2500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* prev:active ->active */
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* prev:active -> active out of range*/
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0x0FFF);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* prev: pause ->active */
+ mas_set(&mas, 0x3600);
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr3);
+ mas_pause(&mas);
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr2);
+ MAS_BUG_ON(&mas, mas.index != 0x2000);
+ MAS_BUG_ON(&mas, mas.last != 0x2500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* prev:active -> active out of range*/
+ entry = mas_prev(&mas, 0x1600);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x1501);
+ MAS_BUG_ON(&mas, mas.last != 0x1FFF);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* prev: active ->active, continue*/
+ entry = mas_prev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find: start ->active */
+ mas_set(&mas, 0);
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find: pause ->active */
+ mas_set(&mas, 0);
+ mas_pause(&mas);
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find: start ->active on value */
+ mas_set(&mas, 1200);
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find:active ->active */
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != ptr2);
+ MAS_BUG_ON(&mas, mas.index != 0x2000);
+ MAS_BUG_ON(&mas, mas.last != 0x2500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+
+ /* find:active -> active (NULL)*/
+ entry = mas_find(&mas, 0x2700);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x2501);
+ MAS_BUG_ON(&mas, mas.last != 0x2FFF);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find: none ->active */
+ entry = mas_find(&mas, 0x5000);
+ MAS_BUG_ON(&mas, entry != ptr3);
+ MAS_BUG_ON(&mas, mas.index != 0x3000);
+ MAS_BUG_ON(&mas, mas.last != 0x3500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find:active -> active (NULL) end*/
+ entry = mas_find(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x3501);
+ MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find_rev: active (END) ->active */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr3);
+ MAS_BUG_ON(&mas, mas.index != 0x3000);
+ MAS_BUG_ON(&mas, mas.last != 0x3500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find_rev:active ->active */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr2);
+ MAS_BUG_ON(&mas, mas.index != 0x2000);
+ MAS_BUG_ON(&mas, mas.last != 0x2500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find_rev: pause ->active */
+ mas_pause(&mas);
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find_rev:active -> active */
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 0x0FFF);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* find_rev: start ->active */
+ mas_set(&mas, 0x1200);
+ entry = mas_find_rev(&mas, 0);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk start ->active */
+ mas_set(&mas, 0x1200);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk start ->active */
+ mas_set(&mas, 0x1600);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x1501);
+ MAS_BUG_ON(&mas, mas.last != 0x1fff);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk pause ->active */
+ mas_set(&mas, 0x1200);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk pause -> active */
+ mas_set(&mas, 0x1600);
+ mas_pause(&mas);
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x1501);
+ MAS_BUG_ON(&mas, mas.last != 0x1fff);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk none -> active */
+ mas_set(&mas, 0x1200);
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk none -> active */
+ mas_set(&mas, 0x1600);
+ mas.node = MAS_NONE;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x1501);
+ MAS_BUG_ON(&mas, mas.last != 0x1fff);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk active -> active */
+ mas.index = 0x1200;
+ mas.last = 0x1200;
+ mas.offset = 0;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != ptr);
+ MAS_BUG_ON(&mas, mas.index != 0x1000);
+ MAS_BUG_ON(&mas, mas.last != 0x1500);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ /* mas_walk active -> active */
+ mas.index = 0x1600;
+ mas.last = 0x1600;
+ entry = mas_walk(&mas);
+ MAS_BUG_ON(&mas, entry != NULL);
+ MAS_BUG_ON(&mas, mas.index != 0x1501);
+ MAS_BUG_ON(&mas, mas.last != 0x1fff);
+ MAS_BUG_ON(&mas, !mas_active(mas));
+
+ mas_unlock(&mas);
+}
+
static DEFINE_MTREE(tree);
static int __init maple_tree_seed(void)
{
@@ -2979,6 +3604,10 @@ static int __init maple_tree_seed(void)
mtree_destroy(&tree);


+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_state_handling(&tree);
+ mtree_destroy(&tree);
+
#if defined(BENCH)
skip:
#endif
--
2.39.2

2023-04-25 15:07:01

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 24/34] maple_tree: Try harder to keep active node with mas_prev()

Keep a reference to the node when possible with mas_prev() to avoid
re-walking the tree. While keeping the node reference, keep index/last
accurate to the range being referenced. This means the limit may fall
within the range, and the range may extend outside of the limit.

Also fix the single entry tree to respect the range (of 0), or set the
node to MAS_NONE when attempting to shift below 0.
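
A hedged sketch of the intended effect (not from the diff): once the
first mas_prev() has walked the tree, later calls step backwards without
a re-walk, and a NULL return leaves index/last describing the empty
range that was checked. The stored ranges are hypothetical.

static void example_prev_keeps_node(void)
{
        DEFINE_MTREE(mt);
        MA_STATE(mas, &mt, 0x2600, 0x2600);
        void *entry;

        mtree_store_range(&mt, 0x1000, 0x1500, xa_mk_value(1), GFP_KERNEL);
        mtree_store_range(&mt, 0x2000, 0x2500, xa_mk_value(2), GFP_KERNEL);

        rcu_read_lock();
        entry = mas_prev(&mas, 0); /* 0x2000-0x2500 entry: walks the tree */
        entry = mas_prev(&mas, 0); /* 0x1000-0x1500 entry: node kept active */
        entry = mas_prev(&mas, 0); /* NULL: index/last now cover 0-0x0fff */
        rcu_read_unlock();
        mtree_destroy(&mt);
}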

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 125 +++++++++++++++++++++++++++++++----------------
1 file changed, 83 insertions(+), 42 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index ef7a6ceca864c..20f0a10dc5608 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4828,7 +4828,7 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
unsigned long index)
{
unsigned long pivot, min;
- unsigned char offset;
+ unsigned char offset, count;
struct maple_node *mn;
enum maple_type mt;
unsigned long *pivots;
@@ -4842,29 +4842,42 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
mn = mas_mn(mas);
mt = mte_node_type(mas->node);
offset = mas->offset - 1;
- if (offset >= mt_slots[mt])
- offset = mt_slots[mt] - 1;
-
slots = ma_slots(mn, mt);
pivots = ma_pivots(mn, mt);
+ count = ma_data_end(mn, mt, pivots, mas->max);
if (unlikely(ma_dead_node(mn))) {
mas_rewalk(mas, index);
goto retry;
}

- if (offset == mt_pivots[mt])
+ offset = mas->offset - 1;
+ if (offset >= mt_slots[mt])
+ offset = mt_slots[mt] - 1;
+
+ if (offset >= count) {
pivot = mas->max;
- else
+ offset = count;
+ } else {
pivot = pivots[offset];
+ }

if (unlikely(ma_dead_node(mn))) {
mas_rewalk(mas, index);
goto retry;
}

- while (offset && ((!mas_slot(mas, slots, offset) && pivot >= limit) ||
- !pivot))
+ while (offset && !mas_slot(mas, slots, offset)) {
pivot = pivots[--offset];
+ if (pivot >= limit)
+ break;
+ }
+
+ /*
+ * If the slot was null but we've shifted outside the limits, then set
+ * the range to the last NULL.
+ */
+ if (unlikely((pivot < limit) && (offset < mas->offset)))
+ pivot = pivots[++offset];

min = mas_safe_min(mas, pivots, offset);
entry = mas_slot(mas, slots, offset);
@@ -4873,32 +4886,33 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
goto retry;
}

- if (likely(entry)) {
- mas->offset = offset;
- mas->last = pivot;
- mas->index = min;
- }
+ mas->offset = offset;
+ mas->last = pivot;
+ mas->index = min;
return entry;
}

static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
{
void *entry;
+ struct maple_enode *prev_enode;
+ unsigned char prev_offset;

- if (mas->index < min) {
- mas->index = mas->last = min;
- mas->node = MAS_NONE;
+ if (mas->index < min)
return NULL;
- }
+
retry:
+ prev_enode = mas->node;
+ prev_offset = mas->offset;
while (likely(!mas_is_none(mas))) {
entry = mas_prev_nentry(mas, min, mas->index);
- if (unlikely(mas->last < min))
- goto not_found;

if (likely(entry))
return entry;

+ if (unlikely(mas->index <= min))
+ return NULL;
+
if (unlikely(mas_prev_node(mas, min))) {
mas_rewalk(mas, mas->index);
goto retry;
@@ -4907,9 +4921,8 @@ static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
mas->offset++;
}

- mas->offset--;
-not_found:
- mas->index = mas->last = min;
+ mas->node = prev_enode;
+ mas->offset = prev_offset;
return NULL;
}

@@ -5958,15 +5971,8 @@ EXPORT_SYMBOL_GPL(mt_next);
*/
void *mas_prev(struct ma_state *mas, unsigned long min)
{
- if (!mas->index) {
- /* Nothing comes before 0 */
- mas->last = 0;
- mas->node = MAS_NONE;
- return NULL;
- }
-
- if (unlikely(mas_is_ptr(mas)))
- return NULL;
+ if (mas->index <= min)
+ goto none;

if (mas_is_none(mas) || mas_is_paused(mas))
mas->node = MAS_START;
@@ -5974,19 +5980,30 @@ void *mas_prev(struct ma_state *mas, unsigned long min)
if (mas_is_start(mas)) {
mas_walk(mas);
if (!mas->index)
- return NULL;
+ goto none;
}

- if (mas_is_ptr(mas)) {
- if (!mas->index) {
- mas->last = 0;
- return NULL;
- }
-
+ if (unlikely(mas_is_ptr(mas))) {
+ if (!mas->index)
+ goto none;
mas->index = mas->last = 0;
- return mas_root_locked(mas);
+ return mas_root(mas);
+ }
+
+ if (mas_is_none(mas)) {
+ if (mas->index) {
+ /* Walked to out-of-range pointer? */
+ mas->index = mas->last = 0;
+ mas->node = MAS_ROOT;
+ return mas_root(mas);
+ }
+ return NULL;
}
return mas_prev_entry(mas, min);
+
+none:
+ mas->node = MAS_NONE;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_prev);

@@ -6112,8 +6129,16 @@ EXPORT_SYMBOL_GPL(mas_find);
*/
void *mas_find_rev(struct ma_state *mas, unsigned long min)
{
+ if (unlikely(mas_is_none(mas))) {
+ if (mas->index <= min)
+ goto none;
+
+ mas->last = mas->index;
+ mas->node = MAS_START;
+ }
+
if (unlikely(mas_is_paused(mas))) {
- if (unlikely(mas->last == ULONG_MAX)) {
+ if (unlikely(mas->index <= min)) {
mas->node = MAS_NONE;
return NULL;
}
@@ -6133,14 +6158,30 @@ void *mas_find_rev(struct ma_state *mas, unsigned long min)
return entry;
}

- if (unlikely(!mas_searchable(mas)))
- return NULL;
+ if (unlikely(!mas_searchable(mas))) {
+ if (mas_is_ptr(mas))
+ goto none;
+
+ if (mas_is_none(mas)) {
+ /*
+ * Walked to the location, and there was nothing so the
+ * previous location is 0.
+ */
+ mas->last = mas->index = 0;
+ mas->node = MAS_ROOT;
+ return mas_root(mas);
+ }
+ }

if (mas->index < min)
return NULL;

/* Retries on dead nodes handled by mas_prev_entry */
return mas_prev_entry(mas, min);
+
+none:
+ mas->node = MAS_NONE;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_find_rev);

--
2.39.2

2023-04-25 15:08:58

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 28/34] maple_tree: Revise limit checks in mas_empty_area{_rev}()

Since the maple tree uses inclusive ranges, ensure that a range of 1
(min == max) works when searching for a gap in either direction, and
make sure the size is at least 1 and no larger than the inclusive span
between min and max (max - min + 1).

This commit also updates the testing. Unfortunately, the tests and the
code cannot be updated separately without a transient test failure.
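
A quick sketch of the new boundary checks (not from the diff; the tree,
flags and values below are only for illustration):

static void example_empty_area_limits(void)
{
        struct maple_tree mt;
        MA_STATE(mas, &mt, 0, 0);
        int ret;

        mt_init_flags(&mt, MT_FLAGS_ALLOC_RANGE);

        mas_lock(&mas);
        /* min == max with size 1 asks about exactly one index: now valid */
        ret = mas_empty_area(&mas, 0x2000, 0x2000, 1);
        /* size larger than the inclusive span (max - min + 1) is rejected */
        ret = mas_empty_area(&mas, 0x2000, 0x2000, 2);  /* -EINVAL */
        mas_unlock(&mas);
        mtree_destroy(&mt);
}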

Suggested-by: Peng Zhang <[email protected]>
Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 20 +++++++++++++-------
lib/test_maple_tree.c | 27 ++++++++++++++++++++-------
2 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index fe6c9da6f2bd5..7370d7c12fe3b 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5248,7 +5248,10 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
unsigned long *pivots;
enum maple_type mt;

- if (min >= max)
+ if (min > max)
+ return -EINVAL;
+
+ if (size == 0 || max - min < size - 1)
return -EINVAL;

if (mas_is_start(mas))
@@ -5303,7 +5306,10 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
{
struct maple_enode *last = mas->node;

- if (min >= max)
+ if (min > max)
+ return -EINVAL;
+
+ if (size == 0 || max - min < size - 1)
return -EINVAL;

if (mas_is_start(mas)) {
@@ -5339,7 +5345,7 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
return -EBUSY;

/* Trim the upper limit to the max. */
- if (max <= mas->last)
+ if (max < mas->last)
mas->last = max;

mas->index = mas->last - size + 1;
@@ -6375,7 +6381,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
{
int ret = 0;

- MA_STATE(mas, mt, min, max - size);
+ MA_STATE(mas, mt, min, min);
if (!mt_is_alloc(mt))
return -EINVAL;

@@ -6395,7 +6401,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
retry:
mas.offset = 0;
mas.index = min;
- mas.last = max - size;
+ mas.last = max - size + 1;
ret = mas_alloc(&mas, entry, size, startp);
if (mas_nomem(&mas, gfp))
goto retry;
@@ -6411,14 +6417,14 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
{
int ret = 0;

- MA_STATE(mas, mt, min, max - size);
+ MA_STATE(mas, mt, min, max - size + 1);
if (!mt_is_alloc(mt))
return -EINVAL;

if (WARN_ON_ONCE(mt_is_reserved(entry)))
return -EINVAL;

- if (min >= max)
+ if (min > max)
return -EINVAL;

if (max < size - 1)
diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 345eef526d8b0..7b2d19ad5934d 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -105,7 +105,7 @@ static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt,
unsigned long result = expected + 1;
int ret;

- ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end - 1,
+ ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end,
GFP_KERNEL);
MT_BUG_ON(mt, ret != eret);
if (ret)
@@ -683,7 +683,7 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0, /* Return value success. */

0x0, /* Min */
- 0x565234AF1 << 12, /* Max */
+ 0x565234AF0 << 12, /* Max */
0x3000, /* Size */
0x565234AEE << 12, /* max - 3. */
0, /* Return value success. */
@@ -695,14 +695,14 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0, /* Return value success. */

0x0, /* Min */
- 0x7F36D510A << 12, /* Max */
+ 0x7F36D5109 << 12, /* Max */
0x4000, /* Size */
0x7F36D5106 << 12, /* First rev hole of size 0x4000 */
0, /* Return value success. */

/* Ascend test. */
0x0,
- 34148798629 << 12,
+ 34148798628 << 12,
19 << 12,
34148797418 << 12,
0x0,
@@ -714,6 +714,12 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
0x0,
-EBUSY,

+ /* Single space test. */
+ 34148798725 << 12,
+ 34148798725 << 12,
+ 1 << 12,
+ 34148798725 << 12,
+ 0,
};

int i, range_count = ARRAY_SIZE(range);
@@ -762,9 +768,9 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
mas_unlock(&mas);
for (i = 0; i < req_range_count; i += 5) {
#if DEBUG_REV_RANGE
- pr_debug("\tReverse request between %lu-%lu size %lu, should get %lu\n",
- req_range[i] >> 12,
- (req_range[i + 1] >> 12) - 1,
+ pr_debug("\tReverse request %d between %lu-%lu size %lu, should get %lu\n",
+ i, req_range[i] >> 12,
+ (req_range[i + 1] >> 12),
req_range[i+2] >> 12,
req_range[i+3] >> 12);
#endif
@@ -883,6 +889,13 @@ static noinline void __init check_alloc_range(struct maple_tree *mt)
4503599618982063UL << 12, /* Size */
34359052178 << 12, /* Expected location */
-EBUSY, /* Return failure. */
+
+ /* Test a single entry */
+ 34148798648 << 12, /* Min */
+ 34148798648 << 12, /* Max */
+ 4096, /* Size of 1 */
+ 34148798648 << 12, /* Location is the same as min/max */
+ 0, /* Success */
};
int i, range_count = ARRAY_SIZE(range);
int req_range_count = ARRAY_SIZE(req_range);
--
2.39.2

2023-04-25 15:33:09

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 23/34] maple_tree: Try harder to keep active node after mas_next()

Clean up mas_next() to try to keep a node reference when possible.
This will avoid re-walking the tree in most cases.

Also clean up the single entry tree handling to ensure index/last are
consistent with what one would expect (returning NULL with a range of
1-oo).
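
A rough sketch of the result (not part of the diff): after the first
walk, running past the limit returns NULL but keeps the node active, so
iteration continues without another walk. The stored ranges are
hypothetical.

static void example_next_keeps_node(void)
{
        DEFINE_MTREE(mt);
        MA_STATE(mas, &mt, 0, 0);
        void *entry;

        mtree_store_range(&mt, 0x1000, 0x1500, xa_mk_value(1), GFP_KERNEL);
        mtree_store_range(&mt, 0x2000, 0x2500, xa_mk_value(2), GFP_KERNEL);

        rcu_read_lock();
        entry = mas_next(&mas, ULONG_MAX); /* 0x1000-0x1500 entry, one walk */
        entry = mas_next(&mas, 0x1800);    /* NULL: limit hit, node stays active */
        entry = mas_next(&mas, ULONG_MAX); /* 0x2000-0x2500 entry, no re-walk */
        rcu_read_unlock();
        mtree_destroy(&mt);
}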

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 89 +++++++++++++++++++++++++-----------------------
1 file changed, 47 insertions(+), 42 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 1542274dc2b7f..ef7a6ceca864c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4727,33 +4727,25 @@ static inline void *mas_next_nentry(struct ma_state *mas,
if (ma_dead_node(node))
return NULL;

+ mas->last = pivot;
if (entry)
- goto found;
+ return entry;

if (pivot >= max)
return NULL;

+ if (pivot >= mas->max)
+ return NULL;
+
mas->index = pivot + 1;
mas->offset++;
}

- if (mas->index > mas->max) {
- mas->index = mas->last;
- return NULL;
- }
-
- pivot = mas_safe_pivot(mas, pivots, mas->offset, type);
+ pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
entry = mas_slot(mas, slots, mas->offset);
if (ma_dead_node(node))
return NULL;

- if (!pivot)
- return NULL;
-
- if (!entry)
- return NULL;
-
-found:
mas->last = pivot;
return entry;
}
@@ -4782,21 +4774,15 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
{
void *entry = NULL;
- struct maple_enode *prev_node;
struct maple_node *node;
- unsigned char offset;
unsigned long last;
enum maple_type mt;

- if (mas->index > limit) {
- mas->index = mas->last = limit;
- mas_pause(mas);
+ if (mas->last >= limit)
return NULL;
- }
+
last = mas->last;
retry:
- offset = mas->offset;
- prev_node = mas->node;
node = mas_mn(mas);
mt = mte_node_type(mas->node);
mas->offset++;
@@ -4815,12 +4801,10 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
if (likely(entry))
return entry;

- if (unlikely((mas->index > limit)))
- break;
+ if (unlikely((mas->last >= limit)))
+ return NULL;

next_node:
- prev_node = mas->node;
- offset = mas->offset;
if (unlikely(mas_next_node(mas, node, limit))) {
mas_rewalk(mas, last);
goto retry;
@@ -4830,9 +4814,6 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
mt = mte_node_type(mas->node);
}

- mas->index = mas->last = limit;
- mas->offset = offset;
- mas->node = prev_node;
return NULL;
}

@@ -5920,6 +5901,8 @@ EXPORT_SYMBOL_GPL(mas_expected_entries);
*/
void *mas_next(struct ma_state *mas, unsigned long max)
{
+ bool was_none = mas_is_none(mas);
+
if (mas_is_none(mas) || mas_is_paused(mas))
mas->node = MAS_START;

@@ -5927,16 +5910,16 @@ void *mas_next(struct ma_state *mas, unsigned long max)
mas_walk(mas); /* Retries on dead nodes handled by mas_walk */

if (mas_is_ptr(mas)) {
- if (!mas->index) {
- mas->index = 1;
- mas->last = ULONG_MAX;
+ if (was_none && mas->index == 0) {
+ mas->index = mas->last = 0;
+ return mas_root(mas);
}
+ mas->index = 1;
+ mas->last = ULONG_MAX;
+ mas->node = MAS_NONE;
return NULL;
}

- if (mas->last == ULONG_MAX)
- return NULL;
-
/* Retries on dead nodes handled by mas_next_entry */
return mas_next_entry(mas, max);
}
@@ -6060,17 +6043,25 @@ EXPORT_SYMBOL_GPL(mas_pause);
*/
void *mas_find(struct ma_state *mas, unsigned long max)
{
+ if (unlikely(mas_is_none(mas))) {
+ if (unlikely(mas->last >= max))
+ return NULL;
+
+ mas->index = mas->last;
+ mas->node = MAS_START;
+ }
+
if (unlikely(mas_is_paused(mas))) {
- if (unlikely(mas->last == ULONG_MAX)) {
- mas->node = MAS_NONE;
+ if (unlikely(mas->last >= max))
return NULL;
- }
+
mas->node = MAS_START;
mas->index = ++mas->last;
}

- if (unlikely(mas_is_none(mas)))
- mas->node = MAS_START;
+
+ if (unlikely(mas_is_ptr(mas)))
+ goto ptr_out_of_range;

if (unlikely(mas_is_start(mas))) {
/* First run or continue */
@@ -6082,13 +6073,27 @@ void *mas_find(struct ma_state *mas, unsigned long max)
entry = mas_walk(mas);
if (entry)
return entry;
+
}

- if (unlikely(!mas_searchable(mas)))
+ if (unlikely(!mas_searchable(mas))) {
+ if (unlikely(mas_is_ptr(mas)))
+ goto ptr_out_of_range;
+
+ return NULL;
+ }
+
+ if (mas->index == max)
return NULL;

/* Retries on dead nodes handled by mas_next_entry */
return mas_next_entry(mas, max);
+
+ptr_out_of_range:
+ mas->node = MAS_NONE;
+ mas->index = 1;
+ mas->last = ULONG_MAX;
+ return NULL;
}
EXPORT_SYMBOL_GPL(mas_find);

@@ -6519,7 +6524,7 @@ void *mt_find(struct maple_tree *mt, unsigned long *index, unsigned long max)
if (entry)
goto unlock;

- while (mas_searchable(&mas) && (mas.index < max)) {
+ while (mas_searchable(&mas) && (mas.last < max)) {
entry = mas_next_entry(&mas, max);
if (likely(entry && !xa_is_zero(entry)))
break;
--
2.39.2

2023-04-25 15:45:18

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface

Sometimes, during a tree walk, the user needs the next slot regardless
of whether it is empty. Add an interface to get the next slot.

Since consecutive NULLs are not allowed in the tree, mas_next() needs to
advance at most two slots. Use the new mas_next_slot() interface in
mas_next() as well so both implementations are aligned.
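
As an internal sketch (assumptions: the lock is held, the maple state
was positioned by a prior walk, and the helper name is made up), a walk
that visits every slot could look like this:

/* Visit every slot, including empty ranges, up to 'max'. */
static void walk_all_slots(struct ma_state *mas, unsigned long max)
{
        void *entry;

        while (mas->last < max) {
                entry = mas_next_slot(mas, max);
                if (mas_is_none(mas))
                        break;  /* walked off the end of the tree */
                pr_debug("range %lx-%lx -> %p\n",
                         mas->index, mas->last, entry);
        }
}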

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 178 +++++++++++++++++++----------------------------
1 file changed, 71 insertions(+), 107 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 31cbfd7b44728..fe6c9da6f2bd5 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4619,15 +4619,16 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
if (mas->max >= max)
goto no_entry;

+ min = mas->max + 1;
+ if (min > max)
+ goto no_entry;
+
level = 0;
do {
if (ma_is_root(node))
goto no_entry;

- min = mas->max + 1;
- if (min > max)
- goto no_entry;
-
+ /* Walk up. */
if (unlikely(mas_ascend(mas)))
return 1;

@@ -4645,13 +4646,12 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
slots = ma_slots(node, mt);
pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
while (unlikely(level > 1)) {
- /* Descend, if necessary */
+ level--;
enode = mas_slot(mas, slots, offset);
if (unlikely(ma_dead_node(node)))
return 1;

mas->node = enode;
- level--;
node = mas_mn(mas);
mt = mte_node_type(mas->node);
slots = ma_slots(node, mt);
@@ -4680,85 +4680,84 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
return 0;
}

+static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
+{
+retry:
+ mas_set(mas, index);
+ mas_state_walk(mas);
+ if (mas_is_start(mas))
+ goto retry;
+}
+
+static inline bool mas_rewalk_if_dead(struct ma_state *mas,
+ struct maple_node *node, const unsigned long index)
+{
+ if (unlikely(ma_dead_node(node))) {
+ mas_rewalk(mas, index);
+ return true;
+ }
+ return false;
+}
+
/*
- * mas_next_nentry() - Get the next node entry
- * @mas: The maple state
- * @max: The maximum value to check
- * @*range_start: Pointer to store the start of the range.
+ * mas_next_slot() - Get the entry in the next slot
*
- * Sets @mas->offset to the offset of the next node entry, @mas->last to the
- * pivot of the entry.
+ * @mas: The maple state
+ * @max: The maximum starting range
*
- * Return: The next entry, %NULL otherwise
+ * Return: The entry in the next slot which is possibly NULL
*/
-static inline void *mas_next_nentry(struct ma_state *mas,
- struct maple_node *node, unsigned long max, enum maple_type type)
+void *mas_next_slot(struct ma_state *mas, unsigned long max)
{
- unsigned char count;
- unsigned long pivot;
- unsigned long *pivots;
void __rcu **slots;
+ unsigned long *pivots;
+ unsigned long pivot;
+ enum maple_type type;
+ struct maple_node *node;
+ unsigned char data_end;
+ unsigned long save_point = mas->last;
void *entry;

- if (mas->last == mas->max) {
- mas->index = mas->max;
- return NULL;
- }
-
- slots = ma_slots(node, type);
+retry:
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
pivots = ma_pivots(node, type);
- count = ma_data_end(node, type, pivots, mas->max);
- if (unlikely(ma_dead_node(node)))
- return NULL;
-
- mas->index = mas_safe_min(mas, pivots, mas->offset);
- if (unlikely(ma_dead_node(node)))
- return NULL;
-
- if (mas->index > max)
- return NULL;
+ data_end = ma_data_end(node, type, pivots, mas->max);
+ pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;

- if (mas->offset > count)
+ if (pivot >= max)
return NULL;

- while (mas->offset < count) {
- pivot = pivots[mas->offset];
- entry = mas_slot(mas, slots, mas->offset);
- if (ma_dead_node(node))
- return NULL;
-
- mas->last = pivot;
- if (entry)
- return entry;
-
- if (pivot >= max)
- return NULL;
+ if (likely(data_end > mas->offset)) {
+ mas->offset++;
+ mas->index = mas->last + 1;
+ } else {
+ if (mas_next_node(mas, node, max)) {
+ mas_rewalk(mas, save_point);
+ goto retry;
+ }

- if (pivot >= mas->max)
+ if (mas_is_none(mas))
return NULL;

- mas->index = pivot + 1;
- mas->offset++;
+ mas->offset = 0;
+ mas->index = mas->min;
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
+ pivots = ma_pivots(node, type);
}

- pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
+ slots = ma_slots(node, type);
+ mas->last = mas_logical_pivot(mas, pivots, mas->offset, type);
entry = mas_slot(mas, slots, mas->offset);
- if (ma_dead_node(node))
- return NULL;
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;

- mas->last = pivot;
return entry;
}

-static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
-{
-retry:
- mas_set(mas, index);
- mas_state_walk(mas);
- if (mas_is_start(mas))
- goto retry;
-}
-
/*
* mas_next_entry() - Internal function to get the next entry.
* @mas: The maple state
@@ -4774,47 +4773,18 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
{
void *entry = NULL;
- struct maple_node *node;
- unsigned long last;
- enum maple_type mt;

if (mas->last >= limit)
return NULL;

- last = mas->last;
-retry:
- node = mas_mn(mas);
- mt = mte_node_type(mas->node);
- mas->offset++;
- if (unlikely(mas->offset >= mt_slots[mt])) {
- mas->offset = mt_slots[mt] - 1;
- goto next_node;
- }
-
- while (!mas_is_none(mas)) {
- entry = mas_next_nentry(mas, node, limit, mt);
- if (unlikely(ma_dead_node(node))) {
- mas_rewalk(mas, last);
- goto retry;
- }
-
- if (likely(entry))
- return entry;
-
- if (unlikely((mas->last >= limit)))
- return NULL;
+ entry = mas_next_slot_limit(mas, limit);
+ if (entry)
+ return entry;

-next_node:
- if (unlikely(mas_next_node(mas, node, limit))) {
- mas_rewalk(mas, last);
- goto retry;
- }
- mas->offset = 0;
- node = mas_mn(mas);
- mt = mte_node_type(mas->node);
- }
+ if (mas_is_none(mas))
+ return NULL;

- return NULL;
+ return mas_next_slot_limit(mas, limit);
}

/*
@@ -4845,10 +4815,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
slots = ma_slots(mn, mt);
pivots = ma_pivots(mn, mt);
count = ma_data_end(mn, mt, pivots, mas->max);
- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

offset = mas->offset - 1;
if (offset >= mt_slots[mt])
@@ -4861,10 +4829,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
pivot = pivots[offset];
}

- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

while (offset && !mas_slot(mas, slots, offset)) {
pivot = pivots[--offset];
@@ -4881,10 +4847,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,

min = mas_safe_min(mas, pivots, offset);
entry = mas_slot(mas, slots, offset);
- if (unlikely(ma_dead_node(mn))) {
- mas_rewalk(mas, index);
+ if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
goto retry;
- }

mas->offset = offset;
mas->last = pivot;
--
2.39.2

2023-04-25 16:18:44

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 29/34] maple_tree: Introduce mas_prev_slot() interface

Sometimes the user needs to move to the previous slot, regardless of
whether it is empty. Add an interface to go to the previous slot.

Since there cannot be two consecutive NULLs in the tree, mas_prev() can
be implemented by calling mas_prev_slot() at most twice. Change the
underlying implementation to use mas_prev_slot() to align the code.
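
An internal sketch of the reverse direction (same caveats: lock held,
state already positioned by a walk, helper name made up):

/* Visit every slot, including empty ranges, down to 'min'. */
static void walk_all_slots_rev(struct ma_state *mas, unsigned long min)
{
        void *entry;

        while (mas->index > min) {
                entry = mas_prev_slot(mas, min);
                if (mas_is_none(mas))
                        break;  /* walked past the start of the tree */
                pr_debug("range %lx-%lx -> %p\n",
                         mas->index, mas->last, entry);
        }
}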

Signed-off-by: Liam R. Howlett <[email protected]>
---
lib/maple_tree.c | 217 ++++++++++++++++++++---------------------------
1 file changed, 90 insertions(+), 127 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 7370d7c12fe3b..297d936321347 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4498,6 +4498,25 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)

}

+static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
+{
+retry:
+ mas_set(mas, index);
+ mas_state_walk(mas);
+ if (mas_is_start(mas))
+ goto retry;
+}
+
+static inline bool mas_rewalk_if_dead(struct ma_state *mas,
+ struct maple_node *node, const unsigned long index)
+{
+ if (unlikely(ma_dead_node(node))) {
+ mas_rewalk(mas, index);
+ return true;
+ }
+ return false;
+}
+
/*
* mas_prev_node() - Find the prev non-null entry at the same level in the
* tree. The prev value will be mas->node[mas->offset] or MAS_NONE.
@@ -4515,13 +4534,15 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
struct maple_node *node;
struct maple_enode *enode;
unsigned long *pivots;
+ unsigned long max;

- if (mas_is_none(mas))
- return 0;
+ node = mas_mn(mas);
+ max = mas->min - 1;
+ if (max < min)
+ goto no_entry;

level = 0;
do {
- node = mas_mn(mas);
if (ma_is_root(node))
goto no_entry;

@@ -4530,11 +4551,11 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
return 1;
offset = mas->offset;
level++;
+ node = mas_mn(mas);
} while (!offset);

offset--;
mt = mte_node_type(mas->node);
- node = mas_mn(mas);
slots = ma_slots(node, mt);
pivots = ma_pivots(node, mt);
if (unlikely(ma_dead_node(node)))
@@ -4543,12 +4564,10 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
mas->max = pivots[offset];
if (offset)
mas->min = pivots[offset - 1] + 1;
+
if (unlikely(ma_dead_node(node)))
return 1;

- if (mas->max < min)
- goto no_entry_min;
-
while (level > 1) {
level--;
enode = mas_slot(mas, slots, offset);
@@ -4569,9 +4588,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)

if (offset < mt_pivots[mt])
mas->max = pivots[offset];
-
- if (mas->max < min)
- goto no_entry;
}

mas->node = mas_slot(mas, slots, offset);
@@ -4584,10 +4600,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)

return 0;

-no_entry_min:
- mas->offset = offset;
- if (offset)
- mas->min = pivots[offset - 1] + 1;
no_entry:
if (unlikely(ma_dead_node(node)))
return 1;
@@ -4596,6 +4608,62 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
return 0;
}

+/*
+ * mas_prev_slot() - Get the entry in the previous slot
+ *
+ * @mas: The maple state
+ * @max: The minimum starting range
+ *
+ * Return: The entry in the previous slot which is possibly NULL
+ */
+void *mas_prev_slot(struct ma_state *mas, unsigned long min)
+{
+ void *entry;
+ void __rcu **slots;
+ unsigned long pivot;
+ enum maple_type type;
+ unsigned long *pivots;
+ struct maple_node *node;
+ unsigned long save_point = mas->index;
+
+retry:
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
+ pivots = ma_pivots(node, type);
+ pivot = mas_safe_min(mas, pivots, mas->offset);
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;
+
+ if (pivot <= min)
+ return NULL;
+
+ if (likely(mas->offset)) {
+ mas->offset--;
+ mas->last = mas->index - 1;
+ } else {
+ if (mas_prev_node(mas, min)) {
+ mas_rewalk(mas, save_point);
+ goto retry;
+ }
+
+ if (mas_is_none(mas))
+ return NULL;
+
+ mas->last = mas->max;
+ node = mas_mn(mas);
+ type = mte_node_type(mas->node);
+ pivots = ma_pivots(node, type);
+ }
+
+ mas->index = mas_safe_min(mas, pivots, mas->offset);
+ slots = ma_slots(node, type);
+ entry = mas_slot(mas, slots, mas->offset);
+ if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
+ goto retry;
+
+ return entry;
+}
+
/*
* mas_next_node() - Get the next node at the same level in the tree.
* @mas: The maple state
@@ -4680,25 +4748,6 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
return 0;
}

-static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
-{
-retry:
- mas_set(mas, index);
- mas_state_walk(mas);
- if (mas_is_start(mas))
- goto retry;
-}
-
-static inline bool mas_rewalk_if_dead(struct ma_state *mas,
- struct maple_node *node, const unsigned long index)
-{
- if (unlikely(ma_dead_node(node))) {
- mas_rewalk(mas, index);
- return true;
- }
- return false;
-}
-
/*
* mas_next_slot() - Get the entry in the next slot
*
@@ -4777,117 +4826,31 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
if (mas->last >= limit)
return NULL;

- entry = mas_next_slot_limit(mas, limit);
+ entry = mas_next_slot(mas, limit);
if (entry)
return entry;

if (mas_is_none(mas))
return NULL;

- return mas_next_slot_limit(mas, limit);
-}
-
-/*
- * mas_prev_nentry() - Get the previous node entry.
- * @mas: The maple state.
- * @limit: The lower limit to check for a value.
- *
- * Return: the entry, %NULL otherwise.
- */
-static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
- unsigned long index)
-{
- unsigned long pivot, min;
- unsigned char offset, count;
- struct maple_node *mn;
- enum maple_type mt;
- unsigned long *pivots;
- void __rcu **slots;
- void *entry;
-
-retry:
- if (!mas->offset)
- return NULL;
-
- mn = mas_mn(mas);
- mt = mte_node_type(mas->node);
- offset = mas->offset - 1;
- slots = ma_slots(mn, mt);
- pivots = ma_pivots(mn, mt);
- count = ma_data_end(mn, mt, pivots, mas->max);
- if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
- goto retry;
-
- offset = mas->offset - 1;
- if (offset >= mt_slots[mt])
- offset = mt_slots[mt] - 1;
-
- if (offset >= count) {
- pivot = mas->max;
- offset = count;
- } else {
- pivot = pivots[offset];
- }
-
- if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
- goto retry;
-
- while (offset && !mas_slot(mas, slots, offset)) {
- pivot = pivots[--offset];
- if (pivot >= limit)
- break;
- }
-
- /*
- * If the slot was null but we've shifted outside the limits, then set
- * the range to the last NULL.
- */
- if (unlikely((pivot < limit) && (offset < mas->offset)))
- pivot = pivots[++offset];
-
- min = mas_safe_min(mas, pivots, offset);
- entry = mas_slot(mas, slots, offset);
- if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
- goto retry;
-
- mas->offset = offset;
- mas->last = pivot;
- mas->index = min;
- return entry;
+ return mas_next_slot(mas, limit);
}

static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
{
void *entry;
- struct maple_enode *prev_enode;
- unsigned char prev_offset;

if (mas->index < min)
return NULL;

-retry:
- prev_enode = mas->node;
- prev_offset = mas->offset;
- while (likely(!mas_is_none(mas))) {
- entry = mas_prev_nentry(mas, min, mas->index);
-
- if (likely(entry))
- return entry;
-
- if (unlikely(mas->index <= min))
- return NULL;
-
- if (unlikely(mas_prev_node(mas, min))) {
- mas_rewalk(mas, mas->index);
- goto retry;
- }
+ entry = mas_prev_slot(mas, min);
+ if (entry)
+ return entry;

- mas->offset++;
- }
+ if (mas_is_none(mas))
+ return NULL;

- mas->node = prev_enode;
- mas->offset = prev_offset;
- return NULL;
+ return mas_prev_slot(mas, min);
}

/*
--
2.39.2

2023-04-25 16:19:14

by Wei Yang

[permalink] [raw]
Subject: Re: [PATCH 02/34] maple_tree: Clean up mas_parent_enum()

On Tue, Apr 25, 2023 at 10:09:23AM -0400, Liam R. Howlett wrote:
>From: "Liam R. Howlett" <[email protected]>
>
>mas_parent_enum() is a simple wrapper for mte_parent_enum() which is
>only called from that wrapper. Remove the wrapper and inline
>mte_parent_enum() into mas_parent_enum().
>
>At the same time, clean up the bit masking of the root pointer since it
>cannot be set by the time the bit masking occurs. Change the check on
>the root bit to a WARN_ON(), and fix the verification code to not
>trigger the WARN_ON() before checking if the node is root.
>
>Reported-by: Wei Yang <[email protected]>
>Signed-off-by: Liam R. Howlett <[email protected]>

Reviewed-by: Wei Yang <[email protected]>

--
Wei Yang
Help you, Help me

2023-04-25 16:47:25

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 33/34] maple_tree: Add testing for mas_{prev,next}_range()

Add testing for the new functions that iterate per range.

Signed-off-by: Liam R. Howlett <[email protected]>
---
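For context, a short usage sketch of the helpers exercised below.  This is
illustrative only: the tree, the locking context and the pr_debug() calls are
placeholders, and mas_contiguous() is the iterator macro added earlier in the
series.

    MA_STATE(mas, &tree, 0, 0);
    void *entry;

    mas_lock(&mas);
    mas_set(&mas, 200);
    /*
     * Walk the contiguous, occupied ranges starting at index 200.
     * mas_find_range(), which mas_contiguous() wraps, returns the entry
     * in the next range, possibly NULL for a gap, and sets
     * mas.index/mas.last to that range either way.
     */
    mas_contiguous(&mas, entry, ULONG_MAX) {
        pr_debug("entry %p spans %lu-%lu\n", entry, mas.index, mas.last);
    }
    /* mas.index/mas.last now describe the range that ended the walk. */
    mas_unlock(&mas);
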
lib/test_maple_tree.c | 148 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 148 insertions(+)

diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
index 7b2d19ad5934d..adbf59542951b 100644
--- a/lib/test_maple_tree.c
+++ b/lib/test_maple_tree.c
@@ -3356,6 +3356,150 @@ static noinline void __init check_state_handling(struct maple_tree *mt)
mas_unlock(&mas);
}

+static noinline void __init check_slot_iterators(struct maple_tree *mt)
+{
+ MA_STATE(mas, mt, 0, 0);
+ unsigned long i, index = 40;
+ unsigned char offset = 0;
+ void *test;
+
+ mt_set_non_kernel(99999);
+
+ mas_lock(&mas);
+ for (i = 0; i <= index; i++) {
+ unsigned long end = 5;
+ if (i > 20 && i < 35)
+ end = 9;
+ mas_set_range(&mas, i*10, i*10 + end);
+ mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
+ }
+
+ i = 21;
+ mas_set(&mas, i*10);
+ MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != NULL);
+ MAS_BUG_ON(&mas, mas.index != 206);
+ MAS_BUG_ON(&mas, mas.last != 209);
+
+ i--;
+ MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.index != 200);
+ MAS_BUG_ON(&mas, mas.last != 205);
+
+ i = 25;
+ mas_set(&mas, i*10);
+ MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.offset != 0);
+
+ /* Previous range is in another node */
+ i--;
+ MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.index != 240);
+ MAS_BUG_ON(&mas, mas.last != 249);
+
+ /* Shift back with mas_next */
+ i++;
+ MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.index != 250);
+ MAS_BUG_ON(&mas, mas.last != 259);
+
+ i = 33;
+ mas_set(&mas, i*10);
+ MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.index != 330);
+ MAS_BUG_ON(&mas, mas.last != 339);
+
+ /* Next range is in another node */
+ i++;
+ MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
+ MAS_BUG_ON(&mas, mas.offset != 0);
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+
+ /* Next out of range */
+ i++;
+ MAS_BUG_ON(&mas, mas_next_range(&mas, i*10 - 1) != NULL);
+ /* maple state does not move */
+ MAS_BUG_ON(&mas, mas.offset != 0);
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+
+ /* Prev out of range */
+ i--;
+ MAS_BUG_ON(&mas, mas_prev_range(&mas, i*10 + 1) != NULL);
+ /* maple state does not move */
+ MAS_BUG_ON(&mas, mas.offset != 0);
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+
+ mas_set(&mas, 210);
+ for (i = 210; i<= 350; i += 10) {
+ void *entry = mas_find_range(&mas, ULONG_MAX);
+
+ MAS_BUG_ON(&mas, entry != xa_mk_value(i/10));
+ }
+
+ mas_set(&mas, 0);
+ mas_contiguous(&mas, test, ULONG_MAX) {
+ MAS_BUG_ON(&mas, test != xa_mk_value(0));
+ MAS_BUG_ON(&mas, mas.index != 0);
+ MAS_BUG_ON(&mas, mas.last != 5);
+ }
+ MAS_BUG_ON(&mas, test != NULL);
+ MAS_BUG_ON(&mas, mas.index != 6);
+ MAS_BUG_ON(&mas, mas.last != 9);
+
+ mas_set(&mas, 6);
+ mas_contiguous(&mas, test, ULONG_MAX) {
+ MAS_BUG_ON(&mas, test != xa_mk_value(1));
+ MAS_BUG_ON(&mas, mas.index != 10);
+ MAS_BUG_ON(&mas, mas.last != 15);
+ }
+ MAS_BUG_ON(&mas, test != NULL);
+ MAS_BUG_ON(&mas, mas.index != 16);
+ MAS_BUG_ON(&mas, mas.last != 19);
+
+ i = 210;
+ mas_set(&mas, i);
+ mas_contiguous(&mas, test, 340) {
+ MAS_BUG_ON(&mas, test != xa_mk_value(i/10));
+ MAS_BUG_ON(&mas, mas.index != i);
+ MAS_BUG_ON(&mas, mas.last != i+9);
+ i+=10;
+ offset = mas.offset;
+ }
+ /* Hit the limit, iterator is at the limit. */
+ MAS_BUG_ON(&mas, offset != mas.offset);
+ MAS_BUG_ON(&mas, test != NULL);
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+ test = mas_find_range(&mas, ULONG_MAX);
+ MAS_BUG_ON(&mas, test != xa_mk_value(35));
+ MAS_BUG_ON(&mas, mas.index != 350);
+ MAS_BUG_ON(&mas, mas.last != 355);
+
+
+ test = mas_find_range_rev(&mas, 0);
+ MAS_BUG_ON(&mas, test != xa_mk_value(34));
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+ mas_set(&mas, 345);
+ test = mas_find_range_rev(&mas, 0);
+ MAS_BUG_ON(&mas, test != xa_mk_value(34));
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+
+ offset = mas.offset;
+ test = mas_find_range_rev(&mas, 340);
+ MAS_BUG_ON(&mas, offset != mas.offset);
+ MAS_BUG_ON(&mas, test != NULL);
+ MAS_BUG_ON(&mas, mas.index != 340);
+ MAS_BUG_ON(&mas, mas.last != 349);
+
+ mas_unlock(&mas);
+ mt_set_non_kernel(0);
+}
+
static DEFINE_MTREE(tree);
static int __init maple_tree_seed(void)
{
@@ -3621,6 +3765,10 @@ static int __init maple_tree_seed(void)
check_state_handling(&tree);
mtree_destroy(&tree);

+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+ check_slot_iterators(&tree);
+ mtree_destroy(&tree);
+
#if defined(BENCH)
skip:
#endif
--
2.39.2

2023-04-25 16:58:20

by Liam R. Howlett

[permalink] [raw]
Subject: [PATCH 34/34] mm: Add vma_iter_{next,prev}_range() to vma iterator

Signed-off-by: Liam R. Howlett <[email protected]>
---
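For reviewers, a brief illustrative sketch of how the new wrappers can be
used.  The helper below is hypothetical and not a caller from this series;
only vma_iter_next_range(), vma_iter_addr() and the maple state fields it
touches come from the posted code.

    /*
     * Hypothetical helper: report whether the range following the current
     * position is a gap.  vma_iter_next_range() moves to the next range
     * whether or not a VMA is present there, returning NULL for an empty
     * range.
     */
    static bool vma_iter_next_is_gap(struct vma_iterator *vmi)
    {
        struct vm_area_struct *next = vma_iter_next_range(vmi);

        if (next)
            return false;

        pr_debug("gap %lu-%lu\n", vma_iter_addr(vmi), vmi->mas.last);
        return true;
    }
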
include/linux/mm.h | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 37554b08bb28f..2cb6e84ed6113 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -877,11 +877,24 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
return mas_find(&vmi->mas, ULONG_MAX);
}

+static inline
+struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
+{
+ return mas_next_range(&vmi->mas, ULONG_MAX);
+}
+
+
static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
{
return mas_prev(&vmi->mas, 0);
}

+static inline
+struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
+{
+ return mas_prev_range(&vmi->mas, 0);
+}
+
static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
{
return vmi->mas.index;
--
2.39.2

2023-04-26 01:25:50

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface

Hi Liam,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on linus/master v6.3 next-20230425]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230425140955.3834476-28-Liam.Howlett%40oracle.com
patch subject: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface
config: x86_64-randconfig-a003-20230424 (https://download.01.org/0day-ci/archive/20230426/[email protected]/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/0e736b8a8054e7f0b216320d2458a00b54fcd2b0
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
git checkout 0e736b8a8054e7f0b216320d2458a00b54fcd2b0
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> lib/maple_tree.c:4710:7: warning: no previous prototype for function 'mas_next_slot' [-Wmissing-prototypes]
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
lib/maple_tree.c:4710:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
static
lib/maple_tree.c:4780:10: error: implicit declaration of function 'mas_next_slot_limit' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
entry = mas_next_slot_limit(mas, limit);
^
lib/maple_tree.c:4780:10: note: did you mean 'mas_next_slot'?
lib/maple_tree.c:4710:7: note: 'mas_next_slot' declared here
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
lib/maple_tree.c:4780:8: warning: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
entry = mas_next_slot_limit(mas, limit);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/maple_tree.c:4787:9: warning: incompatible integer to pointer conversion returning 'int' from a function with result type 'void *' [-Wint-conversion]
return mas_next_slot_limit(mas, limit);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 warnings and 1 error generated.


vim +/mas_next_slot +4710 lib/maple_tree.c

4701
4702 /*
4703 * mas_next_slot() - Get the entry in the next slot
4704 *
4705 * @mas: The maple state
4706 * @max: The maximum starting range
4707 *
4708 * Return: The entry in the next slot which is possibly NULL
4709 */
> 4710 void *mas_next_slot(struct ma_state *mas, unsigned long max)
4711 {
4712 void __rcu **slots;
4713 unsigned long *pivots;
4714 unsigned long pivot;
4715 enum maple_type type;
4716 struct maple_node *node;
4717 unsigned char data_end;
4718 unsigned long save_point = mas->last;
4719 void *entry;
4720
4721 retry:
4722 node = mas_mn(mas);
4723 type = mte_node_type(mas->node);
4724 pivots = ma_pivots(node, type);
4725 data_end = ma_data_end(node, type, pivots, mas->max);
4726 pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
4727 if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
4728 goto retry;
4729
4730 if (pivot >= max)
4731 return NULL;
4732
4733 if (likely(data_end > mas->offset)) {
4734 mas->offset++;
4735 mas->index = mas->last + 1;
4736 } else {
4737 if (mas_next_node(mas, node, max)) {
4738 mas_rewalk(mas, save_point);
4739 goto retry;
4740 }
4741
4742 if (mas_is_none(mas))
4743 return NULL;
4744
4745 mas->offset = 0;
4746 mas->index = mas->min;
4747 node = mas_mn(mas);
4748 type = mte_node_type(mas->node);
4749 pivots = ma_pivots(node, type);
4750 }
4751
4752 slots = ma_slots(node, type);
4753 mas->last = mas_logical_pivot(mas, pivots, mas->offset, type);
4754 entry = mas_slot(mas, slots, mas->offset);
4755 if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
4756 goto retry;
4757
4758 return entry;
4759 }
4760

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-04-26 01:49:37

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 33/34] maple_tree: Add testing for mas_{prev,next}_range()

Hi Liam,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.3 next-20230425]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230425140955.3834476-34-Liam.Howlett%40oracle.com
patch subject: [PATCH 33/34] maple_tree: Add testing for mas_{prev,next}_range()
config: hexagon-randconfig-r045-20230424 (https://download.01.org/0day-ci/archive/20230426/[email protected]/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project 437b7602e4a998220871de78afcb020b9c14a661)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/571139f33a7ede9db66c7892c40e88ed66a32bc5
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
git checkout 571139f33a7ede9db66c7892c40e88ed66a32bc5
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

>> lib/test_maple_tree.c:3437:17: error: call to undeclared function 'mas_find_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
void *entry = mas_find_range(&mas, ULONG_MAX);
^
>> lib/test_maple_tree.c:3437:9: error: incompatible integer to pointer conversion initializing 'void *' with an expression of type 'int' [-Wint-conversion]
void *entry = mas_find_range(&mas, ULONG_MAX);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3443:2: error: call to undeclared function 'mas_find_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
mas_contiguous(&mas, test, ULONG_MAX) {
^
include/linux/maple_tree.h:545:22: note: expanded from macro 'mas_contiguous'
while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
^
>> lib/test_maple_tree.c:3443:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
mas_contiguous(&mas, test, ULONG_MAX) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3453:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
mas_contiguous(&mas, test, ULONG_MAX) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3464:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
mas_contiguous(&mas, test, 340) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3476:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
test = mas_find_range(&mas, ULONG_MAX);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> lib/test_maple_tree.c:3482:9: error: call to undeclared function 'mas_find_range_rev'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
test = mas_find_range_rev(&mas, 0);
^
lib/test_maple_tree.c:3482:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
test = mas_find_range_rev(&mas, 0);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3487:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
test = mas_find_range_rev(&mas, 0);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_maple_tree.c:3493:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
test = mas_find_range_rev(&mas, 340);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
11 errors generated.


vim +/mas_find_range +3437 lib/test_maple_tree.c

3358
3359 static noinline void __init check_slot_iterators(struct maple_tree *mt)
3360 {
3361 MA_STATE(mas, mt, 0, 0);
3362 unsigned long i, index = 40;
3363 unsigned char offset = 0;
3364 void *test;
3365
3366 mt_set_non_kernel(99999);
3367
3368 mas_lock(&mas);
3369 for (i = 0; i <= index; i++) {
3370 unsigned long end = 5;
3371 if (i > 20 && i < 35)
3372 end = 9;
3373 mas_set_range(&mas, i*10, i*10 + end);
3374 mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
3375 }
3376
3377 i = 21;
3378 mas_set(&mas, i*10);
3379 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
3380 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != NULL);
3381 MAS_BUG_ON(&mas, mas.index != 206);
3382 MAS_BUG_ON(&mas, mas.last != 209);
3383
3384 i--;
3385 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
3386 MAS_BUG_ON(&mas, mas.index != 200);
3387 MAS_BUG_ON(&mas, mas.last != 205);
3388
3389 i = 25;
3390 mas_set(&mas, i*10);
3391 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
3392 MAS_BUG_ON(&mas, mas.offset != 0);
3393
3394 /* Previous range is in another node */
3395 i--;
3396 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
3397 MAS_BUG_ON(&mas, mas.index != 240);
3398 MAS_BUG_ON(&mas, mas.last != 249);
3399
3400 /* Shift back with mas_next */
3401 i++;
3402 MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
3403 MAS_BUG_ON(&mas, mas.index != 250);
3404 MAS_BUG_ON(&mas, mas.last != 259);
3405
3406 i = 33;
3407 mas_set(&mas, i*10);
3408 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
3409 MAS_BUG_ON(&mas, mas.index != 330);
3410 MAS_BUG_ON(&mas, mas.last != 339);
3411
3412 /* Next range is in another node */
3413 i++;
3414 MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
3415 MAS_BUG_ON(&mas, mas.offset != 0);
3416 MAS_BUG_ON(&mas, mas.index != 340);
3417 MAS_BUG_ON(&mas, mas.last != 349);
3418
3419 /* Next out of range */
3420 i++;
3421 MAS_BUG_ON(&mas, mas_next_range(&mas, i*10 - 1) != NULL);
3422 /* maple state does not move */
3423 MAS_BUG_ON(&mas, mas.offset != 0);
3424 MAS_BUG_ON(&mas, mas.index != 340);
3425 MAS_BUG_ON(&mas, mas.last != 349);
3426
3427 /* Prev out of range */
3428 i--;
3429 MAS_BUG_ON(&mas, mas_prev_range(&mas, i*10 + 1) != NULL);
3430 /* maple state does not move */
3431 MAS_BUG_ON(&mas, mas.offset != 0);
3432 MAS_BUG_ON(&mas, mas.index != 340);
3433 MAS_BUG_ON(&mas, mas.last != 349);
3434
3435 mas_set(&mas, 210);
3436 for (i = 210; i<= 350; i += 10) {
> 3437 void *entry = mas_find_range(&mas, ULONG_MAX);
3438
3439 MAS_BUG_ON(&mas, entry != xa_mk_value(i/10));
3440 }
3441
3442 mas_set(&mas, 0);
> 3443 mas_contiguous(&mas, test, ULONG_MAX) {
3444 MAS_BUG_ON(&mas, test != xa_mk_value(0));
3445 MAS_BUG_ON(&mas, mas.index != 0);
3446 MAS_BUG_ON(&mas, mas.last != 5);
3447 }
3448 MAS_BUG_ON(&mas, test != NULL);
3449 MAS_BUG_ON(&mas, mas.index != 6);
3450 MAS_BUG_ON(&mas, mas.last != 9);
3451
3452 mas_set(&mas, 6);
3453 mas_contiguous(&mas, test, ULONG_MAX) {
3454 MAS_BUG_ON(&mas, test != xa_mk_value(1));
3455 MAS_BUG_ON(&mas, mas.index != 10);
3456 MAS_BUG_ON(&mas, mas.last != 15);
3457 }
3458 MAS_BUG_ON(&mas, test != NULL);
3459 MAS_BUG_ON(&mas, mas.index != 16);
3460 MAS_BUG_ON(&mas, mas.last != 19);
3461
3462 i = 210;
3463 mas_set(&mas, i);
3464 mas_contiguous(&mas, test, 340) {
3465 MAS_BUG_ON(&mas, test != xa_mk_value(i/10));
3466 MAS_BUG_ON(&mas, mas.index != i);
3467 MAS_BUG_ON(&mas, mas.last != i+9);
3468 i+=10;
3469 offset = mas.offset;
3470 }
3471 /* Hit the limit, iterator is at the limit. */
3472 MAS_BUG_ON(&mas, offset != mas.offset);
3473 MAS_BUG_ON(&mas, test != NULL);
3474 MAS_BUG_ON(&mas, mas.index != 340);
3475 MAS_BUG_ON(&mas, mas.last != 349);
3476 test = mas_find_range(&mas, ULONG_MAX);
3477 MAS_BUG_ON(&mas, test != xa_mk_value(35));
3478 MAS_BUG_ON(&mas, mas.index != 350);
3479 MAS_BUG_ON(&mas, mas.last != 355);
3480
3481
> 3482 test = mas_find_range_rev(&mas, 0);
3483 MAS_BUG_ON(&mas, test != xa_mk_value(34));
3484 MAS_BUG_ON(&mas, mas.index != 340);
3485 MAS_BUG_ON(&mas, mas.last != 349);
3486 mas_set(&mas, 345);
3487 test = mas_find_range_rev(&mas, 0);
3488 MAS_BUG_ON(&mas, test != xa_mk_value(34));
3489 MAS_BUG_ON(&mas, mas.index != 340);
3490 MAS_BUG_ON(&mas, mas.last != 349);
3491
3492 offset = mas.offset;
3493 test = mas_find_range_rev(&mas, 340);
3494 MAS_BUG_ON(&mas, offset != mas.offset);
3495 MAS_BUG_ON(&mas, test != NULL);
3496 MAS_BUG_ON(&mas, mas.index != 340);
3497 MAS_BUG_ON(&mas, mas.last != 349);
3498
3499 mas_unlock(&mas);
3500 mt_set_non_kernel(0);
3501 }
3502

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-04-26 03:32:01

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 16/34] maple_tree: Make test code work without debug enabled

Hi Liam,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.3 next-20230425]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230425140955.3834476-17-Liam.Howlett%40oracle.com
patch subject: [PATCH 16/34] maple_tree: Make test code work without debug enabled
config: openrisc-randconfig-r015-20230423 (https://download.01.org/0day-ci/archive/20230426/[email protected]/config)
compiler: or1k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/a51199306b9f48db55117d3357e7a19c845c089c
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
git checkout a51199306b9f48db55117d3357e7a19c845c089c
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=openrisc olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=openrisc SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

lib/test_maple_tree.c: In function 'check_ranges':
>> lib/test_maple_tree.c:1078:9: error: implicit declaration of function 'mt_validate' [-Werror=implicit-function-declaration]
1078 | mt_validate(mt);
| ^~~~~~~~~~~
lib/test_maple_tree.c: In function 'check_dup':
>> lib/test_maple_tree.c:2498:9: error: implicit declaration of function 'mt_cache_shrink' [-Werror=implicit-function-declaration]
2498 | mt_cache_shrink();
| ^~~~~~~~~~~~~~~
In file included from include/linux/kernel.h:30,
from include/linux/maple_tree.h:11,
from lib/test_maple_tree.c:10:
lib/test_maple_tree.c: In function 'maple_tree_seed':
>> lib/test_maple_tree.c:2985:38: error: 'maple_tree_tests_passed' undeclared (first use in this function)
2985 | atomic_read(&maple_tree_tests_passed),
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/printk.h:427:33: note: in definition of macro 'printk_index_wrap'
427 | _p_func(_fmt, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/printk.h:528:9: note: in expansion of macro 'printk'
528 | printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~
lib/test_maple_tree.c:2984:9: note: in expansion of macro 'pr_info'
2984 | pr_info("maple_tree: %u of %u tests passed\n",
| ^~~~~~~
lib/test_maple_tree.c:2985:38: note: each undeclared identifier is reported only once for each function it appears in
2985 | atomic_read(&maple_tree_tests_passed),
| ^~~~~~~~~~~~~~~~~~~~~~~
include/linux/printk.h:427:33: note: in definition of macro 'printk_index_wrap'
427 | _p_func(_fmt, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/printk.h:528:9: note: in expansion of macro 'printk'
528 | printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~
lib/test_maple_tree.c:2984:9: note: in expansion of macro 'pr_info'
2984 | pr_info("maple_tree: %u of %u tests passed\n",
| ^~~~~~~
>> lib/test_maple_tree.c:2986:38: error: 'maple_tree_tests_run' undeclared (first use in this function)
2986 | atomic_read(&maple_tree_tests_run));
| ^~~~~~~~~~~~~~~~~~~~
include/linux/printk.h:427:33: note: in definition of macro 'printk_index_wrap'
427 | _p_func(_fmt, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/printk.h:528:9: note: in expansion of macro 'printk'
528 | printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~
lib/test_maple_tree.c:2984:9: note: in expansion of macro 'pr_info'
2984 | pr_info("maple_tree: %u of %u tests passed\n",
| ^~~~~~~
cc1: some warnings being treated as errors
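
One possible shape of a fix, offered purely as an assumption here (the posted
follow-up may differ): give the debug-only helpers no-op fallbacks when
CONFIG_DEBUG_MAPLE_TREE is not set, and define the test-result counters
unconditionally.  A minimal sketch for the helpers:

    /* Hypothetical sketch only: stubs for a CONFIG_DEBUG_MAPLE_TREE=n build */
    #ifndef CONFIG_DEBUG_MAPLE_TREE
    static inline void mt_validate(struct maple_tree *mt) { }
    static inline void mt_cache_shrink(void) { }
    #endif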


vim +/mt_validate +1078 lib/test_maple_tree.c

e15e06a8392321 Liam R. Howlett 2022-09-06 947
120b116208a087 Liam Howlett 2022-10-28 948 static noinline void check_ranges(struct maple_tree *mt)
e15e06a8392321 Liam R. Howlett 2022-09-06 949 {
120b116208a087 Liam Howlett 2022-10-28 950 int i, val, val2;
120b116208a087 Liam Howlett 2022-10-28 951 unsigned long r[] = {
120b116208a087 Liam Howlett 2022-10-28 952 10, 15,
120b116208a087 Liam Howlett 2022-10-28 953 20, 25,
120b116208a087 Liam Howlett 2022-10-28 954 17, 22, /* Overlaps previous range. */
120b116208a087 Liam Howlett 2022-10-28 955 9, 1000, /* Huge. */
120b116208a087 Liam Howlett 2022-10-28 956 100, 200,
120b116208a087 Liam Howlett 2022-10-28 957 45, 168,
120b116208a087 Liam Howlett 2022-10-28 958 118, 128,
e15e06a8392321 Liam R. Howlett 2022-09-06 959 };
e15e06a8392321 Liam R. Howlett 2022-09-06 960
120b116208a087 Liam Howlett 2022-10-28 961 MT_BUG_ON(mt, !mtree_empty(mt));
120b116208a087 Liam Howlett 2022-10-28 962 check_insert_range(mt, r[0], r[1], xa_mk_value(r[0]), 0);
120b116208a087 Liam Howlett 2022-10-28 963 check_insert_range(mt, r[2], r[3], xa_mk_value(r[2]), 0);
120b116208a087 Liam Howlett 2022-10-28 964 check_insert_range(mt, r[4], r[5], xa_mk_value(r[4]), -EEXIST);
120b116208a087 Liam Howlett 2022-10-28 965 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 966 /* Store */
120b116208a087 Liam Howlett 2022-10-28 967 check_store_range(mt, r[4], r[5], xa_mk_value(r[4]), 0);
120b116208a087 Liam Howlett 2022-10-28 968 check_store_range(mt, r[6], r[7], xa_mk_value(r[6]), 0);
120b116208a087 Liam Howlett 2022-10-28 969 check_store_range(mt, r[8], r[9], xa_mk_value(r[8]), 0);
120b116208a087 Liam Howlett 2022-10-28 970 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 971 mtree_destroy(mt);
120b116208a087 Liam Howlett 2022-10-28 972 MT_BUG_ON(mt, mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 973
120b116208a087 Liam Howlett 2022-10-28 974 check_seq(mt, 50, false);
120b116208a087 Liam Howlett 2022-10-28 975 mt_set_non_kernel(4);
120b116208a087 Liam Howlett 2022-10-28 976 check_store_range(mt, 5, 47, xa_mk_value(47), 0);
120b116208a087 Liam Howlett 2022-10-28 977 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 978 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 979
120b116208a087 Liam Howlett 2022-10-28 980 /* Create tree of 1-100 */
120b116208a087 Liam Howlett 2022-10-28 981 check_seq(mt, 100, false);
120b116208a087 Liam Howlett 2022-10-28 982 /* Store 45-168 */
120b116208a087 Liam Howlett 2022-10-28 983 mt_set_non_kernel(10);
120b116208a087 Liam Howlett 2022-10-28 984 check_store_range(mt, r[10], r[11], xa_mk_value(r[10]), 0);
120b116208a087 Liam Howlett 2022-10-28 985 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 986 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 987
120b116208a087 Liam Howlett 2022-10-28 988 /* Create tree of 1-200 */
120b116208a087 Liam Howlett 2022-10-28 989 check_seq(mt, 200, false);
120b116208a087 Liam Howlett 2022-10-28 990 /* Store 45-168 */
120b116208a087 Liam Howlett 2022-10-28 991 check_store_range(mt, r[10], r[11], xa_mk_value(r[10]), 0);
120b116208a087 Liam Howlett 2022-10-28 992 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 993 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 994
120b116208a087 Liam Howlett 2022-10-28 995 check_seq(mt, 30, false);
120b116208a087 Liam Howlett 2022-10-28 996 check_store_range(mt, 6, 18, xa_mk_value(6), 0);
120b116208a087 Liam Howlett 2022-10-28 997 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 998 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 999
120b116208a087 Liam Howlett 2022-10-28 1000 /* Overwrite across multiple levels. */
120b116208a087 Liam Howlett 2022-10-28 1001 /* Create tree of 1-400 */
120b116208a087 Liam Howlett 2022-10-28 1002 check_seq(mt, 400, false);
120b116208a087 Liam Howlett 2022-10-28 1003 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1004 /* Store 118-128 */
120b116208a087 Liam Howlett 2022-10-28 1005 check_store_range(mt, r[12], r[13], xa_mk_value(r[12]), 0);
120b116208a087 Liam Howlett 2022-10-28 1006 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1007 mtree_test_erase(mt, 140);
120b116208a087 Liam Howlett 2022-10-28 1008 mtree_test_erase(mt, 141);
120b116208a087 Liam Howlett 2022-10-28 1009 mtree_test_erase(mt, 142);
120b116208a087 Liam Howlett 2022-10-28 1010 mtree_test_erase(mt, 143);
120b116208a087 Liam Howlett 2022-10-28 1011 mtree_test_erase(mt, 130);
120b116208a087 Liam Howlett 2022-10-28 1012 mtree_test_erase(mt, 131);
120b116208a087 Liam Howlett 2022-10-28 1013 mtree_test_erase(mt, 132);
120b116208a087 Liam Howlett 2022-10-28 1014 mtree_test_erase(mt, 133);
120b116208a087 Liam Howlett 2022-10-28 1015 mtree_test_erase(mt, 134);
120b116208a087 Liam Howlett 2022-10-28 1016 mtree_test_erase(mt, 135);
120b116208a087 Liam Howlett 2022-10-28 1017 check_load(mt, r[12], xa_mk_value(r[12]));
120b116208a087 Liam Howlett 2022-10-28 1018 check_load(mt, r[13], xa_mk_value(r[12]));
120b116208a087 Liam Howlett 2022-10-28 1019 check_load(mt, r[13] - 1, xa_mk_value(r[12]));
120b116208a087 Liam Howlett 2022-10-28 1020 check_load(mt, r[13] + 1, xa_mk_value(r[13] + 1));
120b116208a087 Liam Howlett 2022-10-28 1021 check_load(mt, 135, NULL);
120b116208a087 Liam Howlett 2022-10-28 1022 check_load(mt, 140, NULL);
120b116208a087 Liam Howlett 2022-10-28 1023 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1024 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 1025 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1026
e15e06a8392321 Liam R. Howlett 2022-09-06 1027
e15e06a8392321 Liam R. Howlett 2022-09-06 1028
120b116208a087 Liam Howlett 2022-10-28 1029 /* Overwrite multiple levels at the end of the tree (slot 7) */
120b116208a087 Liam Howlett 2022-10-28 1030 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1031 check_seq(mt, 400, false);
120b116208a087 Liam Howlett 2022-10-28 1032 check_store_range(mt, 353, 361, xa_mk_value(353), 0);
120b116208a087 Liam Howlett 2022-10-28 1033 check_store_range(mt, 347, 352, xa_mk_value(347), 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1034
120b116208a087 Liam Howlett 2022-10-28 1035 check_load(mt, 346, xa_mk_value(346));
120b116208a087 Liam Howlett 2022-10-28 1036 for (i = 347; i <= 352; i++)
120b116208a087 Liam Howlett 2022-10-28 1037 check_load(mt, i, xa_mk_value(347));
120b116208a087 Liam Howlett 2022-10-28 1038 for (i = 353; i <= 361; i++)
120b116208a087 Liam Howlett 2022-10-28 1039 check_load(mt, i, xa_mk_value(353));
120b116208a087 Liam Howlett 2022-10-28 1040 check_load(mt, 362, xa_mk_value(362));
120b116208a087 Liam Howlett 2022-10-28 1041 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1042 MT_BUG_ON(mt, !mt_height(mt));
120b116208a087 Liam Howlett 2022-10-28 1043 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1044
120b116208a087 Liam Howlett 2022-10-28 1045 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1046 check_seq(mt, 400, false);
120b116208a087 Liam Howlett 2022-10-28 1047 check_store_range(mt, 352, 364, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1048 check_store_range(mt, 351, 363, xa_mk_value(352), 0);
120b116208a087 Liam Howlett 2022-10-28 1049 check_load(mt, 350, xa_mk_value(350));
120b116208a087 Liam Howlett 2022-10-28 1050 check_load(mt, 351, xa_mk_value(352));
120b116208a087 Liam Howlett 2022-10-28 1051 for (i = 352; i <= 363; i++)
120b116208a087 Liam Howlett 2022-10-28 1052 check_load(mt, i, xa_mk_value(352));
120b116208a087 Liam Howlett 2022-10-28 1053 check_load(mt, 364, NULL);
120b116208a087 Liam Howlett 2022-10-28 1054 check_load(mt, 365, xa_mk_value(365));
e15e06a8392321 Liam R. Howlett 2022-09-06 1055 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1056 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1057 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1058
120b116208a087 Liam Howlett 2022-10-28 1059 mt_set_non_kernel(5);
120b116208a087 Liam Howlett 2022-10-28 1060 check_seq(mt, 400, false);
120b116208a087 Liam Howlett 2022-10-28 1061 check_store_range(mt, 352, 364, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1062 check_store_range(mt, 351, 364, xa_mk_value(352), 0);
120b116208a087 Liam Howlett 2022-10-28 1063 check_load(mt, 350, xa_mk_value(350));
120b116208a087 Liam Howlett 2022-10-28 1064 check_load(mt, 351, xa_mk_value(352));
120b116208a087 Liam Howlett 2022-10-28 1065 for (i = 352; i <= 364; i++)
120b116208a087 Liam Howlett 2022-10-28 1066 check_load(mt, i, xa_mk_value(352));
120b116208a087 Liam Howlett 2022-10-28 1067 check_load(mt, 365, xa_mk_value(365));
120b116208a087 Liam Howlett 2022-10-28 1068 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1069 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1070 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1071
120b116208a087 Liam Howlett 2022-10-28 1072
120b116208a087 Liam Howlett 2022-10-28 1073 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1074 check_seq(mt, 400, false);
120b116208a087 Liam Howlett 2022-10-28 1075 check_store_range(mt, 362, 367, xa_mk_value(362), 0);
120b116208a087 Liam Howlett 2022-10-28 1076 check_store_range(mt, 353, 361, xa_mk_value(353), 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1077 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 @1078 mt_validate(mt);
120b116208a087 Liam Howlett 2022-10-28 1079 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1080 mtree_destroy(mt);
120b116208a087 Liam Howlett 2022-10-28 1081 /*
120b116208a087 Liam Howlett 2022-10-28 1082 * Interesting cases:
120b116208a087 Liam Howlett 2022-10-28 1083 * 1. Overwrite the end of a node and end in the first entry of the next
120b116208a087 Liam Howlett 2022-10-28 1084 * node.
120b116208a087 Liam Howlett 2022-10-28 1085 * 2. Split a single range
120b116208a087 Liam Howlett 2022-10-28 1086 * 3. Overwrite the start of a range
120b116208a087 Liam Howlett 2022-10-28 1087 * 4. Overwrite the end of a range
120b116208a087 Liam Howlett 2022-10-28 1088 * 5. Overwrite the entire range
120b116208a087 Liam Howlett 2022-10-28 1089 * 6. Overwrite a range that causes multiple parent nodes to be
120b116208a087 Liam Howlett 2022-10-28 1090 * combined
120b116208a087 Liam Howlett 2022-10-28 1091 * 7. Overwrite a range that causes multiple parent nodes and part of
120b116208a087 Liam Howlett 2022-10-28 1092 * root to be combined
120b116208a087 Liam Howlett 2022-10-28 1093 * 8. Overwrite the whole tree
120b116208a087 Liam Howlett 2022-10-28 1094 * 9. Try to overwrite the zero entry of an alloc tree.
120b116208a087 Liam Howlett 2022-10-28 1095 * 10. Write a range larger than a nodes current pivot
120b116208a087 Liam Howlett 2022-10-28 1096 */
e15e06a8392321 Liam R. Howlett 2022-09-06 1097
120b116208a087 Liam Howlett 2022-10-28 1098 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1099 for (i = 0; i <= 500; i++) {
120b116208a087 Liam Howlett 2022-10-28 1100 val = i*5;
120b116208a087 Liam Howlett 2022-10-28 1101 val2 = (i+1)*5;
120b116208a087 Liam Howlett 2022-10-28 1102 check_store_range(mt, val, val2, xa_mk_value(val), 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1103 }
120b116208a087 Liam Howlett 2022-10-28 1104 check_store_range(mt, 2400, 2400, xa_mk_value(2400), 0);
120b116208a087 Liam Howlett 2022-10-28 1105 check_store_range(mt, 2411, 2411, xa_mk_value(2411), 0);
120b116208a087 Liam Howlett 2022-10-28 1106 check_store_range(mt, 2412, 2412, xa_mk_value(2412), 0);
120b116208a087 Liam Howlett 2022-10-28 1107 check_store_range(mt, 2396, 2400, xa_mk_value(4052020), 0);
120b116208a087 Liam Howlett 2022-10-28 1108 check_store_range(mt, 2402, 2402, xa_mk_value(2402), 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1109 mtree_destroy(mt);
120b116208a087 Liam Howlett 2022-10-28 1110 mt_set_non_kernel(0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1111
120b116208a087 Liam Howlett 2022-10-28 1112 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1113 for (i = 0; i <= 500; i++) {
120b116208a087 Liam Howlett 2022-10-28 1114 val = i*5;
120b116208a087 Liam Howlett 2022-10-28 1115 val2 = (i+1)*5;
120b116208a087 Liam Howlett 2022-10-28 1116 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1117 }
120b116208a087 Liam Howlett 2022-10-28 1118 check_store_range(mt, 2422, 2422, xa_mk_value(2422), 0);
120b116208a087 Liam Howlett 2022-10-28 1119 check_store_range(mt, 2424, 2424, xa_mk_value(2424), 0);
120b116208a087 Liam Howlett 2022-10-28 1120 check_store_range(mt, 2425, 2425, xa_mk_value(2), 0);
120b116208a087 Liam Howlett 2022-10-28 1121 check_store_range(mt, 2460, 2470, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1122 check_store_range(mt, 2435, 2460, xa_mk_value(2435), 0);
120b116208a087 Liam Howlett 2022-10-28 1123 check_store_range(mt, 2461, 2470, xa_mk_value(2461), 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1124 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1125 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1126 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1127
120b116208a087 Liam Howlett 2022-10-28 1128 /* Test rebalance gaps */
e15e06a8392321 Liam R. Howlett 2022-09-06 1129 mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
120b116208a087 Liam Howlett 2022-10-28 1130 mt_set_non_kernel(50);
120b116208a087 Liam Howlett 2022-10-28 1131 for (i = 0; i <= 50; i++) {
120b116208a087 Liam Howlett 2022-10-28 1132 val = i*10;
120b116208a087 Liam Howlett 2022-10-28 1133 val2 = (i+1)*10;
120b116208a087 Liam Howlett 2022-10-28 1134 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1135 }
120b116208a087 Liam Howlett 2022-10-28 1136 check_store_range(mt, 161, 161, xa_mk_value(161), 0);
120b116208a087 Liam Howlett 2022-10-28 1137 check_store_range(mt, 162, 162, xa_mk_value(162), 0);
120b116208a087 Liam Howlett 2022-10-28 1138 check_store_range(mt, 163, 163, xa_mk_value(163), 0);
120b116208a087 Liam Howlett 2022-10-28 1139 check_store_range(mt, 240, 249, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1140 mtree_erase(mt, 200);
120b116208a087 Liam Howlett 2022-10-28 1141 mtree_erase(mt, 210);
120b116208a087 Liam Howlett 2022-10-28 1142 mtree_erase(mt, 220);
120b116208a087 Liam Howlett 2022-10-28 1143 mtree_erase(mt, 230);
120b116208a087 Liam Howlett 2022-10-28 1144 mt_set_non_kernel(0);
120b116208a087 Liam Howlett 2022-10-28 1145 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1146 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1147
e15e06a8392321 Liam R. Howlett 2022-09-06 1148 mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
120b116208a087 Liam Howlett 2022-10-28 1149 for (i = 0; i <= 500; i++) {
120b116208a087 Liam Howlett 2022-10-28 1150 val = i*10;
120b116208a087 Liam Howlett 2022-10-28 1151 val2 = (i+1)*10;
120b116208a087 Liam Howlett 2022-10-28 1152 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1153 }
120b116208a087 Liam Howlett 2022-10-28 1154 check_store_range(mt, 4600, 4959, xa_mk_value(1), 0);
120b116208a087 Liam Howlett 2022-10-28 1155 mt_validate(mt);
120b116208a087 Liam Howlett 2022-10-28 1156 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1157 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1158
e15e06a8392321 Liam R. Howlett 2022-09-06 1159 mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
120b116208a087 Liam Howlett 2022-10-28 1160 for (i = 0; i <= 500; i++) {
120b116208a087 Liam Howlett 2022-10-28 1161 val = i*10;
120b116208a087 Liam Howlett 2022-10-28 1162 val2 = (i+1)*10;
120b116208a087 Liam Howlett 2022-10-28 1163 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1164 }
120b116208a087 Liam Howlett 2022-10-28 1165 check_store_range(mt, 4811, 4811, xa_mk_value(4811), 0);
120b116208a087 Liam Howlett 2022-10-28 1166 check_store_range(mt, 4812, 4812, xa_mk_value(4812), 0);
120b116208a087 Liam Howlett 2022-10-28 1167 check_store_range(mt, 4861, 4861, xa_mk_value(4861), 0);
120b116208a087 Liam Howlett 2022-10-28 1168 check_store_range(mt, 4862, 4862, xa_mk_value(4862), 0);
120b116208a087 Liam Howlett 2022-10-28 1169 check_store_range(mt, 4842, 4849, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1170 mt_validate(mt);
120b116208a087 Liam Howlett 2022-10-28 1171 MT_BUG_ON(mt, !mt_height(mt));
e15e06a8392321 Liam R. Howlett 2022-09-06 1172 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1173
e15e06a8392321 Liam R. Howlett 2022-09-06 1174 mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
120b116208a087 Liam Howlett 2022-10-28 1175 for (i = 0; i <= 1300; i++) {
120b116208a087 Liam Howlett 2022-10-28 1176 val = i*10;
120b116208a087 Liam Howlett 2022-10-28 1177 val2 = (i+1)*10;
120b116208a087 Liam Howlett 2022-10-28 1178 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1179 MT_BUG_ON(mt, mt_height(mt) >= 4);
120b116208a087 Liam Howlett 2022-10-28 1180 }
120b116208a087 Liam Howlett 2022-10-28 1181 /* Cause a 3 child split all the way up the tree. */
120b116208a087 Liam Howlett 2022-10-28 1182 for (i = 5; i < 215; i += 10)
120b116208a087 Liam Howlett 2022-10-28 1183 check_store_range(mt, 11450 + i, 11450 + i + 1, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1184 for (i = 5; i < 65; i += 10)
120b116208a087 Liam Howlett 2022-10-28 1185 check_store_range(mt, 11770 + i, 11770 + i + 1, NULL, 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1186
120b116208a087 Liam Howlett 2022-10-28 1187 MT_BUG_ON(mt, mt_height(mt) >= 4);
120b116208a087 Liam Howlett 2022-10-28 1188 for (i = 5; i < 45; i += 10)
120b116208a087 Liam Howlett 2022-10-28 1189 check_store_range(mt, 11700 + i, 11700 + i + 1, NULL, 0);
120b116208a087 Liam Howlett 2022-10-28 1190 if (!MAPLE_32BIT)
120b116208a087 Liam Howlett 2022-10-28 1191 MT_BUG_ON(mt, mt_height(mt) < 4);
e15e06a8392321 Liam R. Howlett 2022-09-06 1192 mtree_destroy(mt);
e15e06a8392321 Liam R. Howlett 2022-09-06 1193
e15e06a8392321 Liam R. Howlett 2022-09-06 1194
e15e06a8392321 Liam R. Howlett 2022-09-06 1195 mt_init_flags(mt, MT_FLAGS_ALLOC_RANGE);
120b116208a087 Liam Howlett 2022-10-28 1196 for (i = 0; i <= 1200; i++) {
120b116208a087 Liam Howlett 2022-10-28 1197 val = i*10;
120b116208a087 Liam Howlett 2022-10-28 1198 val2 = (i+1)*10;
120b116208a087 Liam Howlett 2022-10-28 1199 check_store_range(mt, val, val2, xa_mk_value(val), 0);
120b116208a087 Liam Howlett 2022-10-28 1200 MT_BUG_ON(mt, mt_height(mt) >= 4);
e15e06a8392321 Liam R. Howlett 2022-09-06 1201 }
120b116208a087 Liam Howlett 2022-10-28 1202 /* Fill parents and leaves before split. */
120b116208a087 Liam Howlett 2022-10-28 1203 for (i = 5; i < 455; i += 10)
120b116208a087 Liam Howlett 2022-10-28 1204 check_store_range(mt, 7800 + i, 7800 + i + 1, NULL, 0);
e15e06a8392321 Liam R. Howlett 2022-09-06 1205
120b116208a087 Liam Howlett 2022-10-28 1206 for (i = 1; i < 16; i++)
120b116208a087 Liam Howlett 2022-10-28 1207 check_store_range(mt, 8185 + i, 8185 + i + 1,
120b116208a087 Liam Howlett 2022-10-28 1208 xa_mk_value(8185+i), 0);
120b116208a087 Liam Howlett 2022-10-28 1209 MT_BUG_ON(mt, mt_height(mt) >= 4);
120b116208a087 Liam Howlett 2022-10-28 1210 /* triple split across multiple levels. */
120b116208a087 Liam Howlett 2022-10-28 1211 check_store_range(mt, 8184, 8184, xa_mk_value(8184), 0);
120b116208a087 Liam Howlett 2022-10-28 1212 if (!MAPLE_32BIT)
120b116208a087 Liam Howlett 2022-10-28 1213 MT_BUG_ON(mt, mt_height(mt) != 4);
e15e06a8392321 Liam R. Howlett 2022-09-06 1214 }
e15e06a8392321 Liam R. Howlett 2022-09-06 1215

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-04-26 04:30:15

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 01/34] maple_tree: Fix static analyser cppcheck issue


On 2023/4/25 22:09, Liam R. Howlett wrote:
> Static analyser of the maple tree code noticed that the split variable
> is being used to dereference into an array prior to checking the
> variable itself. Fix this issue by changing the order of the statement
> to check the variable first.
>
> Reported-by: David Binderman <[email protected]>
> Signed-off-by: Liam R. Howlett <[email protected]>

Reviewed-by: Peng Zhang <[email protected]>

> ---
> lib/maple_tree.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 110a36479dced..9cf4fca42310c 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -1943,8 +1943,9 @@ static inline int mab_calc_split(struct ma_state *mas,
> * causes one node to be deficient.
> * NOTE: mt_min_slots is 1 based, b_end and split are zero.
> */
> - while (((bn->pivot[split] - min) < slot_count - 1) &&
> - (split < slot_count - 1) && (b_end - split > slot_min))
> + while ((split < slot_count - 1) &&
> + ((bn->pivot[split] - min) < slot_count - 1) &&
> + (b_end - split > slot_min))
> split++;
> }
>

2023-04-26 04:32:29

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 02/34] maple_tree: Clean up mas_parent_enum()


On 2023/4/25 22:09, Liam R. Howlett wrote:
> From: "Liam R. Howlett" <[email protected]>
>
> mas_parent_enum() is a simple wrapper for mte_parent_enum() which is
> only called from that wrapper. Remove the wrapper and inline
> mte_parent_enum() into mas_parent_enum().
>
> At the same time, clean up the bit masking of the root pointer since it
> cannot be set by the time the bit masking occurs. Change the check on
> the root bit to a WARN_ON(), and fix the verification code to not
> trigger the WARN_ON() before checking if the node is root.
>
> Reported-by: Wei Yang <[email protected]>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/maple_tree.c | 28 +++++++++++-----------------
> 1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 9cf4fca42310c..ac0245dd88dad 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -428,25 +428,23 @@ static inline unsigned long mte_parent_slot_mask(unsigned long parent)
> * mas_parent_enum() - Return the maple_type of the parent from the stored
> * parent type.
> * @mas: The maple state
> - * @node: The maple_enode to extract the parent's enum
> + * @enode: The maple_enode to extract the parent's enum
> * Return: The node->parent maple_type
> */
> static inline
> -enum maple_type mte_parent_enum(struct maple_enode *p_enode,
> - struct maple_tree *mt)
> +enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)

Do you think it's better to rename this function to mas_parent_type()?
The meaning of enum is not obvious and there is already a similar
function mte_node_type().

> {
> unsigned long p_type;
>
> - p_type = (unsigned long)p_enode;
> - if (p_type & MAPLE_PARENT_ROOT)
> - return 0; /* Validated in the caller. */
> + p_type = (unsigned long)mte_to_node(enode)->parent;
> + if (WARN_ON(p_type & MAPLE_PARENT_ROOT))
> + return 0;
>
> p_type &= MAPLE_NODE_MASK;
> - p_type = p_type & ~(MAPLE_PARENT_ROOT | mte_parent_slot_mask(p_type));
> -
> + p_type &= ~mte_parent_slot_mask(p_type);
> switch (p_type) {
> case MAPLE_PARENT_RANGE64: /* or MAPLE_PARENT_ARANGE64 */
> - if (mt_is_alloc(mt))
> + if (mt_is_alloc(mas->tree))
> return maple_arange_64;
> return maple_range_64;
> }
> @@ -454,12 +452,6 @@ enum maple_type mte_parent_enum(struct maple_enode *p_enode,
> return 0;
> }
>
> -static inline
> -enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
> -{
> - return mte_parent_enum(ma_enode_ptr(mte_to_node(enode)->parent), mas->tree);
> -}
> -
> /*
> * mte_set_parent() - Set the parent node and encode the slot
> * @enode: The encoded maple node.
> @@ -7008,14 +7000,16 @@ static void mas_validate_parent_slot(struct ma_state *mas)
> {
> struct maple_node *parent;
> struct maple_enode *node;
> - enum maple_type p_type = mas_parent_enum(mas, mas->node);
> - unsigned char p_slot = mte_parent_slot(mas->node);
> + enum maple_type p_type;
> + unsigned char p_slot;
> void __rcu **slots;
> int i;
>
> if (mte_is_root(mas->node))
> return;
>
> + p_slot = mte_parent_slot(mas->node);
> + p_type = mas_parent_enum(mas, mas->node);
> parent = mte_parent(mas->node);
> slots = ma_slots(parent, p_type);
> MT_BUG_ON(mas->tree, mas_mn(mas) == parent);

2023-04-26 05:48:43

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 03/34] maple_tree: Avoid unnecessary ascending


On 2023/4/25 22:09, Liam R. Howlett wrote:
> The maple tree node limits are implied by the parent. When walking up
> the tree, the limit may not be known until a slot that does not have
> implied limits are encountered. However, if the node is the left-most
> or right-most node, the walking up to find that limit can be skipped.
>
> This commit also fixes the debug/testing code that was not setting the
> limit on walking down the tree as that optimization is not compatible
> with this change.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
> ---
> lib/maple_tree.c | 6 ++++++
> tools/testing/radix-tree/maple.c | 4 ++++
> 2 files changed, 10 insertions(+)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index ac0245dd88dad..60bae5be008a6 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -1132,6 +1132,12 @@ static int mas_ascend(struct ma_state *mas)
> return 0;
> }
>
> + if (!mas->min)
> + set_min = true;
> +
> + if (mas->max == ULONG_MAX)
> + set_max = true;
> +
> min = 0;
> max = ULONG_MAX;
> do {
> diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
> index 9286d3baa12d6..75df543e019c9 100644
> --- a/tools/testing/radix-tree/maple.c
> +++ b/tools/testing/radix-tree/maple.c
> @@ -35259,6 +35259,7 @@ static void mas_dfs_preorder(struct ma_state *mas)
>
> struct maple_enode *prev;
> unsigned char end, slot = 0;
> + unsigned long *pivots;
>
> if (mas->node == MAS_START) {
> mas_start(mas);
> @@ -35291,6 +35292,9 @@ static void mas_dfs_preorder(struct ma_state *mas)
> mas_ascend(mas);
> goto walk_up;
> }
> + pivots = ma_pivots(mte_to_node(prev), mte_node_type(prev));
> + mas->max = mas_safe_pivot(mas, pivots, slot, mte_node_type(prev));
> + mas->min = mas_safe_min(mas, pivots, slot);
>
> return;
> done:

2023-04-26 09:58:53

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 15/34] maple_tree: Return error on mte_pivots() out of range


On 2023/4/25 22:09, Liam R. Howlett wrote:
> Rename mte_pivots() to mas_pivots() and pass through the ma_state to set
> the error code to -EIO when the offset is out of range for the node
> type. Change the WARN_ON() to MAS_WARN_ON() to log the maple state.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
> ---
> lib/maple_tree.c | 25 ++++++++++++++-----------
> 1 file changed, 14 insertions(+), 11 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 41873d935cfa3..89e30462f8b62 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -663,22 +663,22 @@ static inline unsigned long *ma_gaps(struct maple_node *node,
> }
>
> /*
> - * mte_pivot() - Get the pivot at @piv of the maple encoded node.
> - * @mn: The maple encoded node.
> + * mas_pivot() - Get the pivot at @piv of the maple encoded node.
> + * @mas: The maple state.
> * @piv: The pivot.
> *
> * Return: the pivot at @piv of @mn.
> */
> -static inline unsigned long mte_pivot(const struct maple_enode *mn,
> - unsigned char piv)
> +static inline unsigned long mas_pivot(struct ma_state *mas, unsigned char piv)
> {
> - struct maple_node *node = mte_to_node(mn);
> - enum maple_type type = mte_node_type(mn);
> + struct maple_node *node = mas_mn(mas);
> + enum maple_type type = mte_node_type(mas->node);
>
> - if (piv >= mt_pivots[type]) {
> - WARN_ON(1);
> + if (MAS_WARN_ON(mas, piv >= mt_pivots[type])) {
> + mas_set_err(mas, -EIO);
> return 0;
> }
> +
> switch (type) {
> case maple_arange_64:
> return node->ma64.pivot[piv];
> @@ -5400,8 +5400,8 @@ static inline int mas_alloc(struct ma_state *mas, void *entry,
> return xa_err(mas->node);
>
> if (!mas->index)
> - return mte_pivot(mas->node, 0);
> - return mte_pivot(mas->node, 1);
> + return mas_pivot(mas, 0);
> + return mas_pivot(mas, 1);
> }
>
> /* Must be walking a tree. */
> @@ -5418,7 +5418,10 @@ static inline int mas_alloc(struct ma_state *mas, void *entry,
> */
> min = mas->min;
> if (mas->offset)
> - min = mte_pivot(mas->node, mas->offset - 1) + 1;
> + min = mas_pivot(mas, mas->offset - 1) + 1;
> +
> + if (mas_is_err(mas))
> + return xa_err(mas->node);
>
> if (mas->index < min)
> mas->index = min;

2023-04-26 10:00:55

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 20/34] maple_tree: Remove unnecessary check from mas_destroy()


On 2023/4/25 22:09, Liam R. Howlett wrote:
> mas_destroy currently checks if mas->node is MAS_START prior to calling
> mas_start(), but this is unnecessary as mas_start() will do nothing if
> the node is anything but MAS_START.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
> ---
> lib/maple_tree.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 89e30462f8b62..35c6e12ca9482 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5817,9 +5817,7 @@ void mas_destroy(struct ma_state *mas)
> if (mas->mas_flags & MA_STATE_REBALANCE) {
> unsigned char end;
>
> - if (mas_is_start(mas))
> - mas_start(mas);
> -
> + mas_start(mas);
> mtree_range_walk(mas);
> end = mas_data_end(mas) + 1;
> if (end < mt_min_slot_count(mas->node) - 1)

2023-04-26 13:35:34

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface

Hi Liam,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.3 next-20230425]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230425140955.3834476-28-Liam.Howlett%40oracle.com
patch subject: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20230426/[email protected]/config)
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/0e736b8a8054e7f0b216320d2458a00b54fcd2b0
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
git checkout 0e736b8a8054e7f0b216320d2458a00b54fcd2b0
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 olddefconfig
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All error/warnings (new ones prefixed by >>):

lib/maple_tree.c:4710:7: warning: no previous prototype for 'mas_next_slot' [-Wmissing-prototypes]
4710 | void *mas_next_slot(struct ma_state *mas, unsigned long max)
| ^~~~~~~~~~~~~
lib/maple_tree.c: In function 'mas_next_entry':
>> lib/maple_tree.c:4780:17: error: implicit declaration of function 'mas_next_slot_limit'; did you mean 'mas_next_slot'? [-Werror=implicit-function-declaration]
4780 | entry = mas_next_slot_limit(mas, limit);
| ^~~~~~~~~~~~~~~~~~~
| mas_next_slot
>> lib/maple_tree.c:4780:15: warning: assignment to 'void *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
4780 | entry = mas_next_slot_limit(mas, limit);
| ^
>> lib/maple_tree.c:4787:16: warning: returning 'int' from a function with return type 'void *' makes pointer from integer without a cast [-Wint-conversion]
4787 | return mas_next_slot_limit(mas, limit);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +4780 lib/maple_tree.c

4760
4761 /*
4762 * mas_next_entry() - Internal function to get the next entry.
4763 * @mas: The maple state
4764 * @limit: The maximum range start.
4765 *
4766 * Set the @mas->node to the next entry and the range_start to
4767 * the beginning value for the entry. Does not check beyond @limit.
4768 * Sets @mas->index and @mas->last to the limit if it is hit.
4769 * Restarts on dead nodes.
4770 *
4771 * Return: the next entry or %NULL.
4772 */
4773 static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
4774 {
4775 void *entry = NULL;
4776
4777 if (mas->last >= limit)
4778 return NULL;
4779
> 4780 entry = mas_next_slot_limit(mas, limit);
4781 if (entry)
4782 return entry;
4783
4784 if (mas_is_none(mas))
4785 return NULL;
4786
> 4787 return mas_next_slot_limit(mas, limit);
4788 }
4789

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-04-26 21:10:47

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 02/34] maple_tree: Clean up mas_parent_enum()

* Peng Zhang <[email protected]> [230426 00:15]:
>
> On 2023/4/25 22:09, Liam R. Howlett wrote:
> > From: "Liam R. Howlett" <[email protected]>
> >
> > mas_parent_enum() is a simple wrapper for mte_parent_enum() which is
> > only called from that wrapper. Remove the wrapper and inline
> > mte_parent_enum() into mas_parent_enum().
> >
> > At the same time, clean up the bit masking of the root pointer since it
> > cannot be set by the time the bit masking occurs. Change the check on
> > the root bit to a WARN_ON(), and fix the verification code to not
> > trigger the WARN_ON() before checking if the node is root.
> >
> > Reported-by: Wei Yang <[email protected]>
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > lib/maple_tree.c | 28 +++++++++++-----------------
> > 1 file changed, 11 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 9cf4fca42310c..ac0245dd88dad 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -428,25 +428,23 @@ static inline unsigned long mte_parent_slot_mask(unsigned long parent)
> > * mas_parent_enum() - Return the maple_type of the parent from the stored
> > * parent type.
> > * @mas: The maple state
> > - * @node: The maple_enode to extract the parent's enum
> > + * @enode: The maple_enode to extract the parent's enum
> > * Return: The node->parent maple_type
> > */
> > static inline
> > -enum maple_type mte_parent_enum(struct maple_enode *p_enode,
> > - struct maple_tree *mt)
> > +enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
>
> Do you think it's better to rename this function to mas_parent_type()?
> The meaning of enum is not obvious and there is already a similar
> function mte_node_type().

Yes, thanks. I'll make that change in v2.

>
> > {
> > unsigned long p_type;
> > - p_type = (unsigned long)p_enode;
> > - if (p_type & MAPLE_PARENT_ROOT)
> > - return 0; /* Validated in the caller. */
> > + p_type = (unsigned long)mte_to_node(enode)->parent;
> > + if (WARN_ON(p_type & MAPLE_PARENT_ROOT))
> > + return 0;
> > p_type &= MAPLE_NODE_MASK;
> > - p_type = p_type & ~(MAPLE_PARENT_ROOT | mte_parent_slot_mask(p_type));
> > -
> > + p_type &= ~mte_parent_slot_mask(p_type);
> > switch (p_type) {
> > case MAPLE_PARENT_RANGE64: /* or MAPLE_PARENT_ARANGE64 */
> > - if (mt_is_alloc(mt))
> > + if (mt_is_alloc(mas->tree))
> > return maple_arange_64;
> > return maple_range_64;
> > }
> > @@ -454,12 +452,6 @@ enum maple_type mte_parent_enum(struct maple_enode *p_enode,
> > return 0;
> > }
> > -static inline
> > -enum maple_type mas_parent_enum(struct ma_state *mas, struct maple_enode *enode)
> > -{
> > - return mte_parent_enum(ma_enode_ptr(mte_to_node(enode)->parent), mas->tree);
> > -}
> > -
> > /*
> > * mte_set_parent() - Set the parent node and encode the slot
> > * @enode: The encoded maple node.
> > @@ -7008,14 +7000,16 @@ static void mas_validate_parent_slot(struct ma_state *mas)
> > {
> > struct maple_node *parent;
> > struct maple_enode *node;
> > - enum maple_type p_type = mas_parent_enum(mas, mas->node);
> > - unsigned char p_slot = mte_parent_slot(mas->node);
> > + enum maple_type p_type;
> > + unsigned char p_slot;
> > void __rcu **slots;
> > int i;
> > if (mte_is_root(mas->node))
> > return;
> > + p_slot = mte_parent_slot(mas->node);
> > + p_type = mas_parent_enum(mas, mas->node);
> > parent = mte_parent(mas->node);
> > slots = ma_slots(parent, p_type);
> > MT_BUG_ON(mas->tree, mas_mn(mas) == parent);

2023-04-27 01:17:05

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: [PATCH 18/34] mm: Update vma_iter_store() to use MAS_WARN_ON()

Cc-ing Petr

On (23/04/25 10:09), Liam R. Howlett wrote:
> + if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
> + vmi->mas.index > vma->vm_start)) {
> + printk("%lx > %lx\n", vmi->mas.index, vma->vm_start);
> + printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
> + printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
> }

[..]

> + if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
> + vmi->mas.last < vma->vm_start)) {
> + printk("%lx < %lx\n", vmi->mas.last, vma->vm_start);
> + printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
> + printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
> }

Any reason for "store of vma" and "into slot" to be two separate
printk()-s? It's not guaranteed that these will be printed as a
continuous line. A single printk should work fine:

pr_foo("store of vma %lx-%lx into slot %lx-%lx", ...);

The line probably needs to be terminated with \n unless you expect
more pr_cont() printks after this, which doesn't look like the case.
Additionally, do you want to pass a specific loglevel? E.g. pr_debug()?

2023-04-27 01:18:32

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 18/34] mm: Update vma_iter_store() to use MAS_WARN_ON()

* Sergey Senozhatsky <[email protected]> [230426 21:07]:
> Cc-ing Petr
>
> On (23/04/25 10:09), Liam R. Howlett wrote:
> > + if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
> > + vmi->mas.index > vma->vm_start)) {
> > + printk("%lx > %lx\n", vmi->mas.index, vma->vm_start);
> > + printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
> > + printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
> > }
>
> [..]
>
> > + if (MAS_WARN_ON(&vmi->mas, vmi->mas.node != MAS_START &&
> > + vmi->mas.last < vma->vm_start)) {
> > + printk("%lx < %lx\n", vmi->mas.last, vma->vm_start);
> > + printk("store of vma %lx-%lx", vma->vm_start, vma->vm_end);
> > + printk("into slot %lx-%lx", vmi->mas.index, vmi->mas.last);
> > }
>
> Any reason for "store of vma" and "into slot" to be two separate
> printk()-s? It's not guaranteed that these will be printed as a
> continuous line. A single printk should work fine:
>
> pr_foo("store of vma %lx-%lx into slot %lx-%lx", ...);
>

Really, just for readability. I'll split the string literal instead.

> The line probably needs to be terminated with \n unless you expect
> more pr_cont() printks after this, which doesn't look like the case.
> Additionally, do you want to pass a specific loglevel? E.g. pr_debug()?

Yes, I should do both of these, thanks.
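A minimal sketch of the combined call, assuming pr_debug() and the same
format specifiers as the hunk above (illustrative only; the final wording
and log level are for v2 to decide):

	/* Sketch: one newline-terminated debug line instead of three printk()s. */
	pr_debug("%lx > %lx: store of vma %lx-%lx into slot %lx-%lx\n",
		 vmi->mas.index, vma->vm_start,
		 vma->vm_start, vma->vm_end,
		 vmi->mas.index, vmi->mas.last);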

2023-04-27 11:23:59

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 25/34] maple_tree: Clear up index and last setting in single entry tree



On 2023/4/25 22:09, Liam R. Howlett wrote:
> When there is a single entry tree (range of 0-0 pointing to an entry),
> then ensure the limit is either 0-0 or 1-oo, depending on where the user
> walks. Ensure the correct node setting as well; either MAS_ROOT or
> MAS_NONE.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/maple_tree.c | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 20f0a10dc5608..31cbfd7b44728 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5099,24 +5099,25 @@ void *mas_walk(struct ma_state *mas)
> {
> void *entry;
>
> + if (mas_is_none(mas) || mas_is_paused(mas))
> + mas->node = MAS_START;
> retry:
> entry = mas_state_walk(mas);
> - if (mas_is_start(mas))
> + if (mas_is_start(mas)) {
> goto retry;
> -
> - if (mas_is_ptr(mas)) {
> + } else if (mas_is_none(mas)) {
> + mas->index = 0;
> + mas->last = ULONG_MAX;
> + } else if (mas_is_ptr(mas)) {
> if (!mas->index) {
> mas->last = 0;
> - } else {
> - mas->index = 1;
> - mas->last = ULONG_MAX;
> + return mas_root(mas);
Why do we call mas_root() to get the single entry stored in root again?
I think it's not safe. In RCU mode, if someone modifies the tree into a
normal tree (not a single-entry tree), mas_root() will return an address,
so this may cause a race bug. We can return entry directly.
> }
> - return entry;
> - }
>
> - if (mas_is_none(mas)) {
> - mas->index = 0;
> + mas->index = 1;
> mas->last = ULONG_MAX;
> + mas->node = MAS_NONE;
> + return NULL;
> }
>
> return entry;

2023-04-27 17:37:26

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 25/34] maple_tree: Clear up index and last setting in single entry tree

* Peng Zhang <[email protected]> [230427 07:20]:
>
>
> On 2023/4/25 22:09, Liam R. Howlett wrote:
> > When there is a single entry tree (range of 0-0 pointing to an entry),
> > then ensure the limit is either 0-0 or 1-oo, depending on where the user
> > walks. Ensure the correct node setting as well; either MAS_ROOT or
> > MAS_NONE.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > lib/maple_tree.c | 21 +++++++++++----------
> > 1 file changed, 11 insertions(+), 10 deletions(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 20f0a10dc5608..31cbfd7b44728 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -5099,24 +5099,25 @@ void *mas_walk(struct ma_state *mas)
> > {
> > void *entry;
> > + if (mas_is_none(mas) || mas_is_paused(mas))
> > + mas->node = MAS_START;
> > retry:
> > entry = mas_state_walk(mas);
> > - if (mas_is_start(mas))
> > + if (mas_is_start(mas)) {
> > goto retry;
> > -
> > - if (mas_is_ptr(mas)) {
> > + } else if (mas_is_none(mas)) {
> > + mas->index = 0;
> > + mas->last = ULONG_MAX;
> > + } else if (mas_is_ptr(mas)) {
> > if (!mas->index) {
> > mas->last = 0;
> > - } else {
> > - mas->index = 1;
> > - mas->last = ULONG_MAX;
> > + return mas_root(mas);
> Why do we call mas_root() to get the single entry stored in root again?
> I think it's not safe. In RCU mode, if someone modifies the tree into a
> normal tree (not a single-entry tree), mas_root() will return an address,
> so this may cause a race bug. We can return entry directly.

Good catch. I will address this in v2.
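One possible shape of the change Peng suggests, returning the value that
mas_state_walk() already fetched instead of re-reading the root (a sketch
only, not the final v2 fix):

	} else if (mas_is_ptr(mas)) {
		if (!mas->index) {
			mas->last = 0;
			/* Return the entry already read by mas_state_walk()
			 * rather than dereferencing the root again under RCU. */
			return entry;
		}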

> > }
> > - return entry;
> > - }
> > - if (mas_is_none(mas)) {
> > - mas->index = 0;
> > + mas->index = 1;
> > mas->last = ULONG_MAX;
> > + mas->node = MAS_NONE;
> > + return NULL;
> > }
> > return entry;

2023-04-28 02:55:34

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 21/34] maple_tree: mas_start() reset depth on dead node



On 2023/4/25 22:09, Liam R. Howlett wrote:
> When a dead node is detected, the depth has already been set to 1 so
> reset it to 0.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
> ---
> lib/maple_tree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 35c6e12ca9482..1542274dc2b7f 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -1397,9 +1397,9 @@ static inline struct maple_enode *mas_start(struct ma_state *mas)
>
> mas->min = 0;
> mas->max = ULONG_MAX;
> - mas->depth = 0;
>
> retry:
> + mas->depth = 0;
> root = mas_root(mas);
> /* Tree with nodes */
> if (likely(xa_is_node(root))) {

2023-04-28 06:49:55

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface



On 2023/4/25 22:09, Liam R. Howlett wrote:
> Sometimes, during a tree walk, the user needs the next slot regardless
> of whether it is empty or not. Add an interface to get the next slot.
>
> Since there are no consecutive NULLs allowed in the tree, the mas_next()
> function can advance at most two slots. So use the new
> mas_next_slot() interface to align both implementations.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/maple_tree.c | 178 +++++++++++++++++++----------------------------
> 1 file changed, 71 insertions(+), 107 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 31cbfd7b44728..fe6c9da6f2bd5 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4619,15 +4619,16 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> if (mas->max >= max)
> goto no_entry;
>
> + min = mas->max + 1;
> + if (min > max)
> + goto no_entry;
This check is unnecessary because mas->max < max is already guaranteed here.
> +
> level = 0;
> do {
> if (ma_is_root(node))
> goto no_entry;
>
> - min = mas->max + 1;
> - if (min > max)
> - goto no_entry;
> -
> + /* Walk up. */
> if (unlikely(mas_ascend(mas)))
> return 1;
>
> @@ -4645,13 +4646,12 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> slots = ma_slots(node, mt);
> pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
> while (unlikely(level > 1)) {
> - /* Descend, if necessary */
> + level--;
> enode = mas_slot(mas, slots, offset);
> if (unlikely(ma_dead_node(node)))
> return 1;
>
> mas->node = enode;
> - level--;
> node = mas_mn(mas);
> mt = mte_node_type(mas->node);
> slots = ma_slots(node, mt);
> @@ -4680,85 +4680,84 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> return 0;
> }
>
> +static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> +{
> +retry:
> + mas_set(mas, index);
> + mas_state_walk(mas);
> + if (mas_is_start(mas))
> + goto retry;
> +}
> +
> +static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> + struct maple_node *node, const unsigned long index)
> +{
> + if (unlikely(ma_dead_node(node))) {
> + mas_rewalk(mas, index);
> + return true;
> + }
> + return false;
> +}
> +
> /*
> - * mas_next_nentry() - Get the next node entry
> - * @mas: The maple state
> - * @max: The maximum value to check
> - * @*range_start: Pointer to store the start of the range.
> + * mas_next_slot() - Get the entry in the next slot
> *
> - * Sets @mas->offset to the offset of the next node entry, @mas->last to the
> - * pivot of the entry.
> + * @mas: The maple state
> + * @max: The maximum starting range
> *
> - * Return: The next entry, %NULL otherwise
> + * Return: The entry in the next slot which is possibly NULL
> */
> -static inline void *mas_next_nentry(struct ma_state *mas,
> - struct maple_node *node, unsigned long max, enum maple_type type)
> +void *mas_next_slot(struct ma_state *mas, unsigned long max)
> {
> - unsigned char count;
> - unsigned long pivot;
> - unsigned long *pivots;
> void __rcu **slots;
> + unsigned long *pivots;
> + unsigned long pivot;
> + enum maple_type type;
> + struct maple_node *node;
> + unsigned char data_end;
> + unsigned long save_point = mas->last;
> void *entry;
>
> - if (mas->last == mas->max) {
> - mas->index = mas->max;
> - return NULL;
> - }
> -
> - slots = ma_slots(node, type);
> +retry:
> + node = mas_mn(mas);
> + type = mte_node_type(mas->node);
> pivots = ma_pivots(node, type);
> - count = ma_data_end(node, type, pivots, mas->max);
> - if (unlikely(ma_dead_node(node)))
> - return NULL;
> -
> - mas->index = mas_safe_min(mas, pivots, mas->offset);
> - if (unlikely(ma_dead_node(node)))
> - return NULL;
> -
> - if (mas->index > max)
> - return NULL;
> + data_end = ma_data_end(node, type, pivots, mas->max);
> + pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
> + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> + goto retry;
>
> - if (mas->offset > count)
> + if (pivot >= max)
> return NULL;
>
> - while (mas->offset < count) {
> - pivot = pivots[mas->offset];
> - entry = mas_slot(mas, slots, mas->offset);
> - if (ma_dead_node(node))
> - return NULL;
> -
> - mas->last = pivot;
> - if (entry)
> - return entry;
> -
> - if (pivot >= max)
> - return NULL;
> + if (likely(data_end > mas->offset)) {
> + mas->offset++;
> + mas->index = mas->last + 1;
> + } else {
> + if (mas_next_node(mas, node, max)) {
> + mas_rewalk(mas, save_point);
> + goto retry;
> + }
>
> - if (pivot >= mas->max)
> + if (mas_is_none(mas))
> return NULL;
>
> - mas->index = pivot + 1;
> - mas->offset++;
> + mas->offset = 0;
> + mas->index = mas->min;
> + node = mas_mn(mas);
> + type = mte_node_type(mas->node);
> + pivots = ma_pivots(node, type);
> }
>
> - pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
> + slots = ma_slots(node, type);
> + mas->last = mas_logical_pivot(mas, pivots, mas->offset, type);
> entry = mas_slot(mas, slots, mas->offset);
> - if (ma_dead_node(node))
> - return NULL;
> + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> + goto retry;
>
> - mas->last = pivot;
> return entry;
> }
>
> -static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> -{
> -retry:
> - mas_set(mas, index);
> - mas_state_walk(mas);
> - if (mas_is_start(mas))
> - goto retry;
> -}
> -
> /*
> * mas_next_entry() - Internal function to get the next entry.
> * @mas: The maple state
> @@ -4774,47 +4773,18 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
> {
> void *entry = NULL;
> - struct maple_node *node;
> - unsigned long last;
> - enum maple_type mt;
>
> if (mas->last >= limit)
> return NULL;
>
> - last = mas->last;
> -retry:
> - node = mas_mn(mas);
> - mt = mte_node_type(mas->node);
> - mas->offset++;
> - if (unlikely(mas->offset >= mt_slots[mt])) {
> - mas->offset = mt_slots[mt] - 1;
> - goto next_node;
> - }
> -
> - while (!mas_is_none(mas)) {
> - entry = mas_next_nentry(mas, node, limit, mt);
> - if (unlikely(ma_dead_node(node))) {
> - mas_rewalk(mas, last);
> - goto retry;
> - }
> -
> - if (likely(entry))
> - return entry;
> -
> - if (unlikely((mas->last >= limit)))
> - return NULL;
> + entry = mas_next_slot_limit(mas, limit);
> + if (entry)
> + return entry;
>
> -next_node:
> - if (unlikely(mas_next_node(mas, node, limit))) {
> - mas_rewalk(mas, last);
> - goto retry;
> - }
> - mas->offset = 0;
> - node = mas_mn(mas);
> - mt = mte_node_type(mas->node);
> - }
> + if (mas_is_none(mas))
> + return NULL;
>
> - return NULL;
> + return mas_next_slot_limit(mas, limit);
> }
>
> /*
> @@ -4845,10 +4815,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> slots = ma_slots(mn, mt);
> pivots = ma_pivots(mn, mt);
> count = ma_data_end(mn, mt, pivots, mas->max);
> - if (unlikely(ma_dead_node(mn))) {
> - mas_rewalk(mas, index);
> + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> goto retry;
> - }
>
> offset = mas->offset - 1;
> if (offset >= mt_slots[mt])
> @@ -4861,10 +4829,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> pivot = pivots[offset];
> }
>
> - if (unlikely(ma_dead_node(mn))) {
> - mas_rewalk(mas, index);
> + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> goto retry;
> - }
>
> while (offset && !mas_slot(mas, slots, offset)) {
> pivot = pivots[--offset];
> @@ -4881,10 +4847,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
>
> min = mas_safe_min(mas, pivots, offset);
> entry = mas_slot(mas, slots, offset);
> - if (unlikely(ma_dead_node(mn))) {
> - mas_rewalk(mas, index);
> + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> goto retry;
> - }
>
> mas->offset = offset;
> mas->last = pivot;

2023-04-28 07:18:58

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 28/34] maple_tree: Revise limit checks in mas_empty_area{_rev}()



On 2023/4/25 22:09, Liam R. Howlett wrote:
> Since the maple tree is inclusive in range, ensure that a range of 1
> (min = max) works for searching for a gap in either direction, and make
> sure the size is at least 1 but no larger than the width of the
> inclusive range (max - min + 1).
>
> This commit also updates the testing. Unfortunately there isn't a way
> to safely update the tests and code without a test failure.
>
> Suggested-by: Peng Zhang <[email protected]>
> Signed-off-by: Liam R. Howlett <[email protected]>
Reviewed-by: Peng Zhang <[email protected]>
Except for the test code.
> ---
> lib/maple_tree.c | 20 +++++++++++++-------
> lib/test_maple_tree.c | 27 ++++++++++++++++++++-------
> 2 files changed, 33 insertions(+), 14 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index fe6c9da6f2bd5..7370d7c12fe3b 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5248,7 +5248,10 @@ int mas_empty_area(struct ma_state *mas, unsigned long min,
> unsigned long *pivots;
> enum maple_type mt;
>
> - if (min >= max)
> + if (min > max)
> + return -EINVAL;
> +
> + if (size == 0 || max - min < size - 1)
> return -EINVAL;
>
> if (mas_is_start(mas))
> @@ -5303,7 +5306,10 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
> {
> struct maple_enode *last = mas->node;
>
> - if (min >= max)
> + if (min > max)
> + return -EINVAL;
> +
> + if (size == 0 || max - min < size - 1)
> return -EINVAL;
>
> if (mas_is_start(mas)) {
> @@ -5339,7 +5345,7 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min,
> return -EBUSY;
>
> /* Trim the upper limit to the max. */
> - if (max <= mas->last)
> + if (max < mas->last)
> mas->last = max;
>
> mas->index = mas->last - size + 1;
> @@ -6375,7 +6381,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
> {
> int ret = 0;
>
> - MA_STATE(mas, mt, min, max - size);
> + MA_STATE(mas, mt, min, min);
> if (!mt_is_alloc(mt))
> return -EINVAL;
>
> @@ -6395,7 +6401,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
> retry:
> mas.offset = 0;
> mas.index = min;
> - mas.last = max - size;
> + mas.last = max - size + 1;
> ret = mas_alloc(&mas, entry, size, startp);
> if (mas_nomem(&mas, gfp))
> goto retry;
> @@ -6411,14 +6417,14 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
> {
> int ret = 0;
>
> - MA_STATE(mas, mt, min, max - size);
> + MA_STATE(mas, mt, min, max - size + 1);
> if (!mt_is_alloc(mt))
> return -EINVAL;
>
> if (WARN_ON_ONCE(mt_is_reserved(entry)))
> return -EINVAL;
>
> - if (min >= max)
> + if (min > max)
> return -EINVAL;
>
> if (max < size - 1)
> diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
> index 345eef526d8b0..7b2d19ad5934d 100644
> --- a/lib/test_maple_tree.c
> +++ b/lib/test_maple_tree.c
> @@ -105,7 +105,7 @@ static noinline void __init check_mtree_alloc_rrange(struct maple_tree *mt,
> unsigned long result = expected + 1;
> int ret;
>
> - ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end - 1,
> + ret = mtree_alloc_rrange(mt, &result, ptr, size, start, end,
> GFP_KERNEL);
> MT_BUG_ON(mt, ret != eret);
> if (ret)
> @@ -683,7 +683,7 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
> 0, /* Return value success. */
>
> 0x0, /* Min */
> - 0x565234AF1 << 12, /* Max */
> + 0x565234AF0 << 12, /* Max */
> 0x3000, /* Size */
> 0x565234AEE << 12, /* max - 3. */
> 0, /* Return value success. */
> @@ -695,14 +695,14 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
> 0, /* Return value success. */
>
> 0x0, /* Min */
> - 0x7F36D510A << 12, /* Max */
> + 0x7F36D5109 << 12, /* Max */
> 0x4000, /* Size */
> 0x7F36D5106 << 12, /* First rev hole of size 0x4000 */
> 0, /* Return value success. */
>
> /* Ascend test. */
> 0x0,
> - 34148798629 << 12,
> + 34148798628 << 12,
> 19 << 12,
> 34148797418 << 12,
> 0x0,
> @@ -714,6 +714,12 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
> 0x0,
> -EBUSY,
>
> + /* Single space test. */
> + 34148798725 << 12,
> + 34148798725 << 12,
> + 1 << 12,
> + 34148798725 << 12,
> + 0,
> };
>
> int i, range_count = ARRAY_SIZE(range);
> @@ -762,9 +768,9 @@ static noinline void __init check_alloc_rev_range(struct maple_tree *mt)
> mas_unlock(&mas);
> for (i = 0; i < req_range_count; i += 5) {
> #if DEBUG_REV_RANGE
> - pr_debug("\tReverse request between %lu-%lu size %lu, should get %lu\n",
> - req_range[i] >> 12,
> - (req_range[i + 1] >> 12) - 1,
> + pr_debug("\tReverse request %d between %lu-%lu size %lu, should get %lu\n",
> + i, req_range[i] >> 12,
> + (req_range[i + 1] >> 12),
> req_range[i+2] >> 12,
> req_range[i+3] >> 12);
> #endif
> @@ -883,6 +889,13 @@ static noinline void __init check_alloc_range(struct maple_tree *mt)
> 4503599618982063UL << 12, /* Size */
> 34359052178 << 12, /* Expected location */
> -EBUSY, /* Return failure. */
> +
> + /* Test a single entry */
> + 34148798648 << 12, /* Min */
> + 34148798648 << 12, /* Max */
> + 4096, /* Size of 1 */
> + 34148798648 << 12, /* Location is the same as min/max */
> + 0, /* Success */
> };
> int i, range_count = ARRAY_SIZE(range);
> int req_range_count = ARRAY_SIZE(req_range);

2023-04-28 08:44:14

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 29/34] maple_tree: Introduce mas_prev_slot() interface



On 2023/4/25 22:09, Liam R. Howlett wrote:
> Sometimes the user needs to move to the previous slot, regardless of
> whether it is empty or not. Add an interface to go to the previous slot.
>
> Since there can't be two consecutive NULLs in the tree, the mas_prev()
> function can be implemented by calling mas_prev_slot() a maximum of 2
> times. Change the underlying interface to use mas_prev_slot() to align
> the code.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/maple_tree.c | 217 ++++++++++++++++++++---------------------------
> 1 file changed, 90 insertions(+), 127 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 7370d7c12fe3b..297d936321347 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4498,6 +4498,25 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
>
> }
>
> +static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> +{
> +retry:
> + mas_set(mas, index);
> + mas_state_walk(mas);
> + if (mas_is_start(mas))
> + goto retry;
> +}
> +
> +static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> + struct maple_node *node, const unsigned long index)
> +{
> + if (unlikely(ma_dead_node(node))) {
> + mas_rewalk(mas, index);
> + return true;
> + }
> + return false;
> +}
> +
> /*
> * mas_prev_node() - Find the prev non-null entry at the same level in the
> * tree. The prev value will be mas->node[mas->offset] or MAS_NONE.
> @@ -4515,13 +4534,15 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> struct maple_node *node;
> struct maple_enode *enode;
> unsigned long *pivots;
> + unsigned long max;
>
> - if (mas_is_none(mas))
> - return 0;
> + node = mas_mn(mas);
> + max = mas->min - 1;
This may underflow when mas->min is 0.
> + if (max < min)
> + goto no_entry;
>
> level = 0;
> do {
> - node = mas_mn(mas);
> if (ma_is_root(node))
> goto no_entry;
>
> @@ -4530,11 +4551,11 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> return 1;
> offset = mas->offset;
> level++;
> + node = mas_mn(mas);
> } while (!offset);
>
> offset--;
> mt = mte_node_type(mas->node);
> - node = mas_mn(mas);
> slots = ma_slots(node, mt);
> pivots = ma_pivots(node, mt);
> if (unlikely(ma_dead_node(node)))
> @@ -4543,12 +4564,10 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> mas->max = pivots[offset];
> if (offset)
> mas->min = pivots[offset - 1] + 1;
> +
> if (unlikely(ma_dead_node(node)))
> return 1;
>
> - if (mas->max < min)
> - goto no_entry_min;
> -
> while (level > 1) {
> level--;
> enode = mas_slot(mas, slots, offset);
> @@ -4569,9 +4588,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
>
> if (offset < mt_pivots[mt])
> mas->max = pivots[offset];
> -
> - if (mas->max < min)
> - goto no_entry;
> }
>
> mas->node = mas_slot(mas, slots, offset);
> @@ -4584,10 +4600,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
>
> return 0;
>
> -no_entry_min:
> - mas->offset = offset;
> - if (offset)
> - mas->min = pivots[offset - 1] + 1;
> no_entry:
> if (unlikely(ma_dead_node(node)))
> return 1;
> @@ -4596,6 +4608,62 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> return 0;
> }
>
> +/*
> + * mas_prev_slot() - Get the entry in the previous slot
> + *
> + * @mas: The maple state
> + * @max: The minimum starting range
> + *
> + * Return: The entry in the previous slot which is possibly NULL
> + */
> +void *mas_prev_slot(struct ma_state *mas, unsigned long min)
> +{
> + void *entry;
> + void __rcu **slots;
> + unsigned long pivot;
> + enum maple_type type;
> + unsigned long *pivots;
> + struct maple_node *node;
> + unsigned long save_point = mas->index;
> +
> +retry:
> + node = mas_mn(mas);
> + type = mte_node_type(mas->node);
> + pivots = ma_pivots(node, type);
> + pivot = mas_safe_min(mas, pivots, mas->offset);
> + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> + goto retry;
> +
> + if (pivot <= min)
> + return NULL;
> +
> + if (likely(mas->offset)) {
> + mas->offset--;
> + mas->last = mas->index - 1;
> + } else {
> + if (mas_prev_node(mas, min)) {
> + mas_rewalk(mas, save_point);
> + goto retry;
> + }
> +
> + if (mas_is_none(mas))
> + return NULL;
> +
> + mas->last = mas->max;
> + node = mas_mn(mas);
> + type = mte_node_type(mas->node);
> + pivots = ma_pivots(node, type);
> + }
> +
> + mas->index = mas_safe_min(mas, pivots, mas->offset);
> + slots = ma_slots(node, type);
> + entry = mas_slot(mas, slots, mas->offset);
> + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> + goto retry;
> +
> + return entry;
> +}
> +
> /*
> * mas_next_node() - Get the next node at the same level in the tree.
> * @mas: The maple state
> @@ -4680,25 +4748,6 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> return 0;
> }
>
> -static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> -{
> -retry:
> - mas_set(mas, index);
> - mas_state_walk(mas);
> - if (mas_is_start(mas))
> - goto retry;
> -}
> -
> -static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> - struct maple_node *node, const unsigned long index)
> -{
> - if (unlikely(ma_dead_node(node))) {
> - mas_rewalk(mas, index);
> - return true;
> - }
> - return false;
> -}
> -
> /*
> * mas_next_slot() - Get the entry in the next slot
> *
> @@ -4777,117 +4826,31 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
> if (mas->last >= limit)
> return NULL;
>
> - entry = mas_next_slot_limit(mas, limit);
> + entry = mas_next_slot(mas, limit);
> if (entry)
> return entry;
>
> if (mas_is_none(mas))
> return NULL;
>
> - return mas_next_slot_limit(mas, limit);
> -}
> -
> -/*
> - * mas_prev_nentry() - Get the previous node entry.
> - * @mas: The maple state.
> - * @limit: The lower limit to check for a value.
> - *
> - * Return: the entry, %NULL otherwise.
> - */
> -static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> - unsigned long index)
> -{
> - unsigned long pivot, min;
> - unsigned char offset, count;
> - struct maple_node *mn;
> - enum maple_type mt;
> - unsigned long *pivots;
> - void __rcu **slots;
> - void *entry;
> -
> -retry:
> - if (!mas->offset)
> - return NULL;
> -
> - mn = mas_mn(mas);
> - mt = mte_node_type(mas->node);
> - offset = mas->offset - 1;
> - slots = ma_slots(mn, mt);
> - pivots = ma_pivots(mn, mt);
> - count = ma_data_end(mn, mt, pivots, mas->max);
> - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> - goto retry;
> -
> - offset = mas->offset - 1;
> - if (offset >= mt_slots[mt])
> - offset = mt_slots[mt] - 1;
> -
> - if (offset >= count) {
> - pivot = mas->max;
> - offset = count;
> - } else {
> - pivot = pivots[offset];
> - }
> -
> - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> - goto retry;
> -
> - while (offset && !mas_slot(mas, slots, offset)) {
> - pivot = pivots[--offset];
> - if (pivot >= limit)
> - break;
> - }
> -
> - /*
> - * If the slot was null but we've shifted outside the limits, then set
> - * the range to the last NULL.
> - */
> - if (unlikely((pivot < limit) && (offset < mas->offset)))
> - pivot = pivots[++offset];
> -
> - min = mas_safe_min(mas, pivots, offset);
> - entry = mas_slot(mas, slots, offset);
> - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> - goto retry;
> -
> - mas->offset = offset;
> - mas->last = pivot;
> - mas->index = min;
> - return entry;
> + return mas_next_slot(mas, limit);
> }
>
> static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
> {
> void *entry;
> - struct maple_enode *prev_enode;
> - unsigned char prev_offset;
>
> if (mas->index < min)
> return NULL;
>
> -retry:
> - prev_enode = mas->node;
> - prev_offset = mas->offset;
> - while (likely(!mas_is_none(mas))) {
> - entry = mas_prev_nentry(mas, min, mas->index);
> -
> - if (likely(entry))
> - return entry;
> -
> - if (unlikely(mas->index <= min))
> - return NULL;
> -
> - if (unlikely(mas_prev_node(mas, min))) {
> - mas_rewalk(mas, mas->index);
> - goto retry;
> - }
> + entry = mas_prev_slot(mas, min);
> + if (entry)
> + return entry;
>
> - mas->offset++;
> - }
> + if (mas_is_none(mas))
> + return NULL;
>
> - mas->node = prev_enode;
> - mas->offset = prev_offset;
> - return NULL;
> + return mas_prev_slot(mas, min);
> }
>
> /*

2023-04-28 08:47:06

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface

Hi Liam,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.3 next-20230427]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230425140955.3834476-28-Liam.Howlett%40oracle.com
patch subject: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface
config: i386-randconfig-a005-20230424 (https://download.01.org/0day-ci/archive/20230428/[email protected]/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/0e736b8a8054e7f0b216320d2458a00b54fcd2b0
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
git checkout 0e736b8a8054e7f0b216320d2458a00b54fcd2b0
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All error/warnings (new ones prefixed by >>):

lib/maple_tree.c:4710:7: warning: no previous prototype for function 'mas_next_slot' [-Wmissing-prototypes]
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
lib/maple_tree.c:4710:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
static
>> lib/maple_tree.c:4780:10: error: implicit declaration of function 'mas_next_slot_limit' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
entry = mas_next_slot_limit(mas, limit);
^
lib/maple_tree.c:4780:10: note: did you mean 'mas_next_slot'?
lib/maple_tree.c:4710:7: note: 'mas_next_slot' declared here
void *mas_next_slot(struct ma_state *mas, unsigned long max)
^
>> lib/maple_tree.c:4780:8: warning: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
entry = mas_next_slot_limit(mas, limit);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> lib/maple_tree.c:4787:9: warning: incompatible integer to pointer conversion returning 'int' from a function with result type 'void *' [-Wint-conversion]
return mas_next_slot_limit(mas, limit);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3 warnings and 1 error generated.


vim +/mas_next_slot_limit +4780 lib/maple_tree.c

4701
4702 /*
4703 * mas_next_slot() - Get the entry in the next slot
4704 *
4705 * @mas: The maple state
4706 * @max: The maximum starting range
4707 *
4708 * Return: The entry in the next slot which is possibly NULL
4709 */
> 4710 void *mas_next_slot(struct ma_state *mas, unsigned long max)
4711 {
4712 void __rcu **slots;
4713 unsigned long *pivots;
4714 unsigned long pivot;
4715 enum maple_type type;
4716 struct maple_node *node;
4717 unsigned char data_end;
4718 unsigned long save_point = mas->last;
4719 void *entry;
4720
4721 retry:
4722 node = mas_mn(mas);
4723 type = mte_node_type(mas->node);
4724 pivots = ma_pivots(node, type);
4725 data_end = ma_data_end(node, type, pivots, mas->max);
4726 pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
4727 if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
4728 goto retry;
4729
4730 if (pivot >= max)
4731 return NULL;
4732
4733 if (likely(data_end > mas->offset)) {
4734 mas->offset++;
4735 mas->index = mas->last + 1;
4736 } else {
4737 if (mas_next_node(mas, node, max)) {
4738 mas_rewalk(mas, save_point);
4739 goto retry;
4740 }
4741
4742 if (mas_is_none(mas))
4743 return NULL;
4744
4745 mas->offset = 0;
4746 mas->index = mas->min;
4747 node = mas_mn(mas);
4748 type = mte_node_type(mas->node);
4749 pivots = ma_pivots(node, type);
4750 }
4751
4752 slots = ma_slots(node, type);
4753 mas->last = mas_logical_pivot(mas, pivots, mas->offset, type);
4754 entry = mas_slot(mas, slots, mas->offset);
4755 if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
4756 goto retry;
4757
4758 return entry;
4759 }
4760
4761 /*
4762 * mas_next_entry() - Internal function to get the next entry.
4763 * @mas: The maple state
4764 * @limit: The maximum range start.
4765 *
4766 * Set the @mas->node to the next entry and the range_start to
4767 * the beginning value for the entry. Does not check beyond @limit.
4768 * Sets @mas->index and @mas->last to the limit if it is hit.
4769 * Restarts on dead nodes.
4770 *
4771 * Return: the next entry or %NULL.
4772 */
4773 static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
4774 {
4775 void *entry = NULL;
4776
4777 if (mas->last >= limit)
4778 return NULL;
4779
> 4780 entry = mas_next_slot_limit(mas, limit);
4781 if (entry)
4782 return entry;
4783
4784 if (mas_is_none(mas))
4785 return NULL;
4786
> 4787 return mas_next_slot_limit(mas, limit);
4788 }
4789

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

2023-04-28 10:30:45

by Petr Tesařík

[permalink] [raw]
Subject: Re: [PATCH 10/34] maple_tree: Use MAS_BUG_ON() when setting a leaf node as a parent

On Tue, 25 Apr 2023 10:09:31 -0400
"Liam R. Howlett" <[email protected]> wrote:

> Use MAS_BUG_ON() to dump the maple state and tree in the unlikely even
^^^^
nitpick: event

Petr T

2023-04-28 10:37:02

by Petr Tesařík

[permalink] [raw]
Subject: Re: [PATCH 11/34] maple_tree: Use MAS_BUG_ON() in mas_set_height()

On Tue, 25 Apr 2023 10:09:32 -0400
"Liam R. Howlett" <[email protected]> wrote:

> Use MAS_BUG_ON() instead of MT_BUG_ON() to get the maple state
> information. In the unlikely even of a tree height of > 31, try to increase
^^^^
Again, *event*. Consider buying a new keyboard if your 'T' is broken. ;-)

Petr T

2023-05-03 19:34:25

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 10/34] maple_tree: Use MAS_BUG_ON() when setting a leaf node as a parent

* Petr Tesařík <[email protected]> [230428 06:08]:
> On Tue, 25 Apr 2023 10:09:31 -0400
> "Liam R. Howlett" <[email protected]> wrote:
>
> > Use MAS_BUG_ON() to dump the maple state and tree in the unlikely even
> ^^^^
> nitpick: event

Thanks, I will fix this in v2.

>
> Petr T
>

2023-05-03 19:34:38

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 29/34] maple_tree: Introduce mas_prev_slot() interface

* Peng Zhang <[email protected]> [230428 04:27]:
>
>
> On 2023/4/25 22:09, Liam R. Howlett wrote:
> > Sometimes the user needs to move to the previous slot, regardless of
> > whether it is empty or not. Add an interface to go to the previous slot.
> >
> > Since there can't be two consecutive NULLs in the tree, the mas_prev()
> > function can be implemented by calling mas_prev_slot() a maximum of 2
> > times. Change the underlying interface to use mas_prev_slot() to align
> > the code.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > lib/maple_tree.c | 217 ++++++++++++++++++++---------------------------
> > 1 file changed, 90 insertions(+), 127 deletions(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 7370d7c12fe3b..297d936321347 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -4498,6 +4498,25 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
> > }
> > +static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> > +{
> > +retry:
> > + mas_set(mas, index);
> > + mas_state_walk(mas);
> > + if (mas_is_start(mas))
> > + goto retry;
> > +}
> > +
> > +static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> > + struct maple_node *node, const unsigned long index)
> > +{
> > + if (unlikely(ma_dead_node(node))) {
> > + mas_rewalk(mas, index);
> > + return true;
> > + }
> > + return false;
> > +}
> > +
> > /*
> > * mas_prev_node() - Find the prev non-null entry at the same level in the
> > * tree. The prev value will be mas->node[mas->offset] or MAS_NONE.
> > @@ -4515,13 +4534,15 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > struct maple_node *node;
> > struct maple_enode *enode;
> > unsigned long *pivots;
> > + unsigned long max;
> > - if (mas_is_none(mas))
> > - return 0;
> > + node = mas_mn(mas);
> > + max = mas->min - 1;
> This may underflow when mas->min is 0.

Thanks, I will address this in v2.
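
One way the wrap could be avoided, assuming the rest of the function stays
as in this patch (a sketch only, not the final v2 fix):

	node = mas_mn(mas);
	if (!mas->min)
		goto no_entry;	/* mas->min - 1 would wrap to ULONG_MAX */

	max = mas->min - 1;
	if (max < min)
		goto no_entry;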

> > + if (max < min)
> > + goto no_entry;
> > level = 0;
> > do {
> > - node = mas_mn(mas);
> > if (ma_is_root(node))
> > goto no_entry;
> > @@ -4530,11 +4551,11 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > return 1;
> > offset = mas->offset;
> > level++;
> > + node = mas_mn(mas);
> > } while (!offset);
> > offset--;
> > mt = mte_node_type(mas->node);
> > - node = mas_mn(mas);
> > slots = ma_slots(node, mt);
> > pivots = ma_pivots(node, mt);
> > if (unlikely(ma_dead_node(node)))
> > @@ -4543,12 +4564,10 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > mas->max = pivots[offset];
> > if (offset)
> > mas->min = pivots[offset - 1] + 1;
> > +
> > if (unlikely(ma_dead_node(node)))
> > return 1;
> > - if (mas->max < min)
> > - goto no_entry_min;
> > -
> > while (level > 1) {
> > level--;
> > enode = mas_slot(mas, slots, offset);
> > @@ -4569,9 +4588,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > if (offset < mt_pivots[mt])
> > mas->max = pivots[offset];
> > -
> > - if (mas->max < min)
> > - goto no_entry;
> > }
> > mas->node = mas_slot(mas, slots, offset);
> > @@ -4584,10 +4600,6 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > return 0;
> > -no_entry_min:
> > - mas->offset = offset;
> > - if (offset)
> > - mas->min = pivots[offset - 1] + 1;
> > no_entry:
> > if (unlikely(ma_dead_node(node)))
> > return 1;
> > @@ -4596,6 +4608,62 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
> > return 0;
> > }
> > +/*
> > + * mas_prev_slot() - Get the entry in the previous slot
> > + *
> > + * @mas: The maple state
> > + * @max: The minimum starting range
> > + *
> > + * Return: The entry in the previous slot which is possibly NULL
> > + */
> > +void *mas_prev_slot(struct ma_state *mas, unsigned long min)
> > +{
> > + void *entry;
> > + void __rcu **slots;
> > + unsigned long pivot;
> > + enum maple_type type;
> > + unsigned long *pivots;
> > + struct maple_node *node;
> > + unsigned long save_point = mas->index;
> > +
> > +retry:
> > + node = mas_mn(mas);
> > + type = mte_node_type(mas->node);
> > + pivots = ma_pivots(node, type);
> > + pivot = mas_safe_min(mas, pivots, mas->offset);
> > + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> > + goto retry;
> > +
> > + if (pivot <= min)
> > + return NULL;
> > +
> > + if (likely(mas->offset)) {
> > + mas->offset--;
> > + mas->last = mas->index - 1;
> > + } else {
> > + if (mas_prev_node(mas, min)) {
> > + mas_rewalk(mas, save_point);
> > + goto retry;
> > + }
> > +
> > + if (mas_is_none(mas))
> > + return NULL;
> > +
> > + mas->last = mas->max;
> > + node = mas_mn(mas);
> > + type = mte_node_type(mas->node);
> > + pivots = ma_pivots(node, type);
> > + }
> > +
> > + mas->index = mas_safe_min(mas, pivots, mas->offset);
> > + slots = ma_slots(node, type);
> > + entry = mas_slot(mas, slots, mas->offset);
> > + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> > + goto retry;
> > +
> > + return entry;
> > +}
> > +
> > /*
> > * mas_next_node() - Get the next node at the same level in the tree.
> > * @mas: The maple state
> > @@ -4680,25 +4748,6 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> > return 0;
> > }
> > -static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> > -{
> > -retry:
> > - mas_set(mas, index);
> > - mas_state_walk(mas);
> > - if (mas_is_start(mas))
> > - goto retry;
> > -}
> > -
> > -static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> > - struct maple_node *node, const unsigned long index)
> > -{
> > - if (unlikely(ma_dead_node(node))) {
> > - mas_rewalk(mas, index);
> > - return true;
> > - }
> > - return false;
> > -}
> > -
> > /*
> > * mas_next_slot() - Get the entry in the next slot
> > *
> > @@ -4777,117 +4826,31 @@ static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
> > if (mas->last >= limit)
> > return NULL;
> > - entry = mas_next_slot_limit(mas, limit);
> > + entry = mas_next_slot(mas, limit);
> > if (entry)
> > return entry;
> > if (mas_is_none(mas))
> > return NULL;
> > - return mas_next_slot_limit(mas, limit);
> > -}
> > -
> > -/*
> > - * mas_prev_nentry() - Get the previous node entry.
> > - * @mas: The maple state.
> > - * @limit: The lower limit to check for a value.
> > - *
> > - * Return: the entry, %NULL otherwise.
> > - */
> > -static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> > - unsigned long index)
> > -{
> > - unsigned long pivot, min;
> > - unsigned char offset, count;
> > - struct maple_node *mn;
> > - enum maple_type mt;
> > - unsigned long *pivots;
> > - void __rcu **slots;
> > - void *entry;
> > -
> > -retry:
> > - if (!mas->offset)
> > - return NULL;
> > -
> > - mn = mas_mn(mas);
> > - mt = mte_node_type(mas->node);
> > - offset = mas->offset - 1;
> > - slots = ma_slots(mn, mt);
> > - pivots = ma_pivots(mn, mt);
> > - count = ma_data_end(mn, mt, pivots, mas->max);
> > - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > - goto retry;
> > -
> > - offset = mas->offset - 1;
> > - if (offset >= mt_slots[mt])
> > - offset = mt_slots[mt] - 1;
> > -
> > - if (offset >= count) {
> > - pivot = mas->max;
> > - offset = count;
> > - } else {
> > - pivot = pivots[offset];
> > - }
> > -
> > - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > - goto retry;
> > -
> > - while (offset && !mas_slot(mas, slots, offset)) {
> > - pivot = pivots[--offset];
> > - if (pivot >= limit)
> > - break;
> > - }
> > -
> > - /*
> > - * If the slot was null but we've shifted outside the limits, then set
> > - * the range to the last NULL.
> > - */
> > - if (unlikely((pivot < limit) && (offset < mas->offset)))
> > - pivot = pivots[++offset];
> > -
> > - min = mas_safe_min(mas, pivots, offset);
> > - entry = mas_slot(mas, slots, offset);
> > - if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > - goto retry;
> > -
> > - mas->offset = offset;
> > - mas->last = pivot;
> > - mas->index = min;
> > - return entry;
> > + return mas_next_slot(mas, limit);
> > }
> > static inline void *mas_prev_entry(struct ma_state *mas, unsigned long min)
> > {
> > void *entry;
> > - struct maple_enode *prev_enode;
> > - unsigned char prev_offset;
> > if (mas->index < min)
> > return NULL;
> > -retry:
> > - prev_enode = mas->node;
> > - prev_offset = mas->offset;
> > - while (likely(!mas_is_none(mas))) {
> > - entry = mas_prev_nentry(mas, min, mas->index);
> > -
> > - if (likely(entry))
> > - return entry;
> > -
> > - if (unlikely(mas->index <= min))
> > - return NULL;
> > -
> > - if (unlikely(mas_prev_node(mas, min))) {
> > - mas_rewalk(mas, mas->index);
> > - goto retry;
> > - }
> > + entry = mas_prev_slot(mas, min);
> > + if (entry)
> > + return entry;
> > - mas->offset++;
> > - }
> > + if (mas_is_none(mas))
> > + return NULL;
> > - mas->node = prev_enode;
> > - mas->offset = prev_offset;
> > - return NULL;
> > + return mas_prev_slot(mas, min);
> > }
> > /*

2023-05-03 19:34:58

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 11/34] maple_tree: Use MAS_BUG_ON() in mas_set_height()

* Petr Tesařík <[email protected]> [230428 06:10]:
> On Tue, 25 Apr 2023 10:09:32 -0400
> "Liam R. Howlett" <[email protected]> wrote:
>
> > Use MAS_BUG_ON() instead of MT_BUG_ON() to get the maple state
> > information. In the unlikely even of a tree height of > 31, try to increase
> ^^^^
> Again, *event*. Consider buying a new keyboard if your 'T' is broken. ;-)

hanks. I will fix This in v2. :)

>
> Petr T

2023-05-03 19:35:02

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 27/34] maple_tree: Introduce mas_next_slot() interface

* Peng Zhang <[email protected]> [230428 02:48]:
>
>
> On 2023/4/25 22:09, Liam R. Howlett wrote:
> > Sometimes, during a tree walk, the user needs the next slot regardless
> > of whether it is empty or not. Add an interface to get the next slot.
> >
> > Since there are no consecutive NULLs allowed in the tree, the mas_next()
> > function can advance at most two slots. So use the new
> > mas_next_slot() interface to align both implementations.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > ---
> > lib/maple_tree.c | 178 +++++++++++++++++++----------------------------
> > 1 file changed, 71 insertions(+), 107 deletions(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 31cbfd7b44728..fe6c9da6f2bd5 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -4619,15 +4619,16 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> > if (mas->max >= max)
> > goto no_entry;
> > + min = mas->max + 1;
> > + if (min > max)
> > + goto no_entry;
> This check is unnecessary because mas->max < max is already guaranteed here.

Thanks, I will address this in v2.

> > +
> > level = 0;
> > do {
> > if (ma_is_root(node))
> > goto no_entry;
> > - min = mas->max + 1;
> > - if (min > max)
> > - goto no_entry;
> > -
> > + /* Walk up. */
> > if (unlikely(mas_ascend(mas)))
> > return 1;
> > @@ -4645,13 +4646,12 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> > slots = ma_slots(node, mt);
> > pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
> > while (unlikely(level > 1)) {
> > - /* Descend, if necessary */
> > + level--;
> > enode = mas_slot(mas, slots, offset);
> > if (unlikely(ma_dead_node(node)))
> > return 1;
> > mas->node = enode;
> > - level--;
> > node = mas_mn(mas);
> > mt = mte_node_type(mas->node);
> > slots = ma_slots(node, mt);
> > @@ -4680,85 +4680,84 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
> > return 0;
> > }
> > +static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> > +{
> > +retry:
> > + mas_set(mas, index);
> > + mas_state_walk(mas);
> > + if (mas_is_start(mas))
> > + goto retry;
> > +}
> > +
> > +static inline bool mas_rewalk_if_dead(struct ma_state *mas,
> > + struct maple_node *node, const unsigned long index)
> > +{
> > + if (unlikely(ma_dead_node(node))) {
> > + mas_rewalk(mas, index);
> > + return true;
> > + }
> > + return false;
> > +}
> > +
> > /*
> > - * mas_next_nentry() - Get the next node entry
> > - * @mas: The maple state
> > - * @max: The maximum value to check
> > - * @*range_start: Pointer to store the start of the range.
> > + * mas_next_slot() - Get the entry in the next slot
> > *
> > - * Sets @mas->offset to the offset of the next node entry, @mas->last to the
> > - * pivot of the entry.
> > + * @mas: The maple state
> > + * @max: The maximum starting range
> > *
> > - * Return: The next entry, %NULL otherwise
> > + * Return: The entry in the next slot which is possibly NULL
> > */
> > -static inline void *mas_next_nentry(struct ma_state *mas,
> > - struct maple_node *node, unsigned long max, enum maple_type type)
> > +void *mas_next_slot(struct ma_state *mas, unsigned long max)
> > {
> > - unsigned char count;
> > - unsigned long pivot;
> > - unsigned long *pivots;
> > void __rcu **slots;
> > + unsigned long *pivots;
> > + unsigned long pivot;
> > + enum maple_type type;
> > + struct maple_node *node;
> > + unsigned char data_end;
> > + unsigned long save_point = mas->last;
> > void *entry;
> > - if (mas->last == mas->max) {
> > - mas->index = mas->max;
> > - return NULL;
> > - }
> > -
> > - slots = ma_slots(node, type);
> > +retry:
> > + node = mas_mn(mas);
> > + type = mte_node_type(mas->node);
> > pivots = ma_pivots(node, type);
> > - count = ma_data_end(node, type, pivots, mas->max);
> > - if (unlikely(ma_dead_node(node)))
> > - return NULL;
> > -
> > - mas->index = mas_safe_min(mas, pivots, mas->offset);
> > - if (unlikely(ma_dead_node(node)))
> > - return NULL;
> > -
> > - if (mas->index > max)
> > - return NULL;
> > + data_end = ma_data_end(node, type, pivots, mas->max);
> > + pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
> > + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> > + goto retry;
> > - if (mas->offset > count)
> > + if (pivot >= max)
> > return NULL;
> > - while (mas->offset < count) {
> > - pivot = pivots[mas->offset];
> > - entry = mas_slot(mas, slots, mas->offset);
> > - if (ma_dead_node(node))
> > - return NULL;
> > -
> > - mas->last = pivot;
> > - if (entry)
> > - return entry;
> > -
> > - if (pivot >= max)
> > - return NULL;
> > + if (likely(data_end > mas->offset)) {
> > + mas->offset++;
> > + mas->index = mas->last + 1;
> > + } else {
> > + if (mas_next_node(mas, node, max)) {
> > + mas_rewalk(mas, save_point);
> > + goto retry;
> > + }
> > - if (pivot >= mas->max)
> > + if (mas_is_none(mas))
> > return NULL;
> > - mas->index = pivot + 1;
> > - mas->offset++;
> > + mas->offset = 0;
> > + mas->index = mas->min;
> > + node = mas_mn(mas);
> > + type = mte_node_type(mas->node);
> > + pivots = ma_pivots(node, type);
> > }
> > - pivot = mas_logical_pivot(mas, pivots, mas->offset, type);
> > + slots = ma_slots(node, type);
> > + mas->last = mas_logical_pivot(mas, pivots, mas->offset, type);
> > entry = mas_slot(mas, slots, mas->offset);
> > - if (ma_dead_node(node))
> > - return NULL;
> > + if (unlikely(mas_rewalk_if_dead(mas, node, save_point)))
> > + goto retry;
> > - mas->last = pivot;
> > return entry;
> > }
> > -static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> > -{
> > -retry:
> > - mas_set(mas, index);
> > - mas_state_walk(mas);
> > - if (mas_is_start(mas))
> > - goto retry;
> > -}
> > -
> > /*
> > * mas_next_entry() - Internal function to get the next entry.
> > * @mas: The maple state
> > @@ -4774,47 +4773,18 @@ static inline void mas_rewalk(struct ma_state *mas, unsigned long index)
> > static inline void *mas_next_entry(struct ma_state *mas, unsigned long limit)
> > {
> > void *entry = NULL;
> > - struct maple_node *node;
> > - unsigned long last;
> > - enum maple_type mt;
> > if (mas->last >= limit)
> > return NULL;
> > - last = mas->last;
> > -retry:
> > - node = mas_mn(mas);
> > - mt = mte_node_type(mas->node);
> > - mas->offset++;
> > - if (unlikely(mas->offset >= mt_slots[mt])) {
> > - mas->offset = mt_slots[mt] - 1;
> > - goto next_node;
> > - }
> > -
> > - while (!mas_is_none(mas)) {
> > - entry = mas_next_nentry(mas, node, limit, mt);
> > - if (unlikely(ma_dead_node(node))) {
> > - mas_rewalk(mas, last);
> > - goto retry;
> > - }
> > -
> > - if (likely(entry))
> > - return entry;
> > -
> > - if (unlikely((mas->last >= limit)))
> > - return NULL;
> > + entry = mas_next_slot_limit(mas, limit);
> > + if (entry)
> > + return entry;
> > -next_node:
> > - if (unlikely(mas_next_node(mas, node, limit))) {
> > - mas_rewalk(mas, last);
> > - goto retry;
> > - }
> > - mas->offset = 0;
> > - node = mas_mn(mas);
> > - mt = mte_node_type(mas->node);
> > - }
> > + if (mas_is_none(mas))
> > + return NULL;
> > - return NULL;
> > + return mas_next_slot_limit(mas, limit);
> > }
> > /*
> > @@ -4845,10 +4815,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> > slots = ma_slots(mn, mt);
> > pivots = ma_pivots(mn, mt);
> > count = ma_data_end(mn, mt, pivots, mas->max);
> > - if (unlikely(ma_dead_node(mn))) {
> > - mas_rewalk(mas, index);
> > + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > goto retry;
> > - }
> > offset = mas->offset - 1;
> > if (offset >= mt_slots[mt])
> > @@ -4861,10 +4829,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> > pivot = pivots[offset];
> > }
> > - if (unlikely(ma_dead_node(mn))) {
> > - mas_rewalk(mas, index);
> > + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > goto retry;
> > - }
> > while (offset && !mas_slot(mas, slots, offset)) {
> > pivot = pivots[--offset];
> > @@ -4881,10 +4847,8 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
> > min = mas_safe_min(mas, pivots, offset);
> > entry = mas_slot(mas, slots, offset);
> > - if (unlikely(ma_dead_node(mn))) {
> > - mas_rewalk(mas, index);
> > + if (unlikely(mas_rewalk_if_dead(mas, mn, index)))
> > goto retry;
> > - }
> > mas->offset = offset;
> > mas->last = pivot;
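
For readers skimming the diff, the behavioural point of the new slot/range helpers is that the iterator now reports the next range whether or not it holds an entry ("possibly NULL"), instead of silently skipping empty ranges the way the old next-entry loop did. The toy program below is only a user-space model of that distinction, written in plain C with made-up names (toy_next_entry()/toy_next_slot(), a fixed array of ranges); it is not the maple tree API.

#include <stdio.h>

/* Toy model: a few contiguous ranges, some of which hold no entry. */
struct toy_range {
	unsigned long first, last;
	const char *entry;		/* NULL means the range is empty */
};

static struct toy_range ranges[] = {
	{ 0x0000, 0x0fff, NULL },
	{ 0x1000, 0x1500, "A"  },
	{ 0x1501, 0x1fff, NULL },
	{ 0x2000, 0x2500, "B"  },
};
#define NR_RANGES (sizeof(ranges) / sizeof(ranges[0]))

/* Old style: keep advancing until a non-NULL entry turns up. */
static const char *toy_next_entry(unsigned int *pos)
{
	while (++*pos < NR_RANGES) {
		if (ranges[*pos].entry)
			return ranges[*pos].entry;
	}
	return NULL;
}

/* New style: advance exactly one range and return its entry, possibly NULL. */
static const char *toy_next_slot(unsigned int *pos)
{
	if (++*pos >= NR_RANGES)
		return NULL;
	return ranges[*pos].entry;
}

int main(void)
{
	unsigned int pos = 0;
	const char *e;

	printf("next-entry style (empty ranges are skipped):\n");
	while ((e = toy_next_entry(&pos)))
		printf("  %#lx-%#lx -> %s\n", ranges[pos].first, ranges[pos].last, e);

	printf("next-slot style (every range is reported):\n");
	for (pos = 0; pos + 1 < NR_RANGES;) {
		e = toy_next_slot(&pos);
		printf("  %#lx-%#lx -> %s\n", ranges[pos].first, ranges[pos].last,
		       e ? e : "(empty)");
	}
	return 0;
}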

2023-05-04 03:14:42

by Yujie Liu

[permalink] [raw]
Subject: Re: [PATCH 23/34] maple_tree: Try harder to keep active node after mas_next()

Hello,

kernel test robot noticed "BUG:Bad_rss-counter_state_mm:#type:MM_FILEPAGES_val" on:

commit: e56e7042dca07a9de8c957c1d67f246b8f8183ee ("[PATCH 23/34] maple_tree: Try harder to keep active node after mas_next()")
url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/all/[email protected]/
patch subject: [PATCH 23/34] maple_tree: Try harder to keep active node after mas_next()

in testcase: trinity
version:
with the following parameters:

runtime: 300s

test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/

compiler: gcc-11
test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

(please refer to attached dmesg/kmsg for entire log/backtrace)


If you fix the issue, kindly add the following tag
| Reported-by: kernel test robot <[email protected]>
| Link: https://lore.kernel.org/oe-lkp/[email protected]


[ 25.976555][ T2770] BUG: Bad rss-counter state mm:00000000f0004b17 type:MM_FILEPAGES val:2467
[ 25.979876][ T2770] BUG: Bad rss-counter state mm:00000000f0004b17 type:MM_ANONPAGES val:815
[ 25.981154][ T2770] BUG: non-zero pgtables_bytes on freeing mm: 53248
[ 26.897355][ T3061] Zero length message leads to an empty skb
[ 26.935222][ T26] audit: type=1326 audit(1682538244.461:4): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=3061 comm="trinity-c2" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0
[ 26.939639][ T1430] [main] 10391 iterations. [F:7791 S:2536 HI:1723]
[ 26.939649][ T1430]
[ 27.950645][ T26] audit: type=1326 audit(1682538245.477:5): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=2950 comm="trinity-c0" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0
[ 30.095254][ T26] audit: type=1326 audit(1682538247.625:6): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=3070 comm="trinity-c5" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0
[ 30.269599][ T3095] scsi_nl_rcv_msg: discarding partial skb
[ 31.025282][ T26] audit: type=1326 audit(1682538248.553:7): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=3099 comm="trinity-c0" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0
[ 32.299465][ T1430] [main] 20608 iterations. [F:15638 S:4833 HI:1813]
[ 32.299476][ T1430]
[ 33.365345][ T3089] can: request_module (can-proto-3) failed.
[ 34.241128][ T3280] futex_wake_op: trinity-c7 tries to shift op by -1; fix this program
[ 41.300839][ T1430] [main] 31062 iterations. [F:23567 S:7302 HI:2941]
[ 41.300851][ T1430]
[ 41.395010][ T3261] futex_wake_op: trinity-c4 tries to shift op by 1917; fix this program
[ 51.944041][ T3471] BUG: Bad rss-counter state mm:00000000dcb60c0e type:MM_FILEPAGES val:2467
[ 51.945501][ T3471] BUG: Bad rss-counter state mm:00000000dcb60c0e type:MM_ANONPAGES val:860
[ 51.946758][ T3471] BUG: non-zero pgtables_bytes on freeing mm: 53248
[ 53.949886][ T2770] BUG: Bad rss-counter state mm:000000005666b194 type:MM_FILEPAGES val:2467
[ 53.951288][ T2770] BUG: Bad rss-counter state mm:000000005666b194 type:MM_ANONPAGES val:847
[ 53.952547][ T2770] BUG: non-zero pgtables_bytes on freeing mm: 53248
[ 56.044667][ T1430] [main] 41190 iterations. [F:31257 S:9679 HI:2944]
[ 56.044680][ T1430]
[ 57.218048][ T3537] BUG: Bad rss-counter state mm:00000000076661cb type:MM_ANONPAGES val:4
[ 57.219389][ T3537] BUG: non-zero pgtables_bytes on freeing mm: 16384
[ 58.107193][ T2770] BUG: Bad rss-counter state mm:000000003f7bfeb5 type:MM_FILEPAGES val:2467
[ 58.108592][ T2770] BUG: Bad rss-counter state mm:000000003f7bfeb5 type:MM_ANONPAGES val:846
[ 58.109885][ T2770] BUG: non-zero pgtables_bytes on freeing mm: 53248
[ 60.294818][ T26] audit: type=1326 audit(1682538277.821:8): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=3565 comm="trinity-c6" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0
[ 62.443729][ T26] audit: type=1326 audit(1682538279.973:9): auid=4294967295 uid=65534 gid=65534 ses=4294967295 pid=3589 comm="trinity-c4" exe="/bin/trinity" sig=9 arch=c000003e syscall=8 compat=0 ip=0x454ba7 code=0x0

kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-meta-89.cgz
-m 16384
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::32032-:22
-boot order=nc
-no-reboot
-device i6300esb
-watchdog-action debug
-rtc base=localtime
-serial stdio
-display none
-monitor null
)

append=(
ip=::::vm-meta-89::dhcp
root=/dev/ram0
RESULT_ROOT=/result/trinity/300s/vm-snb/quantal-x86_64-core-20190426.cgz/x86_64-kexec/gcc-11/e56e7042dca07a9de8c957c1d67f246b8f8183ee/1
BOOT_IMAGE=/pkg/linux/x86_64-kexec/gcc-11/e56e7042dca07a9de8c957c1d67f246b8f8183ee/vmlinuz-6.3.0-rc5-00661-ge56e7042dca0
branch=linux-review/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
job=/job-script
user=lkp
ARCH=x86_64
kconfig=x86_64-kexec
commit=e56e7042dca07a9de8c957c1d67f246b8f8183ee
initcall_debug
nmi_watchdog=0
vmalloc=256M
initramfs_async=0
page_owner=on
max_uptime=1200
result_service=tmpfs
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw
rcuperf.shutdown=0
watchdog_thresh=240
)

"${kvm[@]}" -append "${append[*]}"


To reproduce:

# build kernel
cd linux
cp config-6.3.0-rc5-00661-ge56e7042dca0 .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz


git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email

# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


Attachments:
(No filename) (6.56 kB)
config-6.3.0-rc5-00661-ge56e7042dca0 (131.98 kB)
job-script (4.43 kB)
dmesg.xz (28.39 kB)

2023-05-04 03:35:26

by Peng Zhang

[permalink] [raw]
Subject: Re: [PATCH 26/34] maple_tree: Update testing code for mas_{next,prev,walk}



On 2023/4/25 22:09, Liam R. Howlett wrote:
> Now that the functions have changed the limits, update the testing of
> the maple tree to test these new settings.
>
> Signed-off-by: Liam R. Howlett <[email protected]>
> ---
> lib/test_maple_tree.c | 641 +++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 635 insertions(+), 6 deletions(-)
>
> diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c
> index ae08d34d1d3c4..345eef526d8b0 100644
> --- a/lib/test_maple_tree.c
> +++ b/lib/test_maple_tree.c
> @@ -1290,6 +1290,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
> mas_lock(&mas);
> mas_set(&mas, 3);
> ptr = mas_walk(&mas);
> + MAS_BUG_ON(&mas, mas.index != 0);
> MT_BUG_ON(mt, ptr != NULL);
> MT_BUG_ON(mt, mas.index != 0);
> MT_BUG_ON(mt, mas.last != ULONG_MAX);
> @@ -1300,7 +1301,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
>
> mas_set(&mas, 0);
> ptr = mas_walk(&mas);
> - MT_BUG_ON(mt, ptr != NULL);
> + MAS_BUG_ON(&mas, ptr != NULL);
>
> mas_set(&mas, 1);
> ptr = mas_walk(&mas);
> @@ -1359,7 +1360,7 @@ static noinline void __init check_root_expand(struct maple_tree *mt)
> mas_store_gfp(&mas, ptr, GFP_KERNEL);
> ptr = mas_next(&mas, ULONG_MAX);
> MT_BUG_ON(mt, ptr != NULL);
> - MT_BUG_ON(mt, (mas.index != 1) && (mas.last != ULONG_MAX));
> + MAS_BUG_ON(&mas, (mas.index != ULONG_MAX) && (mas.last != ULONG_MAX));
>
> mas_set(&mas, 1);
> ptr = mas_prev(&mas, 0);
> @@ -1768,12 +1769,12 @@ static noinline void __init check_iteration(struct maple_tree *mt)
> mas.index = 760;
> mas.last = 765;
> mas_store(&mas, val);
> - mas_next(&mas, ULONG_MAX);
> }
> i++;
> }
> /* Make sure the next find returns the one after 765, 766-769 */
> val = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, val != xa_mk_value(76));
> MT_BUG_ON(mt, val != xa_mk_value(76));
> mas_unlock(&mas);
> mas_destroy(&mas);
> @@ -1979,7 +1980,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
>
> val = mas_next(&mas, ULONG_MAX);
> MT_BUG_ON(mt, val != NULL);
> - MT_BUG_ON(mt, mas.index != ULONG_MAX);
> + MT_BUG_ON(mt, mas.index != 0x7d6);
> MT_BUG_ON(mt, mas.last != ULONG_MAX);
>
> val = mas_prev(&mas, 0);
> @@ -2003,7 +2004,8 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
> val = mas_prev(&mas, 0);
> MT_BUG_ON(mt, val != NULL);
> MT_BUG_ON(mt, mas.index != 0);
> - MT_BUG_ON(mt, mas.last != 0);
> + MT_BUG_ON(mt, mas.last != 5);
> + MT_BUG_ON(mt, mas.node != MAS_NONE);
>
> mas.index = 0;
> mas.last = 5;
> @@ -2015,7 +2017,7 @@ static noinline void __init next_prev_test(struct maple_tree *mt)
> val = mas_prev(&mas, 0);
> MT_BUG_ON(mt, val != NULL);
> MT_BUG_ON(mt, mas.index != 0);
> - MT_BUG_ON(mt, mas.last != 0);
> + MT_BUG_ON(mt, mas.last != 9);
> mas_unlock(&mas);
>
> mtree_destroy(mt);
> @@ -2718,6 +2720,629 @@ static noinline void __init check_empty_area_fill(struct maple_tree *mt)
> mt_set_non_kernel(0);
> }
>
> +/*
> + * Check MAS_START, MAS_PAUSE, active (implied), and MAS_NONE transitions.
> + *
> + * The table below shows the single entry tree (0-0 pointer) and normal tree
> + * with nodes.
> + *
> + * Function ENTRY Start Result index & last
> + * ┬ ┬ ┬ ┬ ┬
> + * │ │ │ │ └─ the final range
> + * │ │ │ └─ The node value after execution
> + * │ │ └─ The node value before execution
> + * │ └─ If the entry exists of does not exists (DNE)
of->or?
> + * └─ The function name
> + *
> + * Function ENTRY Start Result index & last
> + * mas_next()
> + * - after last
> + * Single entry tree at 0-0
> + * ------------------------
> + * DNE MAS_START MAS_NONE 1 - oo
> + * DNE MAS_PAUSE MAS_NONE 1 - oo
> + * DNE MAS_ROOT MAS_NONE 1 - oo
> + * when index = 0
> + * DNE MAS_NONE MAS_ROOT 0
> + * when index > 0
> + * DNE MAS_NONE MAS_NONE 1 - oo
> + *
> + * Normal tree
> + * -----------
> + * exists MAS_START active range
> + * DNE MAS_START active set to last range
> + * exists MAS_PAUSE active range
> + * DNE MAS_PAUSE active set to last range
> + * exists MAS_NONE active range
> + * exists active active range
> + * DNE active active set to last range
> + *
> + * Function ENTRY Start Result index & last
> + * mas_prev()
> + * - before index
> + * Single entry tree at 0-0
> + * ------------------------
> + * if index > 0
> + * exists MAS_START MAS_ROOT 0
> + * exists MAS_PAUSE MAS_ROOT 0
> + * exists MAS_NONE MAS_ROOT 0
> + *
> + * if index == 0
> + * DNE MAS_START MAS_NONE 0
> + * DNE MAS_PAUSE MAS_NONE 0
> + * DNE MAS_NONE MAS_NONE 0
> + * DNE MAS_ROOT MAS_NONE 0
> + *
> + * Normal tree
> + * -----------
> + * exists MAS_START active range
> + * DNE MAS_START active set to min
> + * exists MAS_PAUSE active range
> + * DNE MAS_PAUSE active set to min
> + * exists MAS_NONE active range
> + * DNE MAS_NONE MAS_NONE set to min
> + * any MAS_ROOT MAS_NONE 0
> + * exists active active range
> + * DNE active active last range
> + *
> + * Function ENTRY Start Result index & last
> + * mas_find()
> + * - at index or next
> + * Single entry tree at 0-0
> + * ------------------------
> + * if index > 0
> + * DNE MAS_START MAS_NONE 0
> + * DNE MAS_PAUSE MAS_NONE 0
> + * DNE MAS_ROOT MAS_NONE 0
> + * DNE MAS_NONE MAS_NONE 0
> + * if index == 0
> + * exists MAS_START MAS_ROOT 0
> + * exists MAS_PAUSE MAS_ROOT 0
> + * exists MAS_NONE MAS_ROOT 0
> + *
> + * Normal tree
> + * -----------
> + * exists MAS_START active range
> + * DNE MAS_START active set to max
> + * exists MAS_PAUSE active range
> + * DNE MAS_PAUSE active set to max
> + * exists MAS_NONE active range
> + * exists active active range
> + * DNE active active last range (max < last)
> + *
> + * Function ENTRY Start Result index & last
> + * mas_find_rev()
> + * - at index or before
> + * Single entry tree at 0-0
> + * ------------------------
> + * if index > 0
> + * exists MAS_START MAS_ROOT 0
> + * exists MAS_PAUSE MAS_ROOT 0
> + * exists MAS_NONE MAS_ROOT 0
> + * if index == 0
> + * DNE MAS_START MAS_NONE 0
> + * DNE MAS_PAUSE MAS_NONE 0
> + * DNE MAS_NONE MAS_NONE 0
> + * DNE MAS_ROOT MAS_NONE 0
> + *
> + * Normal tree
> + * -----------
> + * exists MAS_START active range
> + * DNE MAS_START active set to min
> + * exists MAS_PAUSE active range
> + * DNE MAS_PAUSE active set to min
> + * exists MAS_NONE active range
> + * exists active active range
> + * DNE active active last range (min > index)
> + *
> + * Function ENTRY Start Result index & last
> + * mas_walk()
> + * - Look up index
> + * Single entry tree at 0-0
> + * ------------------------
> + * if index > 0
> + * DNE MAS_START MAS_ROOT 1 - oo
> + * DNE MAS_PAUSE MAS_ROOT 1 - oo
> + * DNE MAS_NONE MAS_ROOT 1 - oo
> + * DNE MAS_ROOT MAS_ROOT 1 - oo
> + * if index == 0
> + * exists MAS_START MAS_ROOT 0
> + * exists MAS_PAUSE MAS_ROOT 0
> + * exists MAS_NONE MAS_ROOT 0
> + * exists MAS_ROOT MAS_ROOT 0
> + *
> + * Normal tree
> + * -----------
> + * exists MAS_START active range
> + * DNE MAS_START active range of NULL
> + * exists MAS_PAUSE active range
> + * DNE MAS_PAUSE active range of NULL
> + * exists MAS_NONE active range
> + * DNE MAS_NONE active range of NULL
> + * exists active active range
> + * DNE active active range of NULL
> + */
> +
> +#define mas_active(x) (((x).node != MAS_ROOT) && \
> + ((x).node != MAS_START) && \
> + ((x).node != MAS_PAUSE) && \
> + ((x).node != MAS_NONE))
> +static noinline void __init check_state_handling(struct maple_tree *mt)
> +{
> + MA_STATE(mas, mt, 0, 0);
> + void *entry, *ptr = (void *) 0x1234500;
> + void *ptr2 = &ptr;
> + void *ptr3 = &ptr2;
> +
> + /* Check MAS_ROOT First */
> + mtree_store_range(mt, 0, 0, ptr, GFP_KERNEL);
> +
> + mas_lock(&mas);
> + /* prev: Start -> none */
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* prev: Start -> root */
> + mas_set(&mas, 10);
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* prev: pause -> root */
> + mas_set(&mas, 10);
> + mas_pause(&mas);
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* next: start -> none */
> + mas_set(&mas, 0);
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* next: start -> none */
> + mas_set(&mas, 10);
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find: start -> root */
> + mas_set(&mas, 0);
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* find: root -> none */
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find: none -> none */
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find: start -> none */
> + mas_set(&mas, 10);
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find_rev: none -> root */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* find_rev: start -> root */
> + mas_set(&mas, 0);
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* find_rev: root -> none */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find_rev: none -> none */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* find_rev: start -> root */
> + mas_set(&mas, 10);
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* walk: start -> none */
> + mas_set(&mas, 10);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* walk: pause -> none*/
> + mas_set(&mas, 10);
> + mas_pause(&mas);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* walk: none -> none */
> + mas.index = mas.last = 10;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* walk: none -> none */
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* walk: start -> root */
> + mas_set(&mas, 0);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* walk: pause -> root */
> + mas_set(&mas, 0);
> + mas_pause(&mas);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* walk: none -> root */
> + mas.node = MAS_NONE;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* walk: root -> root */
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + /* walk: root -> none */
> + mas_set(&mas, 10);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 1);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, mas.node != MAS_NONE);
> +
> + /* walk: none -> root */
> + mas.index = mas.last = 0;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0);
> + MAS_BUG_ON(&mas, mas.node != MAS_ROOT);
> +
> + mas_unlock(&mas);
> +
> + /* Check when there is an actual node */
> + mtree_store_range(mt, 0, 0, NULL, GFP_KERNEL);
> + mtree_store_range(mt, 0x1000, 0x1500, ptr, GFP_KERNEL);
> + mtree_store_range(mt, 0x2000, 0x2500, ptr2, GFP_KERNEL);
> + mtree_store_range(mt, 0x3000, 0x3500, ptr3, GFP_KERNEL);
> +
> + mas_lock(&mas);
> +
> + /* next: start ->active */
> + mas_set(&mas, 0);
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next: pause ->active */
> + mas_set(&mas, 0);
> + mas_pause(&mas);
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next: none ->active */
> + mas.index = mas.last = 0;
> + mas.offset = 0;
> + mas.node = MAS_NONE;
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next:active ->active */
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr2);
> + MAS_BUG_ON(&mas, mas.index != 0x2000);
> + MAS_BUG_ON(&mas, mas.last != 0x2500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next:active -> active out of range*/
> + entry = mas_next(&mas, 0x2999);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x2501);
> + MAS_BUG_ON(&mas, mas.last != 0x2fff);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* Continue after out of range*/
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr3);
> + MAS_BUG_ON(&mas, mas.index != 0x3000);
> + MAS_BUG_ON(&mas, mas.last != 0x3500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next:active -> active out of range*/
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x3501);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* next: none -> active, skip value at location */
> + mas_set(&mas, 0);
> + entry = mas_next(&mas, ULONG_MAX);
> + mas.node = MAS_NONE;
> + mas.offset = 0;
> + entry = mas_next(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr2);
> + MAS_BUG_ON(&mas, mas.index != 0x2000);
> + MAS_BUG_ON(&mas, mas.last != 0x2500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* prev:active ->active */
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* prev:active -> active out of range*/
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0x0FFF);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* prev: pause ->active */
> + mas_set(&mas, 0x3600);
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr3);
> + mas_pause(&mas);
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr2);
> + MAS_BUG_ON(&mas, mas.index != 0x2000);
> + MAS_BUG_ON(&mas, mas.last != 0x2500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* prev:active -> active out of range*/
> + entry = mas_prev(&mas, 0x1600);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x1501);
> + MAS_BUG_ON(&mas, mas.last != 0x1FFF);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* prev: active ->active, continue*/
> + entry = mas_prev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find: start ->active */
> + mas_set(&mas, 0);
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find: pause ->active */
> + mas_set(&mas, 0);
> + mas_pause(&mas);
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find: start ->active on value */;
> + mas_set(&mas, 1200);
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find:active ->active */
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != ptr2);
> + MAS_BUG_ON(&mas, mas.index != 0x2000);
> + MAS_BUG_ON(&mas, mas.last != 0x2500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> +
> + /* find:active -> active (NULL)*/
> + entry = mas_find(&mas, 0x2700);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x2501);
> + MAS_BUG_ON(&mas, mas.last != 0x2FFF);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find: none ->active */
> + entry = mas_find(&mas, 0x5000);
> + MAS_BUG_ON(&mas, entry != ptr3);
> + MAS_BUG_ON(&mas, mas.index != 0x3000);
> + MAS_BUG_ON(&mas, mas.last != 0x3500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find:active -> active (NULL) end*/
> + entry = mas_find(&mas, ULONG_MAX);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x3501);
> + MAS_BUG_ON(&mas, mas.last != ULONG_MAX);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find_rev: active (END) ->active */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr3);
> + MAS_BUG_ON(&mas, mas.index != 0x3000);
> + MAS_BUG_ON(&mas, mas.last != 0x3500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find_rev:active ->active */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr2);
> + MAS_BUG_ON(&mas, mas.index != 0x2000);
> + MAS_BUG_ON(&mas, mas.last != 0x2500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find_rev: pause ->active */
> + mas_pause(&mas);
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find_rev:active -> active */
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0);
> + MAS_BUG_ON(&mas, mas.last != 0x0FFF);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* find_rev: start ->active */
> + mas_set(&mas, 0x1200);
> + entry = mas_find_rev(&mas, 0);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk start ->active */
> + mas_set(&mas, 0x1200);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk start ->active */
> + mas_set(&mas, 0x1600);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x1501);
> + MAS_BUG_ON(&mas, mas.last != 0x1fff);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk pause ->active */
> + mas_set(&mas, 0x1200);
> + mas_pause(&mas);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk pause -> active */
> + mas_set(&mas, 0x1600);
> + mas_pause(&mas);
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x1501);
> + MAS_BUG_ON(&mas, mas.last != 0x1fff);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk none -> active */
> + mas_set(&mas, 0x1200);
> + mas.node = MAS_NONE;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk none -> active */
> + mas_set(&mas, 0x1600);
> + mas.node = MAS_NONE;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x1501);
> + MAS_BUG_ON(&mas, mas.last != 0x1fff);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk active -> active */
> + mas.index = 0x1200;
> + mas.last = 0x1200;
> + mas.offset = 0;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != ptr);
> + MAS_BUG_ON(&mas, mas.index != 0x1000);
> + MAS_BUG_ON(&mas, mas.last != 0x1500);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + /* mas_walk active -> active */
> + mas.index = 0x1600;
> + mas.last = 0x1600;
> + entry = mas_walk(&mas);
> + MAS_BUG_ON(&mas, entry != NULL);
> + MAS_BUG_ON(&mas, mas.index != 0x1501);
> + MAS_BUG_ON(&mas, mas.last != 0x1fff);
> + MAS_BUG_ON(&mas, !mas_active(mas));
> +
> + mas_unlock(&mas);
> +}
> +
> static DEFINE_MTREE(tree);
> static int __init maple_tree_seed(void)
> {
> @@ -2979,6 +3604,10 @@ static int __init maple_tree_seed(void)
> mtree_destroy(&tree);
>
>
> + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
> + check_state_handling(&tree);
> + mtree_destroy(&tree);
> +
> #if defined(BENCH)
> skip:
> #endif

2023-05-04 18:53:15

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 26/34] maple_tree: Update testing code for mas_{next,prev,walk}

* Peng Zhang <[email protected]> [230503 23:33]:
>
>
> On 2023/4/25 22:09, Liam R. Howlett wrote:
> > Now that the functions have changed the limits, update the testing of
> > the maple tree to test these new settings.
> >
> > Signed-off-by: Liam R. Howlett <[email protected]>
> > +/*
> > + * Check MAS_START, MAS_PAUSE, active (implied), and MAS_NONE transitions.
> > + *
> > + * The table below shows the single entry tree (0-0 pointer) and normal tree
> > + * with nodes.
> > + *
> > + * Function ENTRY Start Result index & last
> > + * ┬ ┬ ┬ ┬ ┬
> > + * │ │ │ │ └─ the final range
> > + * │ │ │ └─ The node value after execution
> > + * │ │ └─ The node value before execution
> > + * │ └─ If the entry exists of does not exists (DNE)
> of->or?

Yes, thank you.


2023-05-05 17:26:06

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH 33/34] maple_tree: Add testing for mas_{prev,next}_range()

* kernel test robot <[email protected]> [230425 21:42]:
> Hi Liam,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on akpm-mm/mm-everything]
> [also build test ERROR on linus/master v6.3 next-20230425]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
> base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link: https://lore.kernel.org/r/20230425140955.3834476-34-Liam.Howlett%40oracle.com
> patch subject: [PATCH 33/34] maple_tree: Add testing for mas_{prev,next}_range()
> config: hexagon-randconfig-r045-20230424 (https://download.01.org/0day-ci/archive/20230426/[email protected]/config)
> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project 437b7602e4a998220871de78afcb020b9c14a661)
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/intel-lab-lkp/linux/commit/571139f33a7ede9db66c7892c40e88ed66a32bc5
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review Liam-R-Howlett/maple_tree-Fix-static-analyser-cppcheck-issue/20230425-233958
> git checkout 571139f33a7ede9db66c7892c40e88ed66a32bc5
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon olddefconfig
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash
>
> If you fix the issue, kindly add the following tag where applicable
> | Reported-by: kernel test robot <[email protected]>
> | Link: https://lore.kernel.org/oe-kbuild-all/[email protected]/

Thanks. This will be fixed in v2.
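
For anyone else reading the report: the -Wint-conversion errors are all
fallout of the first two - once mas_find_range() and mas_find_range_rev()
are implicitly declared as returning int, every assignment of their result
to a void * trips the warning. For reference, a sketch of what
lib/test_maple_tree.c expects to be in scope (prototypes inferred from the
call sites, and the mas_contiguous() body reconstructed from the macro
expansion note below; names may still change in v2):

	void *mas_find_range(struct ma_state *mas, unsigned long max);
	void *mas_find_range_rev(struct ma_state *mas, unsigned long min);

	/* per the tests below: stops at the first empty range or at __max */
	#define mas_contiguous(__mas, __entry, __max) \
		while (((__entry) = mas_find_range((__mas), (__max))) != NULL)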

>
> All errors (new ones prefixed by >>):
>
> >> lib/test_maple_tree.c:3437:17: error: call to undeclared function 'mas_find_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
> void *entry = mas_find_range(&mas, ULONG_MAX);
> ^
> >> lib/test_maple_tree.c:3437:9: error: incompatible integer to pointer conversion initializing 'void *' with an expression of type 'int' [-Wint-conversion]
> void *entry = mas_find_range(&mas, ULONG_MAX);
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3443:2: error: call to undeclared function 'mas_find_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
> mas_contiguous(&mas, test, ULONG_MAX) {
> ^
> include/linux/maple_tree.h:545:22: note: expanded from macro 'mas_contiguous'
> while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
> ^
> >> lib/test_maple_tree.c:3443:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> mas_contiguous(&mas, test, ULONG_MAX) {
> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
> while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3453:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> mas_contiguous(&mas, test, ULONG_MAX) {
> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
> while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3464:2: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> mas_contiguous(&mas, test, 340) {
> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> include/linux/maple_tree.h:545:20: note: expanded from macro 'mas_contiguous'
> while (((__entry) = mas_find_range((__mas), (__max))) != NULL)
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3476:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> test = mas_find_range(&mas, ULONG_MAX);
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >> lib/test_maple_tree.c:3482:9: error: call to undeclared function 'mas_find_range_rev'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
> test = mas_find_range_rev(&mas, 0);
> ^
> lib/test_maple_tree.c:3482:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> test = mas_find_range_rev(&mas, 0);
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3487:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> test = mas_find_range_rev(&mas, 0);
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> lib/test_maple_tree.c:3493:7: error: incompatible integer to pointer conversion assigning to 'void *' from 'int' [-Wint-conversion]
> test = mas_find_range_rev(&mas, 340);
> ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 11 errors generated.
>
>
> vim +/mas_find_range +3437 lib/test_maple_tree.c
>
> 3358
> 3359 static noinline void __init check_slot_iterators(struct maple_tree *mt)
> 3360 {
> 3361 MA_STATE(mas, mt, 0, 0);
> 3362 unsigned long i, index = 40;
> 3363 unsigned char offset = 0;
> 3364 void *test;
> 3365
> 3366 mt_set_non_kernel(99999);
> 3367
> 3368 mas_lock(&mas);
> 3369 for (i = 0; i <= index; i++) {
> 3370 unsigned long end = 5;
> 3371 if (i > 20 && i < 35)
> 3372 end = 9;
> 3373 mas_set_range(&mas, i*10, i*10 + end);
> 3374 mas_store_gfp(&mas, xa_mk_value(i), GFP_KERNEL);
> 3375 }
> 3376
> 3377 i = 21;
> 3378 mas_set(&mas, i*10);
> 3379 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
> 3380 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != NULL);
> 3381 MAS_BUG_ON(&mas, mas.index != 206);
> 3382 MAS_BUG_ON(&mas, mas.last != 209);
> 3383
> 3384 i--;
> 3385 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
> 3386 MAS_BUG_ON(&mas, mas.index != 200);
> 3387 MAS_BUG_ON(&mas, mas.last != 205);
> 3388
> 3389 i = 25;
> 3390 mas_set(&mas, i*10);
> 3391 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
> 3392 MAS_BUG_ON(&mas, mas.offset != 0);
> 3393
> 3394 /* Previous range is in another node */
> 3395 i--;
> 3396 MAS_BUG_ON(&mas, mas_prev_range(&mas, 0) != xa_mk_value(i));
> 3397 MAS_BUG_ON(&mas, mas.index != 240);
> 3398 MAS_BUG_ON(&mas, mas.last != 249);
> 3399
> 3400 /* Shift back with mas_next */
> 3401 i++;
> 3402 MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
> 3403 MAS_BUG_ON(&mas, mas.index != 250);
> 3404 MAS_BUG_ON(&mas, mas.last != 259);
> 3405
> 3406 i = 33;
> 3407 mas_set(&mas, i*10);
> 3408 MAS_BUG_ON(&mas, mas_walk(&mas) != xa_mk_value(i));
> 3409 MAS_BUG_ON(&mas, mas.index != 330);
> 3410 MAS_BUG_ON(&mas, mas.last != 339);
> 3411
> 3412 /* Next range is in another node */
> 3413 i++;
> 3414 MAS_BUG_ON(&mas, mas_next_range(&mas, ULONG_MAX) != xa_mk_value(i));
> 3415 MAS_BUG_ON(&mas, mas.offset != 0);
> 3416 MAS_BUG_ON(&mas, mas.index != 340);
> 3417 MAS_BUG_ON(&mas, mas.last != 349);
> 3418
> 3419 /* Next out of range */
> 3420 i++;
> 3421 MAS_BUG_ON(&mas, mas_next_range(&mas, i*10 - 1) != NULL);
> 3422 /* maple state does not move */
> 3423 MAS_BUG_ON(&mas, mas.offset != 0);
> 3424 MAS_BUG_ON(&mas, mas.index != 340);
> 3425 MAS_BUG_ON(&mas, mas.last != 349);
> 3426
> 3427 /* Prev out of range */
> 3428 i--;
> 3429 MAS_BUG_ON(&mas, mas_prev_range(&mas, i*10 + 1) != NULL);
> 3430 /* maple state does not move */
> 3431 MAS_BUG_ON(&mas, mas.offset != 0);
> 3432 MAS_BUG_ON(&mas, mas.index != 340);
> 3433 MAS_BUG_ON(&mas, mas.last != 349);
> 3434
> 3435 mas_set(&mas, 210);
> 3436 for (i = 210; i<= 350; i += 10) {
> > 3437 void *entry = mas_find_range(&mas, ULONG_MAX);
> 3438
> 3439 MAS_BUG_ON(&mas, entry != xa_mk_value(i/10));
> 3440 }
> 3441
> 3442 mas_set(&mas, 0);
> > 3443 mas_contiguous(&mas, test, ULONG_MAX) {
> 3444 MAS_BUG_ON(&mas, test != xa_mk_value(0));
> 3445 MAS_BUG_ON(&mas, mas.index != 0);
> 3446 MAS_BUG_ON(&mas, mas.last != 5);
> 3447 }
> 3448 MAS_BUG_ON(&mas, test != NULL);
> 3449 MAS_BUG_ON(&mas, mas.index != 6);
> 3450 MAS_BUG_ON(&mas, mas.last != 9);
> 3451
> 3452 mas_set(&mas, 6);
> 3453 mas_contiguous(&mas, test, ULONG_MAX) {
> 3454 MAS_BUG_ON(&mas, test != xa_mk_value(1));
> 3455 MAS_BUG_ON(&mas, mas.index != 10);
> 3456 MAS_BUG_ON(&mas, mas.last != 15);
> 3457 }
> 3458 MAS_BUG_ON(&mas, test != NULL);
> 3459 MAS_BUG_ON(&mas, mas.index != 16);
> 3460 MAS_BUG_ON(&mas, mas.last != 19);
> 3461
> 3462 i = 210;
> 3463 mas_set(&mas, i);
> 3464 mas_contiguous(&mas, test, 340) {
> 3465 MAS_BUG_ON(&mas, test != xa_mk_value(i/10));
> 3466 MAS_BUG_ON(&mas, mas.index != i);
> 3467 MAS_BUG_ON(&mas, mas.last != i+9);
> 3468 i+=10;
> 3469 offset = mas.offset;
> 3470 }
> 3471 /* Hit the limit, iterator is at the limit. */
> 3472 MAS_BUG_ON(&mas, offset != mas.offset);
> 3473 MAS_BUG_ON(&mas, test != NULL);
> 3474 MAS_BUG_ON(&mas, mas.index != 340);
> 3475 MAS_BUG_ON(&mas, mas.last != 349);
> 3476 test = mas_find_range(&mas, ULONG_MAX);
> 3477 MAS_BUG_ON(&mas, test != xa_mk_value(35));
> 3478 MAS_BUG_ON(&mas, mas.index != 350);
> 3479 MAS_BUG_ON(&mas, mas.last != 355);
> 3480
> 3481
> > 3482 test = mas_find_range_rev(&mas, 0);
> 3483 MAS_BUG_ON(&mas, test != xa_mk_value(34));
> 3484 MAS_BUG_ON(&mas, mas.index != 340);
> 3485 MAS_BUG_ON(&mas, mas.last != 349);
> 3486 mas_set(&mas, 345);
> 3487 test = mas_find_range_rev(&mas, 0);
> 3488 MAS_BUG_ON(&mas, test != xa_mk_value(34));
> 3489 MAS_BUG_ON(&mas, mas.index != 340);
> 3490 MAS_BUG_ON(&mas, mas.last != 349);
> 3491
> 3492 offset = mas.offset;
> 3493 test = mas_find_range_rev(&mas, 340);
> 3494 MAS_BUG_ON(&mas, offset != mas.offset);
> 3495 MAS_BUG_ON(&mas, test != NULL);
> 3496 MAS_BUG_ON(&mas, mas.index != 340);
> 3497 MAS_BUG_ON(&mas, mas.last != 349);
> 3498
> 3499 mas_unlock(&mas);
> 3500 mt_set_non_kernel(0);
> 3501 }
> 3502
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests