2024-06-07 18:53:42

by Sidhartha Kumar

Subject: [PATCH v2 00/16] Introduce a store type enum for the Maple tree

v1[1] -> v2:
- previous results were noisy, replaced with the stress-ng-mmap config of
mmtests

- document function parameter 'entry' in mas_prealloc_calc() per Intel
Test Robot

- explain why mas_reset() was dropped in patch 4

- add locking and mas_destroy() to the testcase in patch 4

- squash "set write store type" and "set store type" into the
mas_wr_store_entry() modification

- only set ret to xa_err() in the error case in mas_store_gfp()

- move MT_BUG_ON() inside the lock in patch 7

- noinline mas_wr_spanning_store() to reduce stack usage in
mas_wr_store_type() in patch 11 per Intel Test Robot

- document function parameter 'gfp' in mas_insert() per Intel
Test Robot

- further simplify the local variables in patch 11



================================ OVERVIEW ================================

This series implements two work items[2]: "aligning mas_store_gfp() with
mas_preallocate()" and "enum for store type".

mas_store_gfp() is modified to preallocate nodes. This simplifies many of
the write helper functions by allowing them to use mas_store_gfp() rather
than open coding node allocation and error handling.
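
For illustration, here is a condensed before/after of that pattern,
based on the mas_erase() conversion in patch 6 (the wrapper names here
are hypothetical and the erase-specific details are trimmed):

/* Before: each helper open-coded allocation, retry, and cleanup. */
static void erase_open_coded(struct ma_state *mas)
{
	MA_WR_STATE(wr_mas, mas, NULL);

write_retry:
	mas_wr_store_setup(&wr_mas);
	mas_wr_store_entry(&wr_mas);
	if (mas_nomem(mas, GFP_KERNEL))	/* allocates nodes, asks to retry */
		goto write_retry;
	mas_destroy(mas);
}

/* After: one call covers allocation, retry, and error handling. */
static void erase_simplified(struct ma_state *mas)
{
	mas_store_gfp(mas, NULL, GFP_KERNEL);
}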

The enum defines the following store types:

enum store_type {
	wr_invalid,		/* store type not yet determined */
	wr_new_root,		/* write spans the whole range, creating a new root */
	wr_store_root,		/* write to an empty tree or a single-pointer root */
	wr_exact_fit,		/* entry exactly replaces an existing range */
	wr_spanning_store,	/* write spans more than one node */
	wr_split_store,		/* node overflows and must be split */
	wr_rebalance,		/* node underflows and must be rebalanced */
	wr_append,		/* entry is appended at the end of a node */
	wr_node_store,		/* node contents rewritten (new node under RCU) */
	wr_slot_store,		/* write is confined to existing slots */
	wr_bnode		/* fall back to the big node path */
};

In the current maple tree code, a walk down the tree is done in
mas_preallocate() to determine the number of nodes needed for this write.
After node allocation, mas_wr_store_entry() will perform another walk to
determine which write helper function to use to complete the write.

Rather than performing the second walk, we can store the type of write
in the maple write state during node allocation and read this field to
complete the write.
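
To illustrate, a condensed sketch of the new flow, combining
mas_wr_preallocate() from patch 4 with the store-type dispatch in
mas_wr_store_entry() from patch 9 (error handling trimmed):

static void store_using_type(struct ma_wr_state *wr_mas, void *entry,
		gfp_t gfp)
{
	struct ma_state *mas = wr_mas->mas;
	int request;

	mas_wr_prealloc_setup(wr_mas);
	mas_wr_store_type(wr_mas);	/* single walk records mas->store_type */
	request = mas_prealloc_calc(mas, entry); /* nodes for that type */
	if (request)
		mas_node_count_gfp(mas, request, gfp);

	if (!mas_is_err(mas))
		mas_wr_store_entry(wr_mas); /* switches on mas->store_type */
}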

================================ RESULTS =================================

                     v6.10_mmap    v6.10_mmap_store_type
Duration User              9.80                     8.69
Duration System         2295.94                  2243.85
Duration Elapsed        1010.50                  1009.44

================================ TESTING =================================

Testing was done with the maple tree test suite. A new test case is also
added to validate the order in which we test for and assign the store type.


[1]: https://lore.kernel.org/linux-mm/[email protected]/
[2]: https://lists.infradead.org/pipermail/maple-tree/2023-December/003098.html


Sidhartha Kumar (16):
maple_tree: introduce store_type enum
maple_tree: introduce mas_wr_prealloc_setup()
maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup()
maple_tree: introduce mas_wr_store_type()
maple_tree: remove mas_destroy() from mas_nomem()
maple_tree: use mas_store_gfp() in mas_erase()
maple_tree: use mas_store_gfp() in mtree_store_range()
maple_tree: print store type in mas_dump()
maple_tree: use store type in mas_wr_store_entry()
maple_tree: convert mas_insert() to preallocate nodes
maple_tree: simplify mas_commit_b_node()
maple_tree: remove mas_wr_modify()
maple_tree: have mas_store() allocate nodes if needed
maple_tree: remove node allocations from various write helper
functions
maple_tree: remove repeated sanity checks from mas_wr_append()
maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()

 include/linux/maple_tree.h       |  15 +
 lib/maple_tree.c                 | 579 ++++++++++++++++++-------------
 tools/testing/radix-tree/maple.c |  48 ++-
 3 files changed, 401 insertions(+), 241 deletions(-)

--
2.45.2



2024-06-07 18:54:14

by Sidhartha Kumar

Subject: [PATCH v2 06/16] maple_tree: use mas_store_gfp() in mas_erase()

Refactor mas_erase() to simply call mas_store_gfp(), which abstracts
storing the NULL entry, memory allocation, and error handling.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index b933df290887..bf309b7515e7 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6355,7 +6355,6 @@ EXPORT_SYMBOL_GPL(mas_find_range_rev);
void *mas_erase(struct ma_state *mas)
{
void *entry;
- MA_WR_STATE(wr_mas, mas, NULL);

if (!mas_is_active(mas) || !mas_is_start(mas))
mas->status = ma_start;
@@ -6365,15 +6364,9 @@ void *mas_erase(struct ma_state *mas)
if (!entry)
return NULL;

-write_retry:
/* Must reset to ensure spanning writes of last slot are detected */
mas_reset(mas);
- mas_wr_store_setup(&wr_mas);
- mas_wr_store_entry(&wr_mas);
- if (mas_nomem(mas, GFP_KERNEL))
- goto write_retry;
-
- mas_destroy(mas);
+ mas_store_gfp(mas, NULL, GFP_KERNEL);
return entry;
}
EXPORT_SYMBOL_GPL(mas_erase);
--
2.45.2


2024-06-07 18:54:18

by Sidhartha Kumar

Subject: [PATCH v2 04/16] maple_tree: introduce mas_wr_store_type()

Introduce mas_wr_store_type() which will set the correct store type
based on a walk of the tree.

mas_prealloc_calc() is also introduced to abstract the calculation used
to determine the number of nodes needed for a store operation.

In this change, a call to mas_reset() is removed in the error case of
mas_preallocate(). This is only needed in the MA_STATE_REBALANCE case of
mas_destroy(). We can move the call to mas_reset() directly to
mas_destroy().

Also, add a test case to validate that the order in which we check for
and assign the store type is correct. This test models a vma expanding
and then shrinking, which is part of the boot process.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 214 ++++++++++++++++++++++---------
tools/testing/radix-tree/maple.c | 38 ++++++
2 files changed, 192 insertions(+), 60 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 2558d15bb748..a7f585ed488c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4278,6 +4278,151 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
wr_mas->content = mas_start(mas);
}

+/**
+ * mas_prealloc_calc() - Calculate number of nodes needed for a
+ * given store operation
+ * @mas: The maple state
+ * @entry: The entry to store into the tree
+ *
+ * Return: Number of nodes required for preallocation.
+ */
+static inline int mas_prealloc_calc(struct ma_state *mas, void *entry)
+{
+ int ret = mas_mt_height(mas) * 3 + 1;
+
+ switch (mas->store_type) {
+ case wr_invalid:
+ WARN_ON_ONCE(1);
+ break;
+ case wr_new_root:
+ ret = 1;
+ break;
+ case wr_store_root:
+ if (likely((mas->last != 0) || (mas->index != 0)))
+ ret = 1;
+ else if (((unsigned long) (entry) & 3) == 2)
+ ret = 1;
+ else
+ ret = 0;
+ break;
+ case wr_spanning_store:
+ ret = mas_mt_height(mas) * 3 + 1;
+ break;
+ case wr_split_store:
+ ret = mas_mt_height(mas) * 2 + 1;
+ break;
+ case wr_rebalance:
+ ret = mas_mt_height(mas) * 2 - 1;
+ break;
+ case wr_node_store:
+ case wr_bnode:
+ ret = mt_in_rcu(mas->tree) ? 1 : 0;
+ break;
+ case wr_append:
+ case wr_exact_fit:
+ case wr_slot_store:
+ ret = 0;
+ }
+
+ return ret;
+}
+
+/*
+ * mas_wr_store_type() - Set the store type for a given
+ * store operation.
+ * @wr_mas: The maple write state
+ */
+static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
+{
+ struct ma_state *mas = wr_mas->mas;
+ unsigned char new_end;
+
+ if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
+ mas->store_type = wr_store_root;
+ return;
+ }
+
+ if (unlikely(!mas_wr_walk(wr_mas))) {
+ mas->store_type = wr_spanning_store;
+ return;
+ }
+
+ /* At this point, we are at the leaf node that needs to be altered. */
+ mas_wr_end_piv(wr_mas);
+ if (!wr_mas->entry)
+ mas_wr_extend_null(wr_mas);
+
+ new_end = mas_wr_new_end(wr_mas);
+ if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == mas->last)) {
+ mas->store_type = wr_exact_fit;
+ return;
+ }
+
+ if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
+ mas->store_type = wr_new_root;
+ return;
+ }
+
+ /* Potential spanning rebalance collapsing a node */
+ if (new_end < mt_min_slots[wr_mas->type]) {
+ if (!mte_is_root(mas->node)) {
+ mas->store_type = wr_rebalance;
+ return;
+ }
+ mas->store_type = wr_node_store;
+ return;
+ }
+
+ if (new_end >= mt_slots[wr_mas->type]) {
+ mas->store_type = wr_split_store;
+ return;
+ }
+
+ if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
+ mas->store_type = wr_append;
+ return;
+ }
+
+ if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
+ (wr_mas->offset_end - mas->offset == 1))) {
+ mas->store_type = wr_slot_store;
+ return;
+ }
+
+ if (mte_is_root(mas->node) || !(new_end <= mt_min_slots[wr_mas->type]) ||
+ (mas->mas_flags & MA_STATE_BULK)) {
+ mas->store_type = wr_node_store;
+ return;
+ }
+
+ mas->store_type = wr_bnode;
+}
+
+/**
+ * mas_wr_preallocate() - Preallocate enough nodes for a store operation
+ * @wr_mas: The maple write state
+ * @entry: The entry that will be stored
+ * @gfp: The GFP_FLAGS to use for allocations.
+ *
+ */
+static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry, gfp_t gfp)
+{
+ struct ma_state *mas = wr_mas->mas;
+ int request;
+
+ mas_wr_prealloc_setup(wr_mas);
+ mas_wr_store_type(wr_mas);
+ request = mas_prealloc_calc(mas, entry);
+ if (!request)
+ return;
+
+ mas_node_count_gfp(mas, request, gfp);
+ if (likely(!mas_is_err(mas)))
+ return;
+
+ mas_set_alloc_req(mas, 0);
+}
+
/**
* mas_insert() - Internal call to insert a value
* @mas: The maple state
@@ -5506,69 +5651,17 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc);
int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
{
MA_WR_STATE(wr_mas, mas, entry);
- unsigned char node_size;
- int request = 1;
- int ret;
-
-
- if (unlikely(!mas->index && mas->last == ULONG_MAX))
- goto ask_now;
-
- mas_wr_prealloc_setup(&wr_mas);
- /* Root expand */
- if (unlikely(mas_is_none(mas) || mas_is_ptr(mas)))
- goto ask_now;
-
- if (unlikely(!mas_wr_walk(&wr_mas))) {
- /* Spanning store, use worst case for now */
- request = 1 + mas_mt_height(mas) * 3;
- goto ask_now;
- }
-
- /* At this point, we are at the leaf node that needs to be altered. */
- /* Exact fit, no nodes needed. */
- if (wr_mas.r_min == mas->index && wr_mas.r_max == mas->last)
- return 0;
-
- mas_wr_end_piv(&wr_mas);
- node_size = mas_wr_new_end(&wr_mas);
-
- /* Slot store, does not require additional nodes */
- if (node_size == mas->end) {
- /* reuse node */
- if (!mt_in_rcu(mas->tree))
- return 0;
- /* shifting boundary */
- if (wr_mas.offset_end - mas->offset == 1)
- return 0;
- }
+ int ret = 0;

- if (node_size >= mt_slots[wr_mas.type]) {
- /* Split, worst case for now. */
- request = 1 + mas_mt_height(mas) * 2;
- goto ask_now;
+ mas_wr_preallocate(&wr_mas, entry, gfp);
+ if (mas_is_err(mas)) {
+ ret = xa_err(mas->node);
+ mas_destroy(mas);
+ mas_reset(mas);
+ return ret;
}

- /* New root needs a single node */
- if (unlikely(mte_is_root(mas->node)))
- goto ask_now;
-
- /* Potential spanning rebalance collapsing a node, use worst-case */
- if (node_size - 1 <= mt_min_slots[wr_mas.type])
- request = mas_mt_height(mas) * 2 - 1;
-
- /* node store, slot store needs one node */
-ask_now:
- mas_node_count_gfp(mas, request, gfp);
mas->mas_flags |= MA_STATE_PREALLOC;
- if (likely(!mas_is_err(mas)))
- return 0;
-
- mas_set_alloc_req(mas, 0);
- ret = xa_err(mas->node);
- mas_reset(mas);
- mas_destroy(mas);
- mas_reset(mas);
return ret;
}
EXPORT_SYMBOL_GPL(mas_preallocate);
@@ -5594,7 +5687,8 @@ void mas_destroy(struct ma_state *mas)
*/
if (mas->mas_flags & MA_STATE_REBALANCE) {
unsigned char end;
-
+ if (mas_is_err(mas))
+ mas_reset(mas);
mas_start(mas);
mtree_range_walk(mas);
end = mas->end + 1;
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index f1caf4bcf937..1c68ccc1b475 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -36223,6 +36223,40 @@ static noinline void __init check_mtree_dup(struct maple_tree *mt)

extern void test_kmem_cache_bulk(void);

+
+ /* test to simulate expanding a vma from [0x7fffffffe000, 0x7ffffffff000)
+ * to [0x7ffde4ca1000, 0x7ffffffff000) and then shrinking the vma to
+ * [0x7ffde4ca1000, 0x7ffde4ca2000)
+ */
+static inline int check_vma_modification(struct maple_tree *mt)
+{
+ MA_STATE(mas, mt, 0, 0);
+
+ mtree_lock(mt);
+ /* vma with old start and old end */
+ __mas_set_range(&mas, 0x7fffffffe000, 0x7ffffffff000 - 1);
+ mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
+ mas_store_prealloc(&mas, xa_mk_value(1));
+
+ /* next write occurs partly in previous range [0, 0x7fffffffe000)*/
+ mas_prev_range(&mas, 0);
+ /* expand vma to {0x7ffde4ca1000, 0x7ffffffff000) */
+ __mas_set_range(&mas, 0x7ffde4ca1000, 0x7ffffffff000 - 1);
+ mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
+ mas_store_prealloc(&mas, xa_mk_value(1));
+
+ /* shrink vma to [0x7ffde4ca1000, 7ffde4ca2000) */
+ __mas_set_range(&mas, 0x7ffde4ca2000, 0x7ffffffff000 - 1);
+ mas_preallocate(&mas, NULL, GFP_KERNEL);
+ mas_store_prealloc(&mas, NULL);
+ mt_dump(mt, mt_dump_hex);
+
+ mas_destroy(&mas);
+ mtree_unlock(mt);
+ return 0;
+}
+
+
void farmer_tests(void)
{
struct maple_node *node;
@@ -36230,6 +36264,10 @@ void farmer_tests(void)

mt_dump(&tree, mt_dump_dec);

+ mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | MT_FLAGS_USE_RCU);
+ check_vma_modification(&tree);
+ mtree_destroy(&tree);
+
tree.ma_root = xa_mk_value(0);
mt_dump(&tree, mt_dump_dec);

--
2.45.2


2024-06-07 18:54:28

by Sidhartha Kumar

Subject: [PATCH v2 05/16] maple_tree: remove mas_destroy() from mas_nomem()

Separate the call to mas_destroy() from mas_nomem() so we can check for
memory allocation errors without destroying the current maple state in
mas_store_gfp(). Calls to mas_destroy() are then added to the callers of
mas_nomem().

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 39 +++++++++++++++++++++-----------
tools/testing/radix-tree/maple.c | 10 ++++----
2 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a7f585ed488c..b933df290887 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4527,6 +4527,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
if (*next == 0)
mas->tree->ma_flags |= MT_FLAGS_ALLOC_WRAPPED;

+ mas_destroy(mas);
return ret;
}
EXPORT_SYMBOL(mas_alloc_cyclic);
@@ -5607,18 +5608,24 @@ EXPORT_SYMBOL_GPL(mas_store);
int mas_store_gfp(struct ma_state *mas, void *entry, gfp_t gfp)
{
MA_WR_STATE(wr_mas, mas, entry);
+ int ret = 0;

- mas_wr_store_setup(&wr_mas);
- trace_ma_write(__func__, mas, 0, entry);
retry:
- mas_wr_store_entry(&wr_mas);
+ mas_wr_preallocate(&wr_mas, entry, gfp);
+ WARN_ON_ONCE(mas->store_type == wr_invalid);
+
if (unlikely(mas_nomem(mas, gfp)))
goto retry;

- if (unlikely(mas_is_err(mas)))
- return xa_err(mas->node);
+ if (mas_is_err(mas)) {
+ ret = xa_err(mas->node);
+ goto out;
+ }

- return 0;
+ mas_wr_store_entry(&wr_mas);
+out:
+ mas_destroy(mas);
+ return ret;
}
EXPORT_SYMBOL_GPL(mas_store_gfp);

@@ -6366,6 +6373,7 @@ void *mas_erase(struct ma_state *mas)
if (mas_nomem(mas, GFP_KERNEL))
goto write_retry;

+ mas_destroy(mas);
return entry;
}
EXPORT_SYMBOL_GPL(mas_erase);
@@ -6380,10 +6388,8 @@ EXPORT_SYMBOL_GPL(mas_erase);
bool mas_nomem(struct ma_state *mas, gfp_t gfp)
__must_hold(mas->tree->ma_lock)
{
- if (likely(mas->node != MA_ERROR(-ENOMEM))) {
- mas_destroy(mas);
+ if (likely(mas->node != MA_ERROR(-ENOMEM)))
return false;
- }

if (gfpflags_allow_blocking(gfp) && !mt_external_lock(mas->tree)) {
mtree_unlock(mas->tree);
@@ -6461,6 +6467,7 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
{
MA_STATE(mas, mt, index, last);
MA_WR_STATE(wr_mas, &mas, entry);
+ int ret = 0;

trace_ma_write(__func__, &mas, 0, entry);
if (WARN_ON_ONCE(xa_is_advanced(entry)))
@@ -6476,10 +6483,12 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
goto retry;

mtree_unlock(mt);
+
if (mas_is_err(&mas))
- return xa_err(mas.node);
+ ret = xa_err(mas.node);

- return 0;
+ mas_destroy(&mas);
+ return ret;
}
EXPORT_SYMBOL(mtree_store_range);

@@ -6515,6 +6524,7 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,
unsigned long last, void *entry, gfp_t gfp)
{
MA_STATE(ms, mt, first, last);
+ int ret = 0;

if (WARN_ON_ONCE(xa_is_advanced(entry)))
return -EINVAL;
@@ -6530,9 +6540,10 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,

mtree_unlock(mt);
if (mas_is_err(&ms))
- return xa_err(ms.node);
+ ret = xa_err(ms.node);

- return 0;
+ mas_destroy(&ms);
+ return ret;
}
EXPORT_SYMBOL(mtree_insert_range);

@@ -6587,6 +6598,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,

unlock:
mtree_unlock(mt);
+ mas_destroy(&mas);
return ret;
}
EXPORT_SYMBOL(mtree_alloc_range);
@@ -6668,6 +6680,7 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,

unlock:
mtree_unlock(mt);
+ mas_destroy(&mas);
return ret;
}
EXPORT_SYMBOL(mtree_alloc_rrange);
diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
index 1c68ccc1b475..e897f60d1a1d 100644
--- a/tools/testing/radix-tree/maple.c
+++ b/tools/testing/radix-tree/maple.c
@@ -119,7 +119,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
MT_BUG_ON(mt, mas.alloc->slot[0] == NULL);
mas_push_node(&mas, mn);
mas_reset(&mas);
- mas_nomem(&mas, GFP_KERNEL); /* free */
+ mas_destroy(&mas);
mtree_unlock(mt);


@@ -143,7 +143,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
mn->parent = ma_parent_ptr(mn);
ma_free_rcu(mn);
mas.status = ma_start;
- mas_nomem(&mas, GFP_KERNEL);
+ mas_destroy(&mas);
/* Allocate 3 nodes, will fail. */
mas_node_count(&mas, 3);
/* Drop the lock and allocate 3 nodes. */
@@ -160,7 +160,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
MT_BUG_ON(mt, mas_allocated(&mas) != 3);
/* Free. */
mas_reset(&mas);
- mas_nomem(&mas, GFP_KERNEL);
+ mas_destroy(&mas);

/* Set allocation request to 1. */
mas_set_alloc_req(&mas, 1);
@@ -276,6 +276,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
}
mas_reset(&mas);
MT_BUG_ON(mt, mas_nomem(&mas, GFP_KERNEL));
+ mas_destroy(&mas);

}

@@ -298,7 +299,7 @@ static noinline void __init check_new_node(struct maple_tree *mt)
}
MT_BUG_ON(mt, mas_allocated(&mas) != total);
mas_reset(&mas);
- mas_nomem(&mas, GFP_KERNEL); /* Free. */
+ mas_destroy(&mas); /* Free. */

MT_BUG_ON(mt, mas_allocated(&mas) != 0);
for (i = 1; i < 128; i++) {
@@ -35846,6 +35847,7 @@ static noinline void __init check_nomem(struct maple_tree *mt)
mas_store(&ms, &ms); /* insert 1 -> &ms */
mas_nomem(&ms, GFP_KERNEL); /* Node allocated in here. */
mtree_unlock(mt);
+ mas_destroy(&ms);
mtree_destroy(mt);
}

--
2.45.2


2024-06-07 18:54:26

by Sidhartha Kumar

Subject: [PATCH v2 12/16] maple_tree: remove mas_wr_modify()

There are no more users of the function; safely remove it.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 38 --------------------------------------
1 file changed, 38 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 743de734ba0c..8b17768db5f2 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4155,44 +4155,6 @@ static void mas_wr_bnode(struct ma_wr_state *wr_mas)
mas_commit_b_node(wr_mas, &b_node, wr_mas->mas->end);
}

-static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
-{
- struct ma_state *mas = wr_mas->mas;
- unsigned char new_end;
-
- /* Direct replacement */
- if (wr_mas->r_min == mas->index && wr_mas->r_max == mas->last) {
- rcu_assign_pointer(wr_mas->slots[mas->offset], wr_mas->entry);
- if (!!wr_mas->entry ^ !!wr_mas->content)
- mas_update_gap(mas);
- return;
- }
-
- /*
- * new_end exceeds the size of the maple node and cannot enter the fast
- * path.
- */
- new_end = mas_wr_new_end(wr_mas);
- if (new_end >= mt_slots[wr_mas->type])
- goto slow_path;
-
- /* Attempt to append */
- if (mas_wr_append(wr_mas, new_end))
- return;
-
- if (new_end == mas->end && mas_wr_slot_store(wr_mas))
- return;
-
- if (mas_wr_node_store(wr_mas, new_end))
- return;
-
- if (mas_is_err(mas))
- return;
-
-slow_path:
- mas_wr_bnode(wr_mas);
-}
-
/*
* mas_wr_store_entry() - Internal call to store a value
* @mas: The maple state
--
2.45.2


2024-06-07 18:54:58

by Sidhartha Kumar

Subject: [PATCH v2 07/16] maple_tree: use mas_store_gfp() in mtree_store_range()

Refactor mtree_store_range() to use mas_store_gfp(), which abstracts
the store, memory allocation, and error handling.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index bf309b7515e7..a955316b6745 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -6459,7 +6459,6 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
unsigned long last, void *entry, gfp_t gfp)
{
MA_STATE(mas, mt, index, last);
- MA_WR_STATE(wr_mas, &mas, entry);
int ret = 0;

trace_ma_write(__func__, &mas, 0, entry);
@@ -6470,17 +6469,10 @@ int mtree_store_range(struct maple_tree *mt, unsigned long index,
return -EINVAL;

mtree_lock(mt);
-retry:
- mas_wr_store_entry(&wr_mas);
- if (mas_nomem(&mas, gfp))
- goto retry;
-
+ ret = mas_store_gfp(&mas, entry, gfp);
+ MT_BUG_ON(mas.tree, mas.store_type == wr_invalid);
mtree_unlock(mt);

- if (mas_is_err(&mas))
- ret = xa_err(mas.node);
-
- mas_destroy(&mas);
return ret;
}
EXPORT_SYMBOL(mtree_store_range);
--
2.45.2


2024-06-07 18:55:02

by Sidhartha Kumar

Subject: [PATCH v2 08/16] maple_tree: print store type in mas_dump()

Knowing the store type of the maple state could be helpful for debugging.
Have mas_dump() print mas->store_type.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index a955316b6745..91b18eef81e1 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -7731,6 +7731,45 @@ void mas_dump(const struct ma_state *mas)
break;
}

+ pr_err("Store Type: ");
+ switch (mas->store_type) {
+ case wr_invalid:
+ pr_err("invalid store type\n");
+ break;
+ case wr_new_root:
+ pr_err("new_root\n");
+ break;
+ case wr_store_root:
+ pr_err("store_root\n");
+ break;
+ case wr_exact_fit:
+ pr_err("exact_fit\n");
+ break;
+ case wr_split_store:
+ pr_err("split_store\n");
+ break;
+ case wr_slot_store:
+ pr_err("slot_store\n");
+ break;
+ case wr_append:
+ pr_err("append\n");
+ break;
+ case wr_node_store:
+ pr_err("node_store\n");
+ break;
+ case wr_spanning_store:
+ pr_err("spanning_store\n");
+ break;
+ case wr_rebalance:
+ pr_err("rebalance\n");
+ break;
+ case wr_bnode:
+ pr_err("write_bnode\n");
+ break;
+
+ }
+
+
pr_err("[%u/%u] index=%lx last=%lx\n", mas->offset, mas->end,
mas->index, mas->last);
pr_err(" min=%lx max=%lx alloc=%p, depth=%u, flags=%x\n",
--
2.45.2


2024-06-07 18:55:22

by Sidhartha Kumar

Subject: [PATCH v2 11/16] maple_tree: simplify mas_commit_b_node()

Use mas->store_type to simplify the logic of identifying the type of
write. We can also use mas_new_ma_node() instead of mt_mk_node() to
remove b_type and clean up the local variables.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 21 +++++++++------------
1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index c37bfac4f622..743de734ba0c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3431,18 +3431,14 @@ static inline bool mas_reuse_node(struct ma_wr_state *wr_mas,
static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
struct maple_big_node *b_node, unsigned char end)
{
- struct maple_node *node;
- struct maple_enode *old_enode;
- unsigned char b_end = b_node->b_end;
- enum maple_type b_type = b_node->type;
+ unsigned char b_end = 0;
+ struct maple_enode *new_enode;
+ struct maple_enode *old_enode = wr_mas->mas->node;

- old_enode = wr_mas->mas->node;
- if ((b_end < mt_min_slots[b_type]) &&
- (!mte_is_root(old_enode)) &&
- (mas_mt_height(wr_mas->mas) > 1))
+ if (wr_mas->mas->store_type == wr_rebalance)
return mas_rebalance(wr_mas->mas, b_node);

- if (b_end >= mt_slots[b_type])
+ if (wr_mas->mas->store_type == wr_split_store)
return mas_split(wr_mas->mas, b_node);

if (mas_reuse_node(wr_mas, b_node, end))
@@ -3452,9 +3448,10 @@ static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
if (mas_is_err(wr_mas->mas))
return 0;

- node = mas_pop_node(wr_mas->mas);
- node->parent = mas_mn(wr_mas->mas)->parent;
- wr_mas->mas->node = mt_mk_node(node, b_type);
+ b_end = b_node->b_end;
+ new_enode = mas_new_ma_node(wr_mas->mas, b_node);
+ mte_to_node(new_enode)->parent = mte_to_node(old_enode)->parent;
+ wr_mas->mas->node = new_enode;
mab_mas_cp(b_node, 0, b_end, wr_mas->mas, false);
mas_replace_node(wr_mas->mas, old_enode);
reuse_node:
--
2.45.2


2024-06-07 18:55:33

by Sidhartha Kumar

Subject: [PATCH v2 09/16] maple_tree: use store type in mas_wr_store_entry()

When storing an entry, we can read the store type that was set from a
previous partial walk of the tree. Now that the type of store is known,
select the correct write helper function to use to complete the store.

Also noinline mas_wr_spanning_store() to limit stack frame usage in
mas_wr_store_entry() as it allocates a maple_big_node on the stack.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 55 +++++++++++++++++++++++++++++++-----------------
1 file changed, 36 insertions(+), 19 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 91b18eef81e1..2c42e99c400c 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3780,7 +3780,7 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
*
* Return: 0 on error, positive on success.
*/
-static inline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
+static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
{
struct maple_subtree_state mast;
struct maple_big_node b_node;
@@ -4206,27 +4206,42 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
static inline void *mas_wr_store_entry(struct ma_wr_state *wr_mas)
{
struct ma_state *mas = wr_mas->mas;
+ unsigned char new_end = mas_wr_new_end(wr_mas);

- wr_mas->content = mas_start(mas);
- if (mas_is_none(mas) || mas_is_ptr(mas)) {
+ switch (mas->store_type) {
+ case wr_invalid:
+ MT_BUG_ON(mas->tree, 1);
+ return NULL;
+ case wr_new_root:
+ mas_new_root(mas, wr_mas->entry);
+ break;
+ case wr_store_root:
mas_store_root(mas, wr_mas->entry);
- return wr_mas->content;
- }
-
- if (unlikely(!mas_wr_walk(wr_mas))) {
+ break;
+ case wr_exact_fit:
+ rcu_assign_pointer(wr_mas->slots[mas->offset], wr_mas->entry);
+ if (!!wr_mas->entry ^ !!wr_mas->content)
+ mas_update_gap(mas);
+ break;
+ case wr_append:
+ mas_wr_append(wr_mas, new_end);
+ break;
+ case wr_slot_store:
+ mas_wr_slot_store(wr_mas);
+ break;
+ case wr_node_store:
+ mas_wr_node_store(wr_mas, new_end);
+ break;
+ case wr_spanning_store:
mas_wr_spanning_store(wr_mas);
- return wr_mas->content;
- }
-
- /* At this point, we are at the leaf node that needs to be altered. */
- mas_wr_end_piv(wr_mas);
- /* New root for a single pointer */
- if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
- mas_new_root(mas, wr_mas->entry);
- return wr_mas->content;
+ break;
+ case wr_split_store:
+ case wr_rebalance:
+ case wr_bnode:
+ mas_wr_bnode(wr_mas);
+ break;
}

- mas_wr_modify(wr_mas);
return wr_mas->content;
}

@@ -5590,7 +5605,8 @@ void *mas_store(struct ma_state *mas, void *entry)
* want to examine what happens if a single store operation was to
* overwrite multiple entries within a self-balancing B-Tree.
*/
- mas_wr_store_setup(&wr_mas);
+ mas_wr_prealloc_setup(&wr_mas);
+ mas_wr_store_type(&wr_mas);
mas_wr_store_entry(&wr_mas);
return wr_mas.content;
}
@@ -5639,7 +5655,8 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
{
MA_WR_STATE(wr_mas, mas, entry);

- mas_wr_store_setup(&wr_mas);
+ mas_wr_prealloc_setup(&wr_mas);
+ mas_wr_store_type(&wr_mas);
trace_ma_write(__func__, mas, 0, entry);
mas_wr_store_entry(&wr_mas);
MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
--
2.45.2


2024-06-07 18:56:12

by Sidhartha Kumar

Subject: [PATCH v2 03/16] maple_tree: move up mas_wr_store_setup() and mas_wr_prealloc_setup()

Subsequent patches require these definitions to be higher in the file;
no functional changes intended.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 96 ++++++++++++++++++++++++------------------------
1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index ccbec0f66562..2558d15bb748 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4230,6 +4230,54 @@ static inline void *mas_wr_store_entry(struct ma_wr_state *wr_mas)
return wr_mas->content;
}

+static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
+{
+ if (!mas_is_active(wr_mas->mas)) {
+ if (mas_is_start(wr_mas->mas))
+ return;
+
+ if (unlikely(mas_is_paused(wr_mas->mas)))
+ goto reset;
+
+ if (unlikely(mas_is_none(wr_mas->mas)))
+ goto reset;
+
+ if (unlikely(mas_is_overflow(wr_mas->mas)))
+ goto reset;
+
+ if (unlikely(mas_is_underflow(wr_mas->mas)))
+ goto reset;
+ }
+
+ /*
+ * A less strict version of mas_is_span_wr() where we allow spanning
+ * writes within this node. This is to stop partial walks in
+ * mas_prealloc() from being reset.
+ */
+ if (wr_mas->mas->last > wr_mas->mas->max)
+ goto reset;
+
+ if (wr_mas->entry)
+ return;
+
+ if (mte_is_leaf(wr_mas->mas->node) &&
+ wr_mas->mas->last == wr_mas->mas->max)
+ goto reset;
+
+ return;
+
+reset:
+ mas_reset(wr_mas->mas);
+}
+
+static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
+{
+ struct ma_state *mas = wr_mas->mas;
+
+ mas_wr_store_setup(wr_mas);
+ wr_mas->content = mas_start(mas);
+}
+
/**
* mas_insert() - Internal call to insert a value
* @mas: The maple state
@@ -5361,54 +5409,6 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
mt_destroy_walk(enode, mt, true);
}
}
-
-static void mas_wr_store_setup(struct ma_wr_state *wr_mas)
-{
- if (!mas_is_active(wr_mas->mas)) {
- if (mas_is_start(wr_mas->mas))
- return;
-
- if (unlikely(mas_is_paused(wr_mas->mas)))
- goto reset;
-
- if (unlikely(mas_is_none(wr_mas->mas)))
- goto reset;
-
- if (unlikely(mas_is_overflow(wr_mas->mas)))
- goto reset;
-
- if (unlikely(mas_is_underflow(wr_mas->mas)))
- goto reset;
- }
-
- /*
- * A less strict version of mas_is_span_wr() where we allow spanning
- * writes within this node. This is to stop partial walks in
- * mas_prealloc() from being reset.
- */
- if (wr_mas->mas->last > wr_mas->mas->max)
- goto reset;
-
- if (wr_mas->entry)
- return;
-
- if (mte_is_leaf(wr_mas->mas->node) &&
- wr_mas->mas->last == wr_mas->mas->max)
- goto reset;
-
- return;
-
-reset:
- mas_reset(wr_mas->mas);
-}
-
-static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
-{
- struct ma_state *mas = wr_mas->mas;
-
- mas_wr_store_setup(wr_mas);
- wr_mas->content = mas_start(mas);
-}
/* Interface */

/**
--
2.45.2


2024-06-07 18:56:27

by Sidhartha Kumar

Subject: [PATCH v2 14/16] maple_tree: remove node allocations from various write helper functions

These write helper functions are all called from store paths which
preallocate the nodes that will be needed for the write. There is no
longer a need to allocate within the functions themselves.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 31 -------------------------------
1 file changed, 31 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 92f133ea5f00..7e1f64490662 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -2976,9 +2976,6 @@ static inline int mas_rebalance(struct ma_state *mas,
* tries to combine the data in the same way. If one node contains the
* entire range of the tree, then that node is used as a new root node.
*/
- mas_node_count(mas, empty_count * 2 - 1);
- if (mas_is_err(mas))
- return 0;

mast.orig_l = &l_mas;
mast.orig_r = &r_mas;
@@ -3029,11 +3026,6 @@ static inline void mas_destroy_rebalance(struct ma_state *mas, unsigned char end

/* set up node. */
if (in_rcu) {
- /* Allocate for both left and right as well as parent. */
- mas_node_count(mas, 3);
- if (mas_is_err(mas))
- return;
-
newnode = mas_pop_node(mas);
} else {
newnode = &reuse;
@@ -3341,10 +3333,6 @@ static int mas_split(struct ma_state *mas, struct maple_big_node *b_node)

trace_ma_op(__func__, mas);
mas->depth = mas_mt_height(mas);
- /* Allocation failures will happen early. */
- mas_node_count(mas, 1 + mas->depth * 2);
- if (mas_is_err(mas))
- return 0;

mast.l = &l_mas;
mast.r = &r_mas;
@@ -3444,10 +3432,6 @@ static noinline_for_kasan int mas_commit_b_node(struct ma_wr_state *wr_mas,
if (mas_reuse_node(wr_mas, b_node, end))
goto reuse_node;

- mas_node_count(wr_mas->mas, 1);
- if (mas_is_err(wr_mas->mas))
- return 0;
-
b_end = b_node->b_end;
new_enode = mas_new_ma_node(wr_mas->mas, b_node);
mte_to_node(new_enode)->parent = mte_to_node(old_enode)->parent;
@@ -3474,10 +3458,6 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
unsigned long *pivots;
int slot = 0;

- mas_node_count(mas, 1);
- if (unlikely(mas_is_err(mas)))
- return 0;
-
node = mas_pop_node(mas);
pivots = ma_pivots(node, type);
slots = ma_slots(node, type);
@@ -3746,10 +3726,6 @@ static inline int mas_new_root(struct ma_state *mas, void *entry)
goto done;
}

- mas_node_count(mas, 1);
- if (mas_is_err(mas))
- return 0;
-
node = mas_pop_node(mas);
pivots = ma_pivots(node, type);
slots = ma_slots(node, type);
@@ -3812,9 +3788,6 @@ static noinline int mas_wr_spanning_store(struct ma_wr_state *wr_mas)
* entries per level plus a new root.
*/
height = mas_mt_height(mas);
- mas_node_count(mas, 1 + height * 3);
- if (mas_is_err(mas))
- return 0;

/*
* Set up right side. Need to get to the next offset after the spanning
@@ -3898,10 +3871,6 @@ static inline bool mas_wr_node_store(struct ma_wr_state *wr_mas,

/* set up node. */
if (in_rcu) {
- mas_node_count(mas, 1);
- if (mas_is_err(mas))
- return false;
-
newnode = mas_pop_node(mas);
} else {
memset(&reuse, 0, sizeof(struct maple_node));
--
2.45.2


2024-06-07 18:56:31

by Sidhartha Kumar

Subject: [PATCH v2 15/16] maple_tree: remove repeated sanity checks from mas_wr_append()

These sanity checks are now redundant, as mas_wr_store_type() already
performs them. We can remove them from mas_wr_append().

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 7e1f64490662..3d334e7c19a5 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4061,17 +4061,9 @@ static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
static inline bool mas_wr_append(struct ma_wr_state *wr_mas,
unsigned char new_end)
{
- struct ma_state *mas;
+ struct ma_state *mas = wr_mas->mas;
void __rcu **slots;
- unsigned char end;
-
- mas = wr_mas->mas;
- if (mt_in_rcu(mas->tree))
- return false;
-
- end = mas->end;
- if (mas->offset != end)
- return false;
+ unsigned char end = mas->end;

if (new_end < mt_pivots[wr_mas->type]) {
wr_mas->pivots[new_end] = wr_mas->pivots[end];
--
2.45.2


2024-06-07 18:56:41

by Sidhartha Kumar

Subject: [PATCH v2 16/16] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()

Users of mas_store_prealloc() enter this function with nodes already
preallocated. This means the store type must be already set. We can then
remove the call to mas_wr_store_type() and initialize the write state to
continue the partial walk that was done when determining the store type.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 3d334e7c19a5..5854f25a4ae5 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4027,9 +4027,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
else
wr_mas->end_piv = wr_mas->mas->max;
-
- if (!wr_mas->entry)
- mas_wr_extend_null(wr_mas);
}

static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
@@ -5598,12 +5595,23 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
{
MA_WR_STATE(wr_mas, mas, entry);

- mas_wr_prealloc_setup(&wr_mas);
- mas_wr_store_type(&wr_mas);
+ if (mas->store_type == wr_store_root) {
+ mas_wr_prealloc_setup(&wr_mas);
+ goto store;
+ }
+
+ mas_wr_walk_descend(&wr_mas);
+ if (mas->store_type != wr_spanning_store) {
+ /* set wr_mas->content to current slot */
+ wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
+ mas_wr_end_piv(&wr_mas);
+ }
+store:
trace_ma_write(__func__, mas, 0, entry);
mas_wr_store_entry(&wr_mas);
MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
mas_destroy(mas);
+
}
EXPORT_SYMBOL_GPL(mas_store_prealloc);

--
2.45.2


2024-06-07 18:57:04

by Sidhartha Kumar

Subject: [PATCH v2 10/16] maple_tree: convert mas_insert() to preallocate nodes

By setting the store type in mas_insert(), we no longer need to use
mas_wr_modify() to determine the correct store function to use. Instead,
set the store type and call mas_wr_store_entry(). Also, pass in the
requested gfp flags to mas_insert() so they can be passed to the call to
mas_wr_preallocate().

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 33 ++++++++++++++++-----------------
1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 2c42e99c400c..c37bfac4f622 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -4442,11 +4442,12 @@ static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry, g
* mas_insert() - Internal call to insert a value
* @mas: The maple state
* @entry: The entry to store
+ * @gfp: The GFP_FLAGS to use for allocations
*
* Return: %NULL or the contents that already exists at the requested index
* otherwise. The maple state needs to be checked for error conditions.
*/
-static inline void *mas_insert(struct ma_state *mas, void *entry)
+static inline void *mas_insert(struct ma_state *mas, void *entry, gfp_t gfp)
{
MA_WR_STATE(wr_mas, mas, entry);

@@ -4468,26 +4469,24 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
if (wr_mas.content)
goto exists;

- if (mas_is_none(mas) || mas_is_ptr(mas)) {
- mas_store_root(mas, entry);
+ mas_wr_preallocate(&wr_mas, entry, gfp);
+ if (mas_is_err(mas))
return NULL;
- }

/* spanning writes always overwrite something */
- if (!mas_wr_walk(&wr_mas))
+ if (mas->store_type == wr_spanning_store)
goto exists;

/* At this point, we are at the leaf node that needs to be altered. */
- wr_mas.offset_end = mas->offset;
- wr_mas.end_piv = wr_mas.r_max;
-
- if (wr_mas.content || (mas->last > wr_mas.r_max))
- goto exists;
+ if (mas->store_type != wr_new_root && mas->store_type != wr_store_root) {
+ wr_mas.offset_end = mas->offset;
+ wr_mas.end_piv = wr_mas.r_max;

- if (!entry)
- return NULL;
+ if (wr_mas.content || (mas->last > wr_mas.r_max))
+ goto exists;
+ }

- mas_wr_modify(&wr_mas);
+ mas_wr_store_entry(&wr_mas);
return wr_mas.content;

exists:
@@ -4532,7 +4531,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
return ret;

do {
- mas_insert(mas, entry);
+ mas_insert(mas, entry, gfp);
} while (mas_nomem(mas, gfp));
if (mas_is_err(mas))
return xa_err(mas->node);
@@ -6536,7 +6535,7 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,

mtree_lock(mt);
retry:
- mas_insert(&ms, entry);
+ mas_insert(&ms, entry, gfp);
if (mas_nomem(&ms, gfp))
goto retry;

@@ -6585,7 +6584,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
if (ret)
goto unlock;

- mas_insert(&mas, entry);
+ mas_insert(&mas, entry, gfp);
/*
* mas_nomem() may release the lock, causing the allocated area
* to be unavailable, so try to allocate a free area again.
@@ -6667,7 +6666,7 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
if (ret)
goto unlock;

- mas_insert(&mas, entry);
+ mas_insert(&mas, entry, gfp);
/*
* mas_nomem() may release the lock, causing the allocated area
* to be unavailable, so try to allocate a free area again.
--
2.45.2


2024-06-07 18:59:25

by Sidhartha Kumar

Subject: [PATCH v2 13/16] maple_tree: have mas_store() allocate nodes if needed

Not all users of mas_store() enter with nodes already preallocated.
Check for the MA_STATE_PREALLOC flag to decide whether to preallocate nodes
within mas_store() rather than relying on future write helper functions
to perform the allocations. This allows the write helper functions to be
simplified as they do not have to do checks to make sure there are
enough allocated nodes to perform the write.

Signed-off-by: Sidhartha Kumar <[email protected]>
---
lib/maple_tree.c | 32 ++++++++++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 8b17768db5f2..92f133ea5f00 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5528,6 +5528,20 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
mt_destroy_walk(enode, mt, true);
}
}
+
+static inline void mas_wr_store_prealloc(struct ma_wr_state *wr_mas, void *entry)
+{
+ struct ma_state *mas = wr_mas->mas;
+ int request;
+
+ mas_wr_prealloc_setup(wr_mas);
+ mas_wr_store_type(wr_mas);
+ request = mas_prealloc_calc(mas, entry);
+ if (!request)
+ return;
+
+ mas_node_count(mas, request);
+}
/* Interface */

/**
@@ -5536,8 +5550,6 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
* @entry: The entry to store.
*
* The @mas->index and @mas->last is used to set the range for the @entry.
- * Note: The @mas should have pre-allocated entries to ensure there is memory to
- * store the entry. Please see mas_expected_entries()/mas_destroy() for more details.
*
* Return: the first entry between mas->index and mas->last or %NULL.
*/
@@ -5563,9 +5575,21 @@ void *mas_store(struct ma_state *mas, void *entry)
* want to examine what happens if a single store operation was to
* overwrite multiple entries within a self-balancing B-Tree.
*/
- mas_wr_prealloc_setup(&wr_mas);
- mas_wr_store_type(&wr_mas);
+ if (mas->mas_flags & MA_STATE_PREALLOC) {
+ mas_wr_prealloc_setup(&wr_mas);
+ mas_wr_store_type(&wr_mas);
+ mas_wr_store_entry(&wr_mas);
+ MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
+ return wr_mas.content;
+ }
+
+ mas_wr_store_prealloc(&wr_mas, entry);
+ WARN_ON_ONCE(mas->store_type == wr_invalid);
+ if (mas_is_err(mas))
+ return NULL;
+
mas_wr_store_entry(&wr_mas);
+ mas_destroy(mas);
return wr_mas.content;
}
EXPORT_SYMBOL_GPL(mas_store);
--
2.45.2


2024-06-13 14:28:34

by Liam R. Howlett

Subject: Re: [PATCH v2 04/16] maple_tree: introduce mas_wr_store_type()

* Sidhartha Kumar <[email protected]> [240607 14:53]:
> Introduce mas_wr_store_type() which will set the correct store type
> based on a walk of the tree.
>
> mas_prealloc_calc() is also introduced to abstract the calculation used
> to determine the number of nodes needed for a store operation.
>
> In this change, a call to mas_reset() is removed in the error case of
> mas_preallocate(). This is only needed in the MA_STATE_REBALANCE case of
> mas_destroy(). We can move the call to mas_reset() directly to
> mas_destroy().
>
> Also, add a test case to validate that the order in which we check for
> and assign the store type is correct. This test models a vma expanding
> and then shrinking, which is part of the boot process.
>
> Signed-off-by: Sidhartha Kumar <[email protected]>
> ---
> lib/maple_tree.c | 214 ++++++++++++++++++++++---------
> tools/testing/radix-tree/maple.c | 38 ++++++
> 2 files changed, 192 insertions(+), 60 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 2558d15bb748..a7f585ed488c 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4278,6 +4278,151 @@ static inline void mas_wr_prealloc_setup(struct ma_wr_state *wr_mas)
> wr_mas->content = mas_start(mas);
> }
>
> +/**
> + * mas_prealloc_calc() - Calculate number of nodes needed for a
> + * given store operation
> + * @mas: The maple state
> + * @entry: The entry to store into the tree
> + *
> + * Return: Number of nodes required for preallocation.
> + */
> +static inline int mas_prealloc_calc(struct ma_state *mas, void *entry)
> +{
> + int ret = mas_mt_height(mas) * 3 + 1;
> +
> + switch (mas->store_type) {
> + case wr_invalid:
> + WARN_ON_ONCE(1);
> + break;
> + case wr_new_root:
> + ret = 1;
> + break;
> + case wr_store_root:
> + if (likely((mas->last != 0) || (mas->index != 0)))
> + ret = 1;
> + else if (((unsigned long) (entry) & 3) == 2)
> + ret = 1;
> + else
> + ret = 0;
> + break;
> + case wr_spanning_store:
> + ret = mas_mt_height(mas) * 3 + 1;
> + break;
> + case wr_split_store:
> + ret = mas_mt_height(mas) * 2 + 1;
> + break;
> + case wr_rebalance:
> + ret = mas_mt_height(mas) * 2 - 1;
> + break;
> + case wr_node_store:
> + case wr_bnode:
> + ret = mt_in_rcu(mas->tree) ? 1 : 0;
> + break;
> + case wr_append:
> + case wr_exact_fit:
> + case wr_slot_store:
> + ret = 0;
> + }
> +
> + return ret;
> +}
> +
> +/*
> + * mas_wr_store_type() - Set the store type for a given
> + * store operation.
> + * @wr_mas: The maple write state
> + */
> +static inline void mas_wr_store_type(struct ma_wr_state *wr_mas)
> +{
> + struct ma_state *mas = wr_mas->mas;
> + unsigned char new_end;
> +
> + if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) {
> + mas->store_type = wr_store_root;
> + return;
> + }
> +
> + if (unlikely(!mas_wr_walk(wr_mas))) {
> + mas->store_type = wr_spanning_store;
> + return;
> + }
> +
> + /* At this point, we are at the leaf node that needs to be altered. */
> + mas_wr_end_piv(wr_mas);
> + if (!wr_mas->entry)
> + mas_wr_extend_null(wr_mas);
> +
> + new_end = mas_wr_new_end(wr_mas);
> + if ((wr_mas->r_min == mas->index) && (wr_mas->r_max == mas->last)) {
> + mas->store_type = wr_exact_fit;
> + return;
> + }
> +
> + if (unlikely(!mas->index && mas->last == ULONG_MAX)) {
> + mas->store_type = wr_new_root;
> + return;
> + }
> +
> + /* Potential spanning rebalance collapsing a node */
> + if (new_end < mt_min_slots[wr_mas->type]) {
> + if (!mte_is_root(mas->node)) {
> + mas->store_type = wr_rebalance;
> + return;
> + }
> + mas->store_type = wr_node_store;
> + return;
> + }
> +
> + if (new_end >= mt_slots[wr_mas->type]) {
> + mas->store_type = wr_split_store;
> + return;
> + }
> +
> + if (!mt_in_rcu(mas->tree) && (mas->offset == mas->end)) {
> + mas->store_type = wr_append;
> + return;
> + }
> +
> + if ((new_end == mas->end) && (!mt_in_rcu(mas->tree) ||
> + (wr_mas->offset_end - mas->offset == 1))) {
> + mas->store_type = wr_slot_store;
> + return;
> + }
> +
> + if (mte_is_root(mas->node) || !(new_end <= mt_min_slots[wr_mas->type]) ||
> + (mas->mas_flags & MA_STATE_BULK)) {
> + mas->store_type = wr_node_store;
> + return;
> + }
> +
> + mas->store_type = wr_bnode;
> +}
> +
> +/**
> + * mas_wr_preallocate() - Preallocate enough nodes for a store operation
> + * @wr_mas: The maple write state
> + * @entry: The entry that will be stored
> + * @gfp: The GFP_FLAGS to use for allocations.
> + *
> + */
> +static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry, gfp_t gfp)
> +{
> + struct ma_state *mas = wr_mas->mas;
> + int request;
> +
> + mas_wr_prealloc_setup(wr_mas);
> + mas_wr_store_type(wr_mas);
> + request = mas_prealloc_calc(mas, entry);
> + if (!request)
> + return;
> +
> + mas_node_count_gfp(mas, request, gfp);
> + if (likely(!mas_is_err(mas)))
> + return;
> +
> + mas_set_alloc_req(mas, 0);
> +}
> +
> /**
> * mas_insert() - Internal call to insert a value
> * @mas: The maple state
> @@ -5506,69 +5651,17 @@ EXPORT_SYMBOL_GPL(mas_store_prealloc);
> int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
> {
> MA_WR_STATE(wr_mas, mas, entry);
> - unsigned char node_size;
> - int request = 1;
> - int ret;
> -
> -
> - if (unlikely(!mas->index && mas->last == ULONG_MAX))
> - goto ask_now;
> -
> - mas_wr_prealloc_setup(&wr_mas);
> - /* Root expand */
> - if (unlikely(mas_is_none(mas) || mas_is_ptr(mas)))
> - goto ask_now;
> -
> - if (unlikely(!mas_wr_walk(&wr_mas))) {
> - /* Spanning store, use worst case for now */
> - request = 1 + mas_mt_height(mas) * 3;
> - goto ask_now;
> - }
> -
> - /* At this point, we are at the leaf node that needs to be altered. */
> - /* Exact fit, no nodes needed. */
> - if (wr_mas.r_min == mas->index && wr_mas.r_max == mas->last)
> - return 0;
> -
> - mas_wr_end_piv(&wr_mas);
> - node_size = mas_wr_new_end(&wr_mas);
> -
> - /* Slot store, does not require additional nodes */
> - if (node_size == mas->end) {
> - /* reuse node */
> - if (!mt_in_rcu(mas->tree))
> - return 0;
> - /* shifting boundary */
> - if (wr_mas.offset_end - mas->offset == 1)
> - return 0;
> - }
> + int ret = 0;
>
> - if (node_size >= mt_slots[wr_mas.type]) {
> - /* Split, worst case for now. */
> - request = 1 + mas_mt_height(mas) * 2;
> - goto ask_now;
> + mas_wr_preallocate(&wr_mas, entry, gfp);
> + if (mas_is_err(mas)) {
> + ret = xa_err(mas->node);
> + mas_destroy(mas);
> + mas_reset(mas);
> + return ret;
> }
>
> - /* New root needs a single node */
> - if (unlikely(mte_is_root(mas->node)))
> - goto ask_now;
> -
> - /* Potential spanning rebalance collapsing a node, use worst-case */
> - if (node_size - 1 <= mt_min_slots[wr_mas.type])
> - request = mas_mt_height(mas) * 2 - 1;
> -
> - /* node store, slot store needs one node */
> -ask_now:
> - mas_node_count_gfp(mas, request, gfp);
> mas->mas_flags |= MA_STATE_PREALLOC;
> - if (likely(!mas_is_err(mas)))
> - return 0;
> -
> - mas_set_alloc_req(mas, 0);
> - ret = xa_err(mas->node);
> - mas_reset(mas);
> - mas_destroy(mas);
> - mas_reset(mas);
> return ret;
> }
> EXPORT_SYMBOL_GPL(mas_preallocate);
> @@ -5594,7 +5687,8 @@ void mas_destroy(struct ma_state *mas)
> */
> if (mas->mas_flags & MA_STATE_REBALANCE) {
> unsigned char end;
> -

nit: This new line should remain.

> + if (mas_is_err(mas))
> + mas_reset(mas);
> mas_start(mas);
> mtree_range_walk(mas);
> end = mas->end + 1;
> diff --git a/tools/testing/radix-tree/maple.c b/tools/testing/radix-tree/maple.c
> index f1caf4bcf937..1c68ccc1b475 100644
> --- a/tools/testing/radix-tree/maple.c
> +++ b/tools/testing/radix-tree/maple.c
> @@ -36223,6 +36223,40 @@ static noinline void __init check_mtree_dup(struct maple_tree *mt)
>
> extern void test_kmem_cache_bulk(void);
>
> +

nit: This new line should not be added.

> + /* test to simulate expanding a vma from [0x7fffffffe000, 0x7ffffffff000)
> + * to [0x7ffde4ca1000, 0x7ffffffff000) and then shrinking the vma to
> + * [0x7ffde4ca1000, 0x7ffde4ca2000)
> + */
> +static inline int check_vma_modification(struct maple_tree *mt)
> +{
> + MA_STATE(mas, mt, 0, 0);
> +
> + mtree_lock(mt);
> + /* vma with old start and old end */
> + __mas_set_range(&mas, 0x7fffffffe000, 0x7ffffffff000 - 1);
> + mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
> + mas_store_prealloc(&mas, xa_mk_value(1));
> +
> + /* next write occurs partly in previous range [0, 0x7fffffffe000)*/
> + mas_prev_range(&mas, 0);
> + /* expand vma to {0x7ffde4ca1000, 0x7ffffffff000) */
> + __mas_set_range(&mas, 0x7ffde4ca1000, 0x7ffffffff000 - 1);
> + mas_preallocate(&mas, xa_mk_value(1), GFP_KERNEL);
> + mas_store_prealloc(&mas, xa_mk_value(1));
> +
> + /* shrink vma to [0x7ffde4ca1000, 7ffde4ca2000) */
> + __mas_set_range(&mas, 0x7ffde4ca2000, 0x7ffffffff000 - 1);
> + mas_preallocate(&mas, NULL, GFP_KERNEL);
> + mas_store_prealloc(&mas, NULL);
> + mt_dump(mt, mt_dump_hex);
> +
> + mas_destroy(&mas);
> + mtree_unlock(mt);
> + return 0;
> +}
> +
> +

nit: This new line should not be added.

> void farmer_tests(void)
> {
> struct maple_node *node;
> @@ -36230,6 +36264,10 @@ void farmer_tests(void)
>
> mt_dump(&tree, mt_dump_dec);
>
> + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | MT_FLAGS_USE_RCU);
> + check_vma_modification(&tree);
> + mtree_destroy(&tree);
> +
> tree.ma_root = xa_mk_value(0);
> mt_dump(&tree, mt_dump_dec);
>
> --
> 2.45.2
>

2024-06-13 14:29:52

by Liam R. Howlett

Subject: Re: [PATCH v2 08/16] maple_tree: print store type in mas_dump()

* Sidhartha Kumar <[email protected]> [240607 14:53]:
> Knowing the store type of the maple state could be helpful for debugging.
> Have mas_dump() print mas->store_type.
>
> Signed-off-by: Sidhartha Kumar <[email protected]>
> ---
> lib/maple_tree.c | 39 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index a955316b6745..91b18eef81e1 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -7731,6 +7731,45 @@ void mas_dump(const struct ma_state *mas)
> break;
> }
>
> + pr_err("Store Type: ");
> + switch (mas->store_type) {
> + case wr_invalid:
> + pr_err("invalid store type\n");
> + break;
> + case wr_new_root:
> + pr_err("new_root\n");
> + break;
> + case wr_store_root:
> + pr_err("store_root\n");
> + break;
> + case wr_exact_fit:
> + pr_err("exact_fit\n");
> + break;
> + case wr_split_store:
> + pr_err("split_store\n");
> + break;
> + case wr_slot_store:
> + pr_err("slot_store\n");
> + break;
> + case wr_append:
> + pr_err("append\n");
> + break;
> + case wr_node_store:
> + pr_err("node_store\n");
> + break;
> + case wr_spanning_store:
> + pr_err("spanning_store\n");
> + break;
> + case wr_rebalance:
> + pr_err("rebalance\n");
> + break;
> + case wr_bnode:
> + pr_err("write_bnode\n");
> + break;
> +
> + }
> +
> +

There is an extra new line here

> pr_err("[%u/%u] index=%lx last=%lx\n", mas->offset, mas->end,
> mas->index, mas->last);
> pr_err(" min=%lx max=%lx alloc=%p, depth=%u, flags=%x\n",
> --
> 2.45.2
>

2024-06-13 14:41:42

by Liam R. Howlett

Subject: Re: [PATCH v2 10/16] maple_tree: convert mas_insert() to preallocate nodes

* Sidhartha Kumar <[email protected]> [240607 14:53]:
> By setting the store type in mas_insert(), we no longer need to use
> mas_wr_modify() to determine the correct store function to use. Instead,
> set the store type and call mas_wr_store_entry(). Also, pass in the
> requested gfp flags to mas_insert() so they can be passed to the call to
> mas_wr_preallocate().

This could have been done without passing in the gfp flags, by using the
same flags as before (GFP_KERNEL). However, the only user already has
the gfp flags available, so this makes sense.

We may need to add a _gfp() version in the future, but most users want
to mas_store() vs mas_insert(), so I'm fine with this change.

>
> Signed-off-by: Sidhartha Kumar <[email protected]>
> ---
> lib/maple_tree.c | 33 ++++++++++++++++-----------------
> 1 file changed, 16 insertions(+), 17 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 2c42e99c400c..c37bfac4f622 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4442,11 +4442,12 @@ static inline void mas_wr_preallocate(struct ma_wr_state *wr_mas, void *entry, g
> * mas_insert() - Internal call to insert a value
> * @mas: The maple state
> * @entry: The entry to store
> + * @gfp: The GFP_FLAGS to use for allocations
> *
> * Return: %NULL or the contents that already exists at the requested index
> * otherwise. The maple state needs to be checked for error conditions.
> */
> -static inline void *mas_insert(struct ma_state *mas, void *entry)
> +static inline void *mas_insert(struct ma_state *mas, void *entry, gfp_t gfp)
> {
> MA_WR_STATE(wr_mas, mas, entry);
>
> @@ -4468,26 +4469,24 @@ static inline void *mas_insert(struct ma_state *mas, void *entry)
> if (wr_mas.content)
> goto exists;
>
> - if (mas_is_none(mas) || mas_is_ptr(mas)) {
> - mas_store_root(mas, entry);
> + mas_wr_preallocate(&wr_mas, entry, gfp);
> + if (mas_is_err(mas))
> return NULL;
> - }
>
> /* spanning writes always overwrite something */
> - if (!mas_wr_walk(&wr_mas))
> + if (mas->store_type == wr_spanning_store)
> goto exists;
>
> /* At this point, we are at the leaf node that needs to be altered. */
> - wr_mas.offset_end = mas->offset;
> - wr_mas.end_piv = wr_mas.r_max;
> -
> - if (wr_mas.content || (mas->last > wr_mas.r_max))
> - goto exists;
> + if (mas->store_type != wr_new_root && mas->store_type != wr_store_root) {
> + wr_mas.offset_end = mas->offset;
> + wr_mas.end_piv = wr_mas.r_max;
>
> - if (!entry)
> - return NULL;
> + if (wr_mas.content || (mas->last > wr_mas.r_max))
> + goto exists;
> + }
>
> - mas_wr_modify(&wr_mas);
> + mas_wr_store_entry(&wr_mas);
> return wr_mas.content;
>
> exists:
> @@ -4532,7 +4531,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
> return ret;
>
> do {
> - mas_insert(mas, entry);
> + mas_insert(mas, entry, gfp);
> } while (mas_nomem(mas, gfp));
> if (mas_is_err(mas))
> return xa_err(mas->node);
> @@ -6536,7 +6535,7 @@ int mtree_insert_range(struct maple_tree *mt, unsigned long first,
>
> mtree_lock(mt);
> retry:
> - mas_insert(&ms, entry);
> + mas_insert(&ms, entry, gfp);
> if (mas_nomem(&ms, gfp))
> goto retry;
>
> @@ -6585,7 +6584,7 @@ int mtree_alloc_range(struct maple_tree *mt, unsigned long *startp,
> if (ret)
> goto unlock;
>
> - mas_insert(&mas, entry);
> + mas_insert(&mas, entry, gfp);
> /*
> * mas_nomem() may release the lock, causing the allocated area
> * to be unavailable, so try to allocate a free area again.
> @@ -6667,7 +6666,7 @@ int mtree_alloc_rrange(struct maple_tree *mt, unsigned long *startp,
> if (ret)
> goto unlock;
>
> - mas_insert(&mas, entry);
> + mas_insert(&mas, entry, gfp);
> /*
> * mas_nomem() may release the lock, causing the allocated area
> * to be unavailable, so try to allocate a free area again.
> --
> 2.45.2
>

2024-06-13 15:03:12

by Liam R. Howlett

Subject: Re: [PATCH v2 16/16] maple_tree: remove unneeded mas_wr_walk() in mas_store_prealloc()

* Sidhartha Kumar <[email protected]> [240607 14:53]:
> Users of mas_store_prealloc() enter this function with nodes already
> preallocated. This means the store type must be already set. We can then
> remove the call to mas_wr_store_type() and initialize the write state to
> continue the partial walk that was done when determining the store type.
>
> Signed-off-by: Sidhartha Kumar <[email protected]>
> ---
> lib/maple_tree.c | 18 +++++++++++++-----
> 1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 3d334e7c19a5..5854f25a4ae5 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -4027,9 +4027,6 @@ static inline void mas_wr_end_piv(struct ma_wr_state *wr_mas)
> wr_mas->end_piv = wr_mas->pivots[wr_mas->offset_end];
> else
> wr_mas->end_piv = wr_mas->mas->max;
> -
> - if (!wr_mas->entry)
> - mas_wr_extend_null(wr_mas);
> }
>
> static inline unsigned char mas_wr_new_end(struct ma_wr_state *wr_mas)
> @@ -5598,12 +5595,23 @@ void mas_store_prealloc(struct ma_state *mas, void *entry)
> {
> MA_WR_STATE(wr_mas, mas, entry);
>
> - mas_wr_prealloc_setup(&wr_mas);
> - mas_wr_store_type(&wr_mas);
> + if (mas->store_type == wr_store_root) {
> + mas_wr_prealloc_setup(&wr_mas);
> + goto store;
> + }
> +
> + mas_wr_walk_descend(&wr_mas);
> + if (mas->store_type != wr_spanning_store) {
> + /* set wr_mas->content to current slot */
> + wr_mas.content = mas_slot_locked(mas, wr_mas.slots, mas->offset);
> + mas_wr_end_piv(&wr_mas);
> + }
> +store:
> trace_ma_write(__func__, mas, 0, entry);
> mas_wr_store_entry(&wr_mas);
> MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
> mas_destroy(mas);
> +

nit, this new line should probably be before the "store" label?

> }
> EXPORT_SYMBOL_GPL(mas_store_prealloc);
>
> --
> 2.45.2
>

2024-06-13 15:08:49

by Liam R. Howlett

Subject: Re: [PATCH v2 13/16] maple_tree: have mas_store() allocate nodes if needed

* Sidhartha Kumar <[email protected]> [240607 14:53]:
> Not all users of mas_store() enter with nodes already preallocated.
> Check for the MA_STATE_PREALLOC flag to decide whether to preallocate nodes
> within mas_store() rather than relying on future write helper functions
> to perform the allocations. This allows the write helper functions to be
> simplified as they do not have to do checks to make sure there are
> enough allocated nodes to perform the write.
>
> Signed-off-by: Sidhartha Kumar <[email protected]>
> ---
> lib/maple_tree.c | 32 ++++++++++++++++++++++++++++----
> 1 file changed, 28 insertions(+), 4 deletions(-)
>
> diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> index 8b17768db5f2..92f133ea5f00 100644
> --- a/lib/maple_tree.c
> +++ b/lib/maple_tree.c
> @@ -5528,6 +5528,20 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
> mt_destroy_walk(enode, mt, true);
> }
> }
> +
> +static inline void mas_wr_store_prealloc(struct ma_wr_state *wr_mas, void *entry)
> +{
> + struct ma_state *mas = wr_mas->mas;
> + int request;
> +
> + mas_wr_prealloc_setup(wr_mas);
> + mas_wr_store_type(wr_mas);
> + request = mas_prealloc_calc(mas, entry);
> + if (!request)
> + return;
> +
> + mas_node_count(mas, request);
> +}
> /* Interface */
>
> /**
> @@ -5536,8 +5550,6 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
> * @entry: The entry to store.
> *
> * The @mas->index and @mas->last is used to set the range for the @entry.
> - * Note: The @mas should have pre-allocated entries to ensure there is memory to
> - * store the entry. Please see mas_expected_entries()/mas_destroy() for more details.
> *
> * Return: the first entry between mas->index and mas->last or %NULL.
> */
> @@ -5563,9 +5575,21 @@ void *mas_store(struct ma_state *mas, void *entry)
> * want to examine what happens if a single store operation was to
> * overwrite multiple entries within a self-balancing B-Tree.
> */
> - mas_wr_prealloc_setup(&wr_mas);
> - mas_wr_store_type(&wr_mas);
> + if (mas->mas_flags & MA_STATE_PREALLOC) {
> + mas_wr_prealloc_setup(&wr_mas);
> + mas_wr_store_type(&wr_mas);
> + mas_wr_store_entry(&wr_mas);
> + MAS_WR_BUG_ON(&wr_mas, mas_is_err(mas));
> + return wr_mas.content;
> + }
> +
> + mas_wr_store_prealloc(&wr_mas, entry);

Since this is the only place mas_wr_store_prealloc() is called, and the
first two things it does are the same as the if() statement above, both
calls can be moved outside of the if() statement. That also means the
helper function is somewhat small, which makes it seem like it's
probably not worth having? Maybe I missed something though?

> + WARN_ON_ONCE(mas->store_type == wr_invalid);
> + if (mas_is_err(mas))
> + return NULL;
> +
> mas_wr_store_entry(&wr_mas);
> + mas_destroy(mas);
> return wr_mas.content;
> }
> EXPORT_SYMBOL_GPL(mas_store);
> --
> 2.45.2
>