2023-01-16 08:04:23

by Ojaswin Mujoo

Subject: [PATCH v3 0/8] ext4: Convert inode preallocation list to an rbtree

This patch series aims to improve the performance and scalability of
inode preallocation by changing the inode preallocation linked list to an
rbtree. I've run the xfstests quick group on this series and plan to run
the auto group as well to confirm we have no regressions.

** Shortcomings of existing implementation **

Right now, we add all the inode preallocations (PAs) to a per-inode linked
list ei->i_prealloc_list. To prevent the list from growing infinitely
during heavy sparse workloads, the length of this list was capped at 512
and trimming logic was added to trim the list whenever it grew over
this threshold, in commit 27bc446e2. This was discussed in detail in the
following lore thread [1].

[1] https://lore.kernel.org/all/[email protected]/

But from our testing, we noticed that the current implementation still
had issues with scalability, as performance degraded when the number of
PAs stored in the list grew. Most of the degradation was seen in
ext4_mb_normalize_request() and ext4_mb_use_preallocated(), as they
iterate over the complete inode PA list.

** Improvements in this patchset **

To counter the above shortcomings, this patch series converts the inode
PA list to an rbtree, which:

- improves the performance of the functions discussed above due to the
improved lookup speed.

- improves scalability by changing the lookup complexity from O(n) to
O(log n). For instance, at the old cap of 512 PAs, a lookup now visits
at most ~9 tree levels instead of walking up to 512 list entries. We
also no longer need the trimming logic.

As a result, the RCU implementation needed to be changed, since
lockless lookups of rbtrees have issues like skipping entire
subtrees. Hence, RCU was replaced with a read-write lock for inode
PAs. More information can be found in Patch 7 (which has the core
changes).
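
For illustration, here is a minimal sketch of the shape of the new fast
path: an rbtree descent keyed on pa_lstart under the read lock. This is a
hypothetical helper, not the exact patch code; the real implementation
converts pa_len from clusters to blocks with EXT4_C2B(), skips deleted PAs,
and pins the PA (pa->pa_lock/pa_count) before dropping the tree lock (see
Patch 7):

        static struct ext4_prealloc_space *
        pa_rbtree_lookup(struct ext4_inode_info *ei, ext4_lblk_t lblk)
        {
                struct rb_node *iter;
                struct ext4_prealloc_space *tmp_pa;

                read_lock(&ei->i_prealloc_lock);
                for (iter = ei->i_prealloc_node.rb_node; iter;) {
                        tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
                                          pa_node.inode_node);
                        if (lblk < tmp_pa->pa_lstart) {
                                iter = iter->rb_left;
                        } else if (lblk >= tmp_pa->pa_lstart + tmp_pa->pa_len) {
                                iter = iter->rb_right;
                        } else {
                                /* lblk lies inside this PA; only O(log n)
                                 * nodes were visited to find it */
                                read_unlock(&ei->i_prealloc_lock);
                                return tmp_pa;
                        }
                }
                read_unlock(&ei->i_prealloc_lock);
                return NULL;
        }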

** Performance Numbers **

Performance numbers were collected with and without these patches, using an
NVMe device. Details of the tests/benchmarks used are as follows:

Test 1: 200,000 1KiB sparse writes (fio)
Test 2: Fill 5GiB with random writes, 1KiB burst size (fio)
Test 3: Same as Test 2, but do 4 sequential writes before jumping to a
        random offset (fio)
Test 4: Fill an 8GB FS with 2KiB files, 64 threads in parallel (fsmark)

+--------+-----------------------------------+-----------------------------------+
|        |            nodelalloc             |             delalloc              |
+--------+-----------------+-----------------+-----------------+-----------------+
|        |    Unpatched    |     Patched     |    Unpatched    |     Patched     |
+--------+-----------------+-----------------+-----------------+-----------------+
| Test 1 | 11.8 MB/s       | 23.3 MB/s       | 27.2 MB/s       | 63.7 MB/s       |
| Test 2 | 1617 MB/s       | 1740 MB/s       | 2223 MB/s       | 2208 MB/s       |
| Test 3 | 1715 MB/s       | 1823 MB/s       | 2346 MB/s       | 2364 MB/s       |
| Test 4 | 14284 files/sec | 14347 files/sec | 13762 files/sec | 13882 files/sec |
+--------+-----------------+-----------------+-----------------+-----------------+

In test 1, we see an almost 100-200% increase in performance due to the high
number of sparse writes, highlighting the bottleneck in the unpatched kernel.
Further, running "perf diff patched.data unpatched.data" for test 1 shows
the following:

2.83% +29.67% [kernel.vmlinux] [k] _raw_spin_lock
...
+3.33% [ext4] [k] ext4_mb_normalize_request.constprop.30
0.25% +2.81% [ext4] [k] ext4_mb_use_preallocated

Here we can see that the biggest difference is in the _raw_spin_lock()
function of the unpatched kernel, which is called from
ext4_mb_normalize_request() as seen here:

32.47% fio [kernel.vmlinux] [k] _raw_spin_lock
|
---_raw_spin_lock
|
--32.22%--ext4_mb_normalize_request.constprop.30

This comes from the spin_lock(&pa->pa_lock) that is taken for each PA we
iterate over in ext4_mb_normalize_request(). Since with an rbtree we visit
O(log n) PAs rather than n PAs, this spin lock is taken much less
frequently, as is evident in the perf data.
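
For reference, the pattern in the unpatched kernel that produces this
_raw_spin_lock load looks roughly as follows (condensed from the unpatched
ext4_mb_normalize_request(); see Patch 3 for the full loop):

        rcu_read_lock();
        list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
                /* one lock/unlock pair per PA: O(n) acquisitions
                 * for every allocation */
                spin_lock(&pa->pa_lock);
                /* ... overlap checks against the normalized range ... */
                spin_unlock(&pa->pa_lock);
        }
        rcu_read_unlock();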

The other tests also show some improvements; however, since they don't
exercise the PA traversal path as heavily as test 1, the improvements are
relatively smaller.

** Summary of patches **

- Patch 1-5: Abstractions/Minor optimizations
- Patch 6: Split common inode & locality group specific fields into a union
- Patch 7: Core changes to move inode PA logic from list to rbtree
- Patch 8: Remove the trim logic as it is not needed

** Changes since PATCH v2 [1] **
- In patch 7, include a design change for handling deleted PAs in the
  inode rbtree that overlap with the to-be-inserted PA when adjusting
  overlaps. More details in the patch.
  (Removed Jan's RVB for this patch)

** Changes since PATCH v1 **
- Fixed styling issue
- Merged ext4_mb_rb_insert() and ext4_mb_pa_cmp()

** Changes since RFC v3 **
- Changed while loops to for loops in patch 7
- Fixed some data types
- Made the rbtree comparison logic more intuitive. The
  rbtree insertion function is still kept separate from the
  comparison function for reusability.

** Changes since RFC v2 **
- Added a function definition that was deleted during v2 rebase

** Changes since RFC v1 **
- Rebased over ext4 dev branch which includes Jan's patchset
that changed some code in mballoc.c

[1] https://lore.kernel.org/linux-ext4/[email protected]/

Ojaswin Mujoo (8):
ext4: Stop searching if PA doesn't satisfy non-extent file
ext4: Refactor code related to freeing PAs
ext4: Refactor code in ext4_mb_normalize_request() and
ext4_mb_use_preallocated()
ext4: Move overlap assert logic into a separate function
ext4: Abstract out overlap fix/check logic in
ext4_mb_normalize_request()
ext4: Convert pa->pa_inode_list and pa->pa_obj_lock into a union
ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list
ext4: Remove the logic to trim inode PAs

Documentation/admin-guide/ext4.rst | 3 -
fs/ext4/ext4.h | 5 +-
fs/ext4/mballoc.c | 477 +++++++++++++++++++----------
fs/ext4/mballoc.h | 17 +-
fs/ext4/super.c | 4 +-
fs/ext4/sysfs.c | 2 -
6 files changed, 333 insertions(+), 175 deletions(-)

--
2.31.1


2023-01-16 08:04:25

by Ojaswin Mujoo

Subject: [PATCH v3 3/8] ext4: Refactor code in ext4_mb_normalize_request() and ext4_mb_use_preallocated()

Change some variable names to be more consistent and
refactor some of the code to make it easier to read.

There are no functional changes in this patch.

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/ext4/mballoc.c | 97 ++++++++++++++++++++++++-----------------------
1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index a5f2803aff93..f4e699bce99f 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3999,7 +3999,8 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
loff_t orig_size __maybe_unused;
ext4_lblk_t start;
struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
- struct ext4_prealloc_space *pa;
+ struct ext4_prealloc_space *tmp_pa;
+ ext4_lblk_t tmp_pa_start, tmp_pa_end;

/* do normalize only data requests, metadata requests
do not need preallocation */
@@ -4102,56 +4103,53 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,

/* check we don't cross already preallocated blocks */
rcu_read_lock();
- list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
- ext4_lblk_t pa_end;
-
- if (pa->pa_deleted)
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ if (tmp_pa->pa_deleted)
continue;
- spin_lock(&pa->pa_lock);
- if (pa->pa_deleted) {
- spin_unlock(&pa->pa_lock);
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted) {
+ spin_unlock(&tmp_pa->pa_lock);
continue;
}

- pa_end = pa->pa_lstart + EXT4_C2B(EXT4_SB(ac->ac_sb),
- pa->pa_len);
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);

/* PA must not overlap original request */
- BUG_ON(!(ac->ac_o_ex.fe_logical >= pa_end ||
- ac->ac_o_ex.fe_logical < pa->pa_lstart));
+ BUG_ON(!(ac->ac_o_ex.fe_logical >= tmp_pa_end ||
+ ac->ac_o_ex.fe_logical < tmp_pa_start));

/* skip PAs this normalized request doesn't overlap with */
- if (pa->pa_lstart >= end || pa_end <= start) {
- spin_unlock(&pa->pa_lock);
+ if (tmp_pa_start >= end || tmp_pa_end <= start) {
+ spin_unlock(&tmp_pa->pa_lock);
continue;
}
- BUG_ON(pa->pa_lstart <= start && pa_end >= end);
+ BUG_ON(tmp_pa_start <= start && tmp_pa_end >= end);

/* adjust start or end to be adjacent to this pa */
- if (pa_end <= ac->ac_o_ex.fe_logical) {
- BUG_ON(pa_end < start);
- start = pa_end;
- } else if (pa->pa_lstart > ac->ac_o_ex.fe_logical) {
- BUG_ON(pa->pa_lstart > end);
- end = pa->pa_lstart;
+ if (tmp_pa_end <= ac->ac_o_ex.fe_logical) {
+ BUG_ON(tmp_pa_end < start);
+ start = tmp_pa_end;
+ } else if (tmp_pa_start > ac->ac_o_ex.fe_logical) {
+ BUG_ON(tmp_pa_start > end);
+ end = tmp_pa_start;
}
- spin_unlock(&pa->pa_lock);
+ spin_unlock(&tmp_pa->pa_lock);
}
rcu_read_unlock();
size = end - start;

/* XXX: extra loop to check we really don't overlap preallocations */
rcu_read_lock();
- list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
- ext4_lblk_t pa_end;
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted == 0) {
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);

- spin_lock(&pa->pa_lock);
- if (pa->pa_deleted == 0) {
- pa_end = pa->pa_lstart + EXT4_C2B(EXT4_SB(ac->ac_sb),
- pa->pa_len);
- BUG_ON(!(start >= pa_end || end <= pa->pa_lstart));
+ BUG_ON(!(start >= tmp_pa_end || end <= tmp_pa_start));
}
- spin_unlock(&pa->pa_lock);
+ spin_unlock(&tmp_pa->pa_lock);
}
rcu_read_unlock();

@@ -4361,7 +4359,8 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
int order, i;
struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
struct ext4_locality_group *lg;
- struct ext4_prealloc_space *pa, *cpa = NULL;
+ struct ext4_prealloc_space *tmp_pa, *cpa = NULL;
+ ext4_lblk_t tmp_pa_start, tmp_pa_end;
ext4_fsblk_t goal_block;

/* only data can be preallocated */
@@ -4370,18 +4369,20 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)

/* first, try per-file preallocation */
rcu_read_lock();
- list_for_each_entry_rcu(pa, &ei->i_prealloc_list, pa_inode_list) {
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {

/* all fields in this condition don't change,
* so we can skip locking for them */
- if (ac->ac_o_ex.fe_logical < pa->pa_lstart ||
- ac->ac_o_ex.fe_logical >= (pa->pa_lstart +
- EXT4_C2B(sbi, pa->pa_len)))
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+
+ if (ac->ac_o_ex.fe_logical < tmp_pa_start ||
+ ac->ac_o_ex.fe_logical >= tmp_pa_end)
continue;

/* non-extent files can't have physical blocks past 2^32 */
if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)) &&
- (pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len) >
+ (tmp_pa->pa_pstart + EXT4_C2B(sbi, tmp_pa->pa_len) >
EXT4_MAX_BLOCK_FILE_PHYS)) {
/*
* Since PAs don't overlap, we won't find any
@@ -4391,16 +4392,16 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
}

/* found preallocated blocks, use them */
- spin_lock(&pa->pa_lock);
- if (pa->pa_deleted == 0 && pa->pa_free) {
- atomic_inc(&pa->pa_count);
- ext4_mb_use_inode_pa(ac, pa);
- spin_unlock(&pa->pa_lock);
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted == 0 && tmp_pa->pa_free) {
+ atomic_inc(&tmp_pa->pa_count);
+ ext4_mb_use_inode_pa(ac, tmp_pa);
+ spin_unlock(&tmp_pa->pa_lock);
ac->ac_criteria = 10;
rcu_read_unlock();
return true;
}
- spin_unlock(&pa->pa_lock);
+ spin_unlock(&tmp_pa->pa_lock);
}
rcu_read_unlock();

@@ -4424,16 +4425,16 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
*/
for (i = order; i < PREALLOC_TB_SIZE; i++) {
rcu_read_lock();
- list_for_each_entry_rcu(pa, &lg->lg_prealloc_list[i],
+ list_for_each_entry_rcu(tmp_pa, &lg->lg_prealloc_list[i],
pa_inode_list) {
- spin_lock(&pa->pa_lock);
- if (pa->pa_deleted == 0 &&
- pa->pa_free >= ac->ac_o_ex.fe_len) {
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted == 0 &&
+ tmp_pa->pa_free >= ac->ac_o_ex.fe_len) {

cpa = ext4_mb_check_group_pa(goal_block,
- pa, cpa);
+ tmp_pa, cpa);
}
- spin_unlock(&pa->pa_lock);
+ spin_unlock(&tmp_pa->pa_lock);
}
rcu_read_unlock();
}
--
2.31.1

2023-01-16 08:04:34

by Ojaswin Mujoo

Subject: [PATCH v3 5/8] ext4: Abstract out overlap fix/check logic in ext4_mb_normalize_request()

Abstract out the logic of fixing PA overlaps in ext4_mb_normalize_request()
to improve the readability of the code. This also makes it easier to make
changes to the overlap logic in the future.

There are no functional changes in this patch.

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/ext4/mballoc.c | 110 +++++++++++++++++++++++++++++-----------------
1 file changed, 69 insertions(+), 41 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 1628b008a096..fdb9d0a8f35d 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -4007,6 +4007,74 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
rcu_read_unlock();
}

+/*
+ * Given an allocation context "ac" and a range "start", "end", check
+ * and adjust boundaries if the range overlaps with any of the existing
+ * preallocatoins stored in the corresponding inode of the allocation context.
+ *
+ *Parameters:
+ * ac allocation context
+ * start start of the new range
+ * end end of the new range
+ */
+static inline void
+ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
+ ext4_lblk_t *start, ext4_lblk_t *end)
+{
+ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ struct ext4_prealloc_space *tmp_pa;
+ ext4_lblk_t new_start, new_end;
+ ext4_lblk_t tmp_pa_start, tmp_pa_end;
+
+ new_start = *start;
+ new_end = *end;
+
+ /* check we don't cross already preallocated blocks */
+ rcu_read_lock();
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ if (tmp_pa->pa_deleted)
+ continue;
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted) {
+ spin_unlock(&tmp_pa->pa_lock);
+ continue;
+ }
+
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+
+ /* PA must not overlap original request */
+ BUG_ON(!(ac->ac_o_ex.fe_logical >= tmp_pa_end ||
+ ac->ac_o_ex.fe_logical < tmp_pa_start));
+
+ /* skip PAs this normalized request doesn't overlap with */
+ if (tmp_pa_start >= new_end || tmp_pa_end <= new_start) {
+ spin_unlock(&tmp_pa->pa_lock);
+ continue;
+ }
+ BUG_ON(tmp_pa_start <= new_start && tmp_pa_end >= new_end);
+
+ /* adjust start or end to be adjacent to this pa */
+ if (tmp_pa_end <= ac->ac_o_ex.fe_logical) {
+ BUG_ON(tmp_pa_end < new_start);
+ new_start = tmp_pa_end;
+ } else if (tmp_pa_start > ac->ac_o_ex.fe_logical) {
+ BUG_ON(tmp_pa_start > new_end);
+ new_end = tmp_pa_start;
+ }
+ spin_unlock(&tmp_pa->pa_lock);
+ }
+ rcu_read_unlock();
+
+ /* XXX: extra loop to check we really don't overlap preallocations */
+ ext4_mb_pa_assert_overlap(ac, new_start, new_end);
+
+ *start = new_start;
+ *end = new_end;
+ return;
+}
+
/*
* Normalization means making request better in terms of
* size and alignment
@@ -4021,9 +4089,6 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
loff_t size, start_off;
loff_t orig_size __maybe_unused;
ext4_lblk_t start;
- struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
- struct ext4_prealloc_space *tmp_pa;
- ext4_lblk_t tmp_pa_start, tmp_pa_end;

/* do normalize only data requests, metadata requests
do not need preallocation */
@@ -4124,47 +4189,10 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,

end = start + size;

- /* check we don't cross already preallocated blocks */
- rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
- if (tmp_pa->pa_deleted)
- continue;
- spin_lock(&tmp_pa->pa_lock);
- if (tmp_pa->pa_deleted) {
- spin_unlock(&tmp_pa->pa_lock);
- continue;
- }
-
- tmp_pa_start = tmp_pa->pa_lstart;
- tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
-
- /* PA must not overlap original request */
- BUG_ON(!(ac->ac_o_ex.fe_logical >= tmp_pa_end ||
- ac->ac_o_ex.fe_logical < tmp_pa_start));
-
- /* skip PAs this normalized request doesn't overlap with */
- if (tmp_pa_start >= end || tmp_pa_end <= start) {
- spin_unlock(&tmp_pa->pa_lock);
- continue;
- }
- BUG_ON(tmp_pa_start <= start && tmp_pa_end >= end);
+ ext4_mb_pa_adjust_overlap(ac, &start, &end);

- /* adjust start or end to be adjacent to this pa */
- if (tmp_pa_end <= ac->ac_o_ex.fe_logical) {
- BUG_ON(tmp_pa_end < start);
- start = tmp_pa_end;
- } else if (tmp_pa_start > ac->ac_o_ex.fe_logical) {
- BUG_ON(tmp_pa_start > end);
- end = tmp_pa_start;
- }
- spin_unlock(&tmp_pa->pa_lock);
- }
- rcu_read_unlock();
size = end - start;

- /* XXX: extra loop to check we really don't overlap preallocations */
- ext4_mb_pa_assert_overlap(ac, start, end);
-
/*
* In this function "start" and "size" are normalized for better
* alignment and length such that we could preallocate more blocks.
--
2.31.1

2023-01-16 08:04:39

by Ojaswin Mujoo

Subject: [PATCH v3 7/8] ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list

Currently, the kernel uses i_prealloc_list to hold all the inode
preallocations. This is known to cause degradation in performance in
workloads which perform a large number of sparse writes on a single file.
This is mainly because functions like ext4_mb_normalize_request() and
ext4_mb_use_preallocated() iterate over this complete list, resulting in
slowdowns when a large number of PAs are present.

Commit 27bc446e2 partially fixed this by enforcing a limit of 512 on
the inode preallocation list and adding logic to continually trim the
list if it grows above the threshold; however, our testing revealed that
a hardcoded value is not suitable for all kinds of workloads.

To optimize this, add an rbtree to the inode and hold the inode
preallocations in this rbtree. This will make iterating over inode PAs
faster and scale much better than a linked list. Additionally, we also
had to remove the LRU logic that was added during trimming of the list
(in ext4_mb_release_context()) as it would add extra overhead with an
rbtree. The discards now happen in lowest-logical-offset-first order.

** Locking notes **

With the introduction of an rbtree to maintain inode PAs, we can't use RCU
to walk the tree for searching, since that can result in partial traversals
which might miss some nodes (or entire subtrees) while discards happen
in parallel (which happens under a lock). Hence this patch converts the
ei->i_prealloc_lock spin_lock to an rwlock.

Almost all the codepaths that read/modify the PA rbtree are protected
by the higher level inode->i_data_sem (except
ext4_mb_discard_group_preallocations() and ext4_clear_inode()). IIUC, the
only place we need lock protection is when one thread is searching
the PA rbtree (earlier protected under rcu_read_lock()) while
another is deleting PAs in ext4_mb_discard_group_preallocations()
(which iterates over all the PAs using grp->bb_prealloc_list and
deletes PAs from the tree without taking any inode lock (i_data_sem)).

So, this patch converts all rcu_read_lock/unlock() paths for the inode
PA tree to use read_lock(), and all places where we were using the
ei->i_prealloc_lock spinlock now use write_lock().

Note that this makes the fast path (searching for the right PA, e.g. in
ext4_mb_use_preallocated() or ext4_mb_normalize_request()) use
read_lock() instead of rcu_read_lock/unlock(). The fast path can now
block on the slow discard path (ext4_mb_discard_group_preallocations()),
which uses write_lock().

But this is not as bad as it looks. This is because -

1. The slow path only occurs when the normal allocation fails, which
   means we are low on disk space. One can argue this scenario
   won't be very frequent.

2. ext4_mb_discard_group_preallocations() locks and unlocks the rwlock
   for deleting every individual PA. This gives enough opportunity for
   the fast path to acquire the read_lock for searching the PA inode
   tree (a rough sketch of this interplay follows below).
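
The sketch, condensed from the code in this patch:

        /* Fast path (allocation): short read-side critical section */
        read_lock(&ei->i_prealloc_lock);
        /* ... walk ei->i_prealloc_node looking for a usable PA ... */
        read_unlock(&ei->i_prealloc_lock);

        /* Slow path (discard): the write lock is taken and dropped per
         * PA rather than around the whole scan, so readers can
         * interleave between individual deletions */
        write_lock(pa->pa_node_lock.inode_lock);
        rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
        write_unlock(pa->pa_node_lock.inode_lock);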

** Design changes around deleted PAs **

In ext4_mb_pa_adjust_overlap(), it is possible for an allocating thread to
come across a PA that is marked deleted via
ext4_mb_discard_group_preallocations() but not yet removed from the
inode PA rbtree. In such a case, we ignore any overlaps with this PA
node and simply move forward in the rbtree by comparing the logical start
of the to-be-inserted PA with that of the deleted PA node.

Although this usually doesn't cause an issue and we can move forward in
the tree, in one special case, i.e. if the range of the deleted PA lies
completely inside the normalized range, we might need to traverse both
of the subtrees under such a deleted PA. For example, if the deleted PA
covers logical blocks [100, 200) and the normalized range is [90, 210),
overlapping PAs could exist in both its left and right subtrees.

To simplify this special scenario, and also as an optimization, undelete
the PA if the allocating thread determines that this PA might be needed
in the near future. This results in the following changes:

- ext4_mb_use_preallocated(): Use a deleted PA if the original request
  lies within it.
- ext4_mb_pa_adjust_overlap(): Undelete a PA if it is deleted but
  overlaps with the normalized range.
- ext4_mb_discard_group_preallocations(): Roll back the delete operation
  if the allocation path undeletes a PA before it is erased from the
  inode PA tree.

Since this covers the special case we discussed above, we will always
undelete the PA when we encounter the special case, and we can then
adjust for overlap and traverse the PA rbtree without any issues.
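
The core of the undelete rule, condensed from the
ext4_mb_pa_adjust_overlap() hunk below (is_overlap is computed against the
normalized range before taking pa_lock):

        spin_lock(&tmp_pa->pa_lock);
        if (tmp_pa->pa_deleted) {
                if (is_overlap) {
                        /* still in the tree and overlapping our range:
                         * undelete it instead of reasoning about both
                         * subtrees under a deleted node */
                        ext4_mb_mark_pa_inuse(ac->ac_sb, tmp_pa);
                } else {
                        /* safe to skip; pa_lstart still orders the walk */
                        spin_unlock(&tmp_pa->pa_lock);
                        continue;
                }
        }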

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
---
fs/ext4/ext4.h | 4 +-
fs/ext4/mballoc.c | 239 ++++++++++++++++++++++++++++++++++------------
fs/ext4/mballoc.h | 6 +-
fs/ext4/super.c | 4 +-
4 files changed, 183 insertions(+), 70 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 140e1eb300d1..fad5f087e4c6 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1120,8 +1120,8 @@ struct ext4_inode_info {

/* mballoc */
atomic_t i_prealloc_active;
- struct list_head i_prealloc_list;
- spinlock_t i_prealloc_lock;
+ struct rb_root i_prealloc_node;
+ rwlock_t i_prealloc_lock;

/* extents status tree */
struct ext4_es_tree i_es_tree;
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 53c4efd34d1c..85598079b7ce 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3984,6 +3984,44 @@ static void ext4_mb_normalize_group_request(struct ext4_allocation_context *ac)
mb_debug(sb, "goal %u blocks for locality group\n", ac->ac_g_ex.fe_len);
}

+static void ext4_mb_mark_pa_inuse(struct super_block *sb,
+ struct ext4_prealloc_space *pa)
+{
+ struct ext4_inode_info *ei;
+
+ if (pa->pa_deleted == 0) {
+ ext4_warning(
+ sb, "pa already inuse, type:%d, pblk:%llu, lblk:%u, len:%d\n",
+ pa->pa_type, pa->pa_pstart, pa->pa_lstart, pa->pa_len);
+ return;
+ }
+
+ pa->pa_deleted = 0;
+
+ if (pa->pa_type == MB_INODE_PA) {
+ ei = EXT4_I(pa->pa_inode);
+ atomic_inc(&ei->i_prealloc_active);
+ }
+}
+
+/*
+ * This function returns the next element to look at during inode
+ * PA rbtree walk. We assume that we have held the inode PA rbtree lock
+ * (ei->i_prealloc_lock)
+ *
+ * new_start The start of the range we want to compare
+ * cur_start The existing start that we are comparing against
+ * node The node of the rb_tree
+ */
+static inline struct rb_node*
+ext4_mb_pa_rb_next_iter(ext4_lblk_t new_start, ext4_lblk_t cur_start, struct rb_node *node)
+{
+ if (new_start < cur_start)
+ return node->rb_left;
+ else
+ return node->rb_right;
+}
+
static inline void
ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
ext4_lblk_t start, ext4_lblk_t end)
@@ -3992,27 +4030,31 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
struct ext4_prealloc_space *tmp_pa;
ext4_lblk_t tmp_pa_start, tmp_pa_end;
+ struct rb_node *iter;

- rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
- spin_lock(&tmp_pa->pa_lock);
- if (tmp_pa->pa_deleted == 0) {
- tmp_pa_start = tmp_pa->pa_lstart;
- tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+ read_lock(&ei->i_prealloc_lock);
+ iter = ei->i_prealloc_node.rb_node;
+ while (iter) {
+ tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
+ pa_node.inode_node);
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);

+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted == 0)
BUG_ON(!(start >= tmp_pa_end || end <= tmp_pa_start));
- }
spin_unlock(&tmp_pa->pa_lock);
+
+ iter = ext4_mb_pa_rb_next_iter(start, tmp_pa_start, iter);
}
- rcu_read_unlock();
+ read_unlock(&ei->i_prealloc_lock);
}
-
/*
* Given an allocation context "ac" and a range "start", "end", check
* and adjust boundaries if the range overlaps with any of the existing
* preallocatoins stored in the corresponding inode of the allocation context.
*
- *Parameters:
+ * Parameters:
* ac allocation context
* start start of the new range
* end end of the new range
@@ -4024,6 +4066,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
struct ext4_prealloc_space *tmp_pa;
+ struct rb_node *iter;
ext4_lblk_t new_start, new_end;
ext4_lblk_t tmp_pa_start, tmp_pa_end;

@@ -4031,25 +4074,40 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
new_end = *end;

/* check we don't cross already preallocated blocks */
- rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
- if (tmp_pa->pa_deleted)
- continue;
- spin_lock(&tmp_pa->pa_lock);
- if (tmp_pa->pa_deleted) {
- spin_unlock(&tmp_pa->pa_lock);
- continue;
- }
+ read_lock(&ei->i_prealloc_lock);
+ for (iter = ei->i_prealloc_node.rb_node; iter;
+ iter = ext4_mb_pa_rb_next_iter(new_start, tmp_pa_start, iter)) {
+ int is_overlap;

+ tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
+ pa_node.inode_node);
tmp_pa_start = tmp_pa->pa_lstart;
tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+ is_overlap = !(tmp_pa_start >= new_end || tmp_pa_end <= new_start);
+
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted) {
+ if (is_overlap) {
+ /*
+ * Normalized range overlaps with range of this
+ * deleted PA, that means we might need it in
+ * near future. Since the PA is yet to be
+ * removed from inode PA tree and freed, lets
+ * just undelete it.
+ */
+ ext4_mb_mark_pa_inuse(ac->ac_sb, tmp_pa);
+ } else {
+ spin_unlock(&tmp_pa->pa_lock);
+ continue;
+ }
+ }

/* PA must not overlap original request */
BUG_ON(!(ac->ac_o_ex.fe_logical >= tmp_pa_end ||
ac->ac_o_ex.fe_logical < tmp_pa_start));

/* skip PAs this normalized request doesn't overlap with */
- if (tmp_pa_start >= new_end || tmp_pa_end <= new_start) {
+ if (!is_overlap) {
spin_unlock(&tmp_pa->pa_lock);
continue;
}
@@ -4065,7 +4123,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
}
spin_unlock(&tmp_pa->pa_lock);
}
- rcu_read_unlock();
+ read_unlock(&ei->i_prealloc_lock);

/* XXX: extra loop to check we really don't overlap preallocations */
ext4_mb_pa_assert_overlap(ac, new_start, new_end);
@@ -4192,7 +4250,6 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
ext4_mb_pa_adjust_overlap(ac, &start, &end);

size = end - start;
-
/*
* In this function "start" and "size" are normalized for better
* alignment and length such that we could preallocate more blocks.
@@ -4401,6 +4458,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
struct ext4_locality_group *lg;
struct ext4_prealloc_space *tmp_pa, *cpa = NULL;
ext4_lblk_t tmp_pa_start, tmp_pa_end;
+ struct rb_node *iter;
ext4_fsblk_t goal_block;

/* only data can be preallocated */
@@ -4408,14 +4466,17 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
return false;

/* first, try per-file preallocation */
- rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
+ read_lock(&ei->i_prealloc_lock);
+ for (iter = ei->i_prealloc_node.rb_node; iter;
+ iter = ext4_mb_pa_rb_next_iter(ac->ac_o_ex.fe_logical, tmp_pa_start, iter)) {
+ tmp_pa = rb_entry(iter, struct ext4_prealloc_space, pa_node.inode_node);

/* all fields in this condition don't change,
* so we can skip locking for them */
tmp_pa_start = tmp_pa->pa_lstart;
tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);

+ /* original request start doesn't lie in this PA */
if (ac->ac_o_ex.fe_logical < tmp_pa_start ||
ac->ac_o_ex.fe_logical >= tmp_pa_end)
continue;
@@ -4433,17 +4494,25 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)

/* found preallocated blocks, use them */
spin_lock(&tmp_pa->pa_lock);
- if (tmp_pa->pa_deleted == 0 && tmp_pa->pa_free) {
+ if (tmp_pa->pa_free) {
+ if (tmp_pa->pa_deleted == 1) {
+ /*
+ * Since PA is yet to be deleted from inode PA
+ * rbtree, just undelete it and use it.
+ */
+ ext4_mb_mark_pa_inuse(ac->ac_sb, tmp_pa);
+ }
+
atomic_inc(&tmp_pa->pa_count);
ext4_mb_use_inode_pa(ac, tmp_pa);
spin_unlock(&tmp_pa->pa_lock);
ac->ac_criteria = 10;
- rcu_read_unlock();
+ read_unlock(&ei->i_prealloc_lock);
return true;
}
spin_unlock(&tmp_pa->pa_lock);
}
- rcu_read_unlock();
+ read_unlock(&ei->i_prealloc_lock);

/* can we use group allocation? */
if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
@@ -4596,6 +4665,7 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
{
ext4_group_t grp;
ext4_fsblk_t grp_blk;
+ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);

/* in this short window concurrent discard can set pa_deleted */
spin_lock(&pa->pa_lock);
@@ -4641,16 +4711,41 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
ext4_unlock_group(sb, grp);

if (pa->pa_type == MB_INODE_PA) {
- spin_lock(pa->pa_node_lock.inode_lock);
- list_del_rcu(&pa->pa_node.inode_list);
- spin_unlock(pa->pa_node_lock.inode_lock);
+ write_lock(pa->pa_node_lock.inode_lock);
+ rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
+ write_unlock(pa->pa_node_lock.inode_lock);
+ ext4_mb_pa_free(pa);
} else {
spin_lock(pa->pa_node_lock.lg_lock);
list_del_rcu(&pa->pa_node.lg_list);
spin_unlock(pa->pa_node_lock.lg_lock);
+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
}
+}

- call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+static void ext4_mb_pa_rb_insert(struct rb_root *root, struct rb_node *new)
+{
+ struct rb_node **iter = &root->rb_node, *parent = NULL;
+ struct ext4_prealloc_space *iter_pa, *new_pa;
+ ext4_lblk_t iter_start, new_start;
+
+ while (*iter) {
+ iter_pa = rb_entry(*iter, struct ext4_prealloc_space,
+ pa_node.inode_node);
+ new_pa = rb_entry(new, struct ext4_prealloc_space,
+ pa_node.inode_node);
+ iter_start = iter_pa->pa_lstart;
+ new_start = new_pa->pa_lstart;
+
+ parent = *iter;
+ if (new_start < iter_start)
+ iter = &((*iter)->rb_left);
+ else
+ iter = &((*iter)->rb_right);
+ }
+
+ rb_link_node(new, parent, iter);
+ rb_insert_color(new, root);
}

/*
@@ -4716,7 +4811,6 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
pa->pa_len = ac->ac_b_ex.fe_len;
pa->pa_free = pa->pa_len;
spin_lock_init(&pa->pa_lock);
- INIT_LIST_HEAD(&pa->pa_node.inode_list);
INIT_LIST_HEAD(&pa->pa_group_list);
pa->pa_deleted = 0;
pa->pa_type = MB_INODE_PA;
@@ -4736,9 +4830,9 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)

list_add(&pa->pa_group_list, &grp->bb_prealloc_list);

- spin_lock(pa->pa_node_lock.inode_lock);
- list_add_rcu(&pa->pa_node.inode_list, &ei->i_prealloc_list);
- spin_unlock(pa->pa_node_lock.inode_lock);
+ write_lock(pa->pa_node_lock.inode_lock);
+ ext4_mb_pa_rb_insert(&ei->i_prealloc_node, &pa->pa_node.inode_node);
+ write_unlock(pa->pa_node_lock.inode_lock);
atomic_inc(&ei->i_prealloc_active);
}

@@ -4904,6 +4998,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
struct ext4_prealloc_space *pa, *tmp;
struct list_head list;
struct ext4_buddy e4b;
+ struct ext4_inode_info *ei;
int err;
int free = 0;

@@ -4967,18 +5062,45 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
list_del_rcu(&pa->pa_node.lg_list);
spin_unlock(pa->pa_node_lock.lg_lock);
} else {
- spin_lock(pa->pa_node_lock.inode_lock);
- list_del_rcu(&pa->pa_node.inode_list);
- spin_unlock(pa->pa_node_lock.inode_lock);
+ /*
+ * The allocation path might undelete a PA
+ * incase it expects it to be used again in near
+ * future. In that case, rollback the ongoing delete
+ * operation and don't remove the PA from inode PA
+ * tree.
+ *
+ * TODO: See if we need pa_lock since there might no
+ * path that contends with it once the rbtree writelock
+ * is taken.
+ */
+ write_lock(pa->pa_node_lock.inode_lock);
+ spin_lock(&pa->pa_lock);
+ if (pa->pa_deleted == 0) {
+ free -= pa->pa_free;
+ list_add(&pa->pa_group_list,
+ &grp->bb_prealloc_list);
+ list_del(&pa->u.pa_tmp_list);
+
+ spin_unlock(&pa->pa_lock);
+ write_unlock(pa->pa_node_lock.inode_lock);
+ continue;
+ }
+ spin_unlock(&pa->pa_lock);
+
+ ei = EXT4_I(pa->pa_inode);
+ rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
+ write_unlock(pa->pa_node_lock.inode_lock);
}

- if (pa->pa_type == MB_GROUP_PA)
+ list_del(&pa->u.pa_tmp_list);
+
+ if (pa->pa_type == MB_GROUP_PA) {
ext4_mb_release_group_pa(&e4b, pa);
- else
+ call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ } else {
ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
-
- list_del(&pa->u.pa_tmp_list);
- call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ ext4_mb_pa_free(pa);
+ }
}

ext4_unlock_group(sb, group);
@@ -5008,6 +5130,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
ext4_group_t group = 0;
struct list_head list;
struct ext4_buddy e4b;
+ struct rb_node *iter;
int err;

if (!S_ISREG(inode->i_mode)) {
@@ -5030,17 +5153,18 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)

repeat:
/* first, collect all pa's in the inode */
- spin_lock(&ei->i_prealloc_lock);
- while (!list_empty(&ei->i_prealloc_list) && needed) {
- pa = list_entry(ei->i_prealloc_list.prev,
- struct ext4_prealloc_space, pa_node.inode_list);
+ write_lock(&ei->i_prealloc_lock);
+ for (iter = rb_first(&ei->i_prealloc_node); iter && needed; iter = rb_next(iter)) {
+ pa = rb_entry(iter, struct ext4_prealloc_space,
+ pa_node.inode_node);
BUG_ON(pa->pa_node_lock.inode_lock != &ei->i_prealloc_lock);
+
spin_lock(&pa->pa_lock);
if (atomic_read(&pa->pa_count)) {
/* this shouldn't happen often - nobody should
* use preallocation while we're discarding it */
spin_unlock(&pa->pa_lock);
- spin_unlock(&ei->i_prealloc_lock);
+ write_unlock(&ei->i_prealloc_lock);
ext4_msg(sb, KERN_ERR,
"uh-oh! used pa while discarding");
WARN_ON(1);
@@ -5051,7 +5175,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
if (pa->pa_deleted == 0) {
ext4_mb_mark_pa_deleted(sb, pa);
spin_unlock(&pa->pa_lock);
- list_del_rcu(&pa->pa_node.inode_list);
+ rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
list_add(&pa->u.pa_tmp_list, &list);
needed--;
continue;
@@ -5059,7 +5183,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)

/* someone is deleting pa right now */
spin_unlock(&pa->pa_lock);
- spin_unlock(&ei->i_prealloc_lock);
+ write_unlock(&ei->i_prealloc_lock);

/* we have to wait here because pa_deleted
* doesn't mean pa is already unlinked from
@@ -5076,7 +5200,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
schedule_timeout_uninterruptible(HZ);
goto repeat;
}
- spin_unlock(&ei->i_prealloc_lock);
+ write_unlock(&ei->i_prealloc_lock);

list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
BUG_ON(pa->pa_type != MB_INODE_PA);
@@ -5108,7 +5232,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
put_bh(bitmap_bh);

list_del(&pa->u.pa_tmp_list);
- call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
+ ext4_mb_pa_free(pa);
}
}

@@ -5482,7 +5606,6 @@ static void ext4_mb_trim_inode_pa(struct inode *inode)
static int ext4_mb_release_context(struct ext4_allocation_context *ac)
{
struct inode *inode = ac->ac_inode;
- struct ext4_inode_info *ei = EXT4_I(inode);
struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
struct ext4_prealloc_space *pa = ac->ac_pa;
if (pa) {
@@ -5509,16 +5632,6 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
}
}

- if (pa->pa_type == MB_INODE_PA) {
- /*
- * treat per-inode prealloc list as a lru list, then try
- * to trim the least recently used PA.
- */
- spin_lock(pa->pa_node_lock.inode_lock);
- list_move(&pa->pa_node.inode_list, &ei->i_prealloc_list);
- spin_unlock(pa->pa_node_lock.inode_lock);
- }
-
ext4_mb_put_pa(ac, ac->ac_sb, pa);
}
if (ac->ac_bitmap_page)
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 398a6688c341..f8e8ee493867 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -115,7 +115,7 @@ struct ext4_free_data {

struct ext4_prealloc_space {
union {
- struct list_head inode_list; /* for inode PAs */
+ struct rb_node inode_node; /* for inode PA rbtree */
struct list_head lg_list; /* for lg PAs */
} pa_node;
struct list_head pa_group_list;
@@ -132,10 +132,10 @@ struct ext4_prealloc_space {
ext4_grpblk_t pa_free; /* how many blocks are free */
unsigned short pa_type; /* pa type. inode or group */
union {
- spinlock_t *inode_lock; /* locks the inode list holding this PA */
+ rwlock_t *inode_lock; /* locks the rbtree holding this PA */
spinlock_t *lg_lock; /* locks the lg list holding this PA */
} pa_node_lock;
- struct inode *pa_inode; /* hack, for history only */
+ struct inode *pa_inode; /* used to get the inode during group discard */
};

enum {
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 72ead3b56706..5fb3e401de6b 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1325,9 +1325,9 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
inode_set_iversion(&ei->vfs_inode, 1);
ei->i_flags = 0;
spin_lock_init(&ei->i_raw_lock);
- INIT_LIST_HEAD(&ei->i_prealloc_list);
+ ei->i_prealloc_node = RB_ROOT;
atomic_set(&ei->i_prealloc_active, 0);
- spin_lock_init(&ei->i_prealloc_lock);
+ rwlock_init(&ei->i_prealloc_lock);
ext4_es_init_tree(&ei->i_es_tree);
rwlock_init(&ei->i_es_lock);
INIT_LIST_HEAD(&ei->i_es_list);
--
2.31.1

2023-01-16 08:04:49

by Ojaswin Mujoo

Subject: [PATCH v3 6/8] ext4: Convert pa->pa_inode_list and pa->pa_obj_lock into a union

** Splitting pa->pa_inode_list **

Currently, we use the same pa->pa_inode_list to add a pa to either
the inode preallocation list or the locality group preallocation list.
For better clarity, split this list into a union of 2 list_heads and use
either of them based on the type of the pa.

** Splitting pa->pa_obj_lock **

Currently, pa->pa_obj_lock is either assigned &ei->i_prealloc_lock for
inode PAs or lg_prealloc_lock for lg PAs, and is then used to lock the
lists containing these PAs. Make the distinction between the 2 PA types
clear by changing this lock to a union of 2 locks. Explicitly use
pa_node_lock.inode_lock for inode PAs and pa_node_lock.lg_lock for lg
PAs.

This patch is required so that the locality group preallocation code
remains the same, as the upcoming patches will change the inode
preallocation code to move from a list to an rbtree based
implementation. This patch also makes the upcoming patches easier to
review.

There are no functional changes in this patch.
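
As a sketch of the resulting invariant, pa_type decides which union member
is valid, and every caller must branch on it before touching pa_node or
pa_node_lock (condensed from the call sites in the diff below):

        union {
                struct list_head inode_list; /* valid iff pa_type == MB_INODE_PA */
                struct list_head lg_list;    /* valid iff pa_type == MB_GROUP_PA */
        } pa_node;

        if (pa->pa_type == MB_INODE_PA) {
                spin_lock(pa->pa_node_lock.inode_lock);
                list_del_rcu(&pa->pa_node.inode_list);
                spin_unlock(pa->pa_node_lock.inode_lock);
        } else {
                spin_lock(pa->pa_node_lock.lg_lock);
                list_del_rcu(&pa->pa_node.lg_list);
                spin_unlock(pa->pa_node_lock.lg_lock);
        }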

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/ext4/mballoc.c | 76 +++++++++++++++++++++++++++--------------------
fs/ext4/mballoc.h | 10 +++++--
2 files changed, 52 insertions(+), 34 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index fdb9d0a8f35d..53c4efd34d1c 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3994,7 +3994,7 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
ext4_lblk_t tmp_pa_start, tmp_pa_end;

rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
spin_lock(&tmp_pa->pa_lock);
if (tmp_pa->pa_deleted == 0) {
tmp_pa_start = tmp_pa->pa_lstart;
@@ -4032,7 +4032,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,

/* check we don't cross already preallocated blocks */
rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
if (tmp_pa->pa_deleted)
continue;
spin_lock(&tmp_pa->pa_lock);
@@ -4409,7 +4409,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)

/* first, try per-file preallocation */
rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {

/* all fields in this condition don't change,
* so we can skip locking for them */
@@ -4466,7 +4466,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
for (i = order; i < PREALLOC_TB_SIZE; i++) {
rcu_read_lock();
list_for_each_entry_rcu(tmp_pa, &lg->lg_prealloc_list[i],
- pa_inode_list) {
+ pa_node.lg_list) {
spin_lock(&tmp_pa->pa_lock);
if (tmp_pa->pa_deleted == 0 &&
tmp_pa->pa_free >= ac->ac_o_ex.fe_len) {
@@ -4640,9 +4640,15 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
list_del(&pa->pa_group_list);
ext4_unlock_group(sb, grp);

- spin_lock(pa->pa_obj_lock);
- list_del_rcu(&pa->pa_inode_list);
- spin_unlock(pa->pa_obj_lock);
+ if (pa->pa_type == MB_INODE_PA) {
+ spin_lock(pa->pa_node_lock.inode_lock);
+ list_del_rcu(&pa->pa_node.inode_list);
+ spin_unlock(pa->pa_node_lock.inode_lock);
+ } else {
+ spin_lock(pa->pa_node_lock.lg_lock);
+ list_del_rcu(&pa->pa_node.lg_list);
+ spin_unlock(pa->pa_node_lock.lg_lock);
+ }

call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
}
@@ -4710,7 +4716,7 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
pa->pa_len = ac->ac_b_ex.fe_len;
pa->pa_free = pa->pa_len;
spin_lock_init(&pa->pa_lock);
- INIT_LIST_HEAD(&pa->pa_inode_list);
+ INIT_LIST_HEAD(&pa->pa_node.inode_list);
INIT_LIST_HEAD(&pa->pa_group_list);
pa->pa_deleted = 0;
pa->pa_type = MB_INODE_PA;
@@ -4725,14 +4731,14 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
ei = EXT4_I(ac->ac_inode);
grp = ext4_get_group_info(sb, ac->ac_b_ex.fe_group);

- pa->pa_obj_lock = &ei->i_prealloc_lock;
+ pa->pa_node_lock.inode_lock = &ei->i_prealloc_lock;
pa->pa_inode = ac->ac_inode;

list_add(&pa->pa_group_list, &grp->bb_prealloc_list);

- spin_lock(pa->pa_obj_lock);
- list_add_rcu(&pa->pa_inode_list, &ei->i_prealloc_list);
- spin_unlock(pa->pa_obj_lock);
+ spin_lock(pa->pa_node_lock.inode_lock);
+ list_add_rcu(&pa->pa_node.inode_list, &ei->i_prealloc_list);
+ spin_unlock(pa->pa_node_lock.inode_lock);
atomic_inc(&ei->i_prealloc_active);
}

@@ -4764,7 +4770,7 @@ ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
pa->pa_len = ac->ac_b_ex.fe_len;
pa->pa_free = pa->pa_len;
spin_lock_init(&pa->pa_lock);
- INIT_LIST_HEAD(&pa->pa_inode_list);
+ INIT_LIST_HEAD(&pa->pa_node.lg_list);
INIT_LIST_HEAD(&pa->pa_group_list);
pa->pa_deleted = 0;
pa->pa_type = MB_GROUP_PA;
@@ -4780,7 +4786,7 @@ ext4_mb_new_group_pa(struct ext4_allocation_context *ac)
lg = ac->ac_lg;
BUG_ON(lg == NULL);

- pa->pa_obj_lock = &lg->lg_prealloc_lock;
+ pa->pa_node_lock.lg_lock = &lg->lg_prealloc_lock;
pa->pa_inode = NULL;

list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
@@ -4956,9 +4962,15 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {

/* remove from object (inode or locality group) */
- spin_lock(pa->pa_obj_lock);
- list_del_rcu(&pa->pa_inode_list);
- spin_unlock(pa->pa_obj_lock);
+ if (pa->pa_type == MB_GROUP_PA) {
+ spin_lock(pa->pa_node_lock.lg_lock);
+ list_del_rcu(&pa->pa_node.lg_list);
+ spin_unlock(pa->pa_node_lock.lg_lock);
+ } else {
+ spin_lock(pa->pa_node_lock.inode_lock);
+ list_del_rcu(&pa->pa_node.inode_list);
+ spin_unlock(pa->pa_node_lock.inode_lock);
+ }

if (pa->pa_type == MB_GROUP_PA)
ext4_mb_release_group_pa(&e4b, pa);
@@ -5021,8 +5033,8 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
spin_lock(&ei->i_prealloc_lock);
while (!list_empty(&ei->i_prealloc_list) && needed) {
pa = list_entry(ei->i_prealloc_list.prev,
- struct ext4_prealloc_space, pa_inode_list);
- BUG_ON(pa->pa_obj_lock != &ei->i_prealloc_lock);
+ struct ext4_prealloc_space, pa_node.inode_list);
+ BUG_ON(pa->pa_node_lock.inode_lock != &ei->i_prealloc_lock);
spin_lock(&pa->pa_lock);
if (atomic_read(&pa->pa_count)) {
/* this shouldn't happen often - nobody should
@@ -5039,7 +5051,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
if (pa->pa_deleted == 0) {
ext4_mb_mark_pa_deleted(sb, pa);
spin_unlock(&pa->pa_lock);
- list_del_rcu(&pa->pa_inode_list);
+ list_del_rcu(&pa->pa_node.inode_list);
list_add(&pa->u.pa_tmp_list, &list);
needed--;
continue;
@@ -5329,7 +5341,7 @@ ext4_mb_discard_lg_preallocations(struct super_block *sb,

spin_lock(&lg->lg_prealloc_lock);
list_for_each_entry_rcu(pa, &lg->lg_prealloc_list[order],
- pa_inode_list,
+ pa_node.lg_list,
lockdep_is_held(&lg->lg_prealloc_lock)) {
spin_lock(&pa->pa_lock);
if (atomic_read(&pa->pa_count)) {
@@ -5352,7 +5364,7 @@ ext4_mb_discard_lg_preallocations(struct super_block *sb,
ext4_mb_mark_pa_deleted(sb, pa);
spin_unlock(&pa->pa_lock);

- list_del_rcu(&pa->pa_inode_list);
+ list_del_rcu(&pa->pa_node.lg_list);
list_add(&pa->u.pa_tmp_list, &discard_list);

total_entries--;
@@ -5413,7 +5425,7 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
/* Add the prealloc space to lg */
spin_lock(&lg->lg_prealloc_lock);
list_for_each_entry_rcu(tmp_pa, &lg->lg_prealloc_list[order],
- pa_inode_list,
+ pa_node.lg_list,
lockdep_is_held(&lg->lg_prealloc_lock)) {
spin_lock(&tmp_pa->pa_lock);
if (tmp_pa->pa_deleted) {
@@ -5422,8 +5434,8 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
}
if (!added && pa->pa_free < tmp_pa->pa_free) {
/* Add to the tail of the previous entry */
- list_add_tail_rcu(&pa->pa_inode_list,
- &tmp_pa->pa_inode_list);
+ list_add_tail_rcu(&pa->pa_node.lg_list,
+ &tmp_pa->pa_node.lg_list);
added = 1;
/*
* we want to count the total
@@ -5434,7 +5446,7 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
lg_prealloc_count++;
}
if (!added)
- list_add_tail_rcu(&pa->pa_inode_list,
+ list_add_tail_rcu(&pa->pa_node.lg_list,
&lg->lg_prealloc_list[order]);
spin_unlock(&lg->lg_prealloc_lock);

@@ -5490,9 +5502,9 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
* doesn't grow big.
*/
if (likely(pa->pa_free)) {
- spin_lock(pa->pa_obj_lock);
- list_del_rcu(&pa->pa_inode_list);
- spin_unlock(pa->pa_obj_lock);
+ spin_lock(pa->pa_node_lock.lg_lock);
+ list_del_rcu(&pa->pa_node.lg_list);
+ spin_unlock(pa->pa_node_lock.lg_lock);
ext4_mb_add_n_trim(ac);
}
}
@@ -5502,9 +5514,9 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
* treat per-inode prealloc list as a lru list, then try
* to trim the least recently used PA.
*/
- spin_lock(pa->pa_obj_lock);
- list_move(&pa->pa_inode_list, &ei->i_prealloc_list);
- spin_unlock(pa->pa_obj_lock);
+ spin_lock(pa->pa_node_lock.inode_lock);
+ list_move(&pa->pa_node.inode_list, &ei->i_prealloc_list);
+ spin_unlock(pa->pa_node_lock.inode_lock);
}

ext4_mb_put_pa(ac, ac->ac_sb, pa);
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index dcda2a943cee..398a6688c341 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -114,7 +114,10 @@ struct ext4_free_data {
};

struct ext4_prealloc_space {
- struct list_head pa_inode_list;
+ union {
+ struct list_head inode_list; /* for inode PAs */
+ struct list_head lg_list; /* for lg PAs */
+ } pa_node;
struct list_head pa_group_list;
union {
struct list_head pa_tmp_list;
@@ -128,7 +131,10 @@ struct ext4_prealloc_space {
ext4_grpblk_t pa_len; /* len of preallocated chunk */
ext4_grpblk_t pa_free; /* how many blocks are free */
unsigned short pa_type; /* pa type. inode or group */
- spinlock_t *pa_obj_lock;
+ union {
+ spinlock_t *inode_lock; /* locks the inode list holding this PA */
+ spinlock_t *lg_lock; /* locks the lg list holding this PA */
+ } pa_node_lock;
struct inode *pa_inode; /* hack, for history only */
};

--
2.31.1

2023-01-16 08:04:49

by Ojaswin Mujoo

Subject: [PATCH v3 8/8] ext4: Remove the logic to trim inode PAs

Earlier, inode PAs were stored in a linked list. This caused a need to
periodically trim the list down in order to avoid growing it to a very
large size, as this would severely affect performance during list
iteration.

Recent patches changed this list to an rbtree, and since the tree scales
up much better, we no longer need the trim functionality; hence, remove it.

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
Documentation/admin-guide/ext4.rst | 3 ---
fs/ext4/ext4.h | 1 -
fs/ext4/mballoc.c | 20 --------------------
fs/ext4/mballoc.h | 5 -----
fs/ext4/sysfs.c | 2 --
5 files changed, 31 deletions(-)

diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
index 4c559e08d11e..5740d85439ff 100644
--- a/Documentation/admin-guide/ext4.rst
+++ b/Documentation/admin-guide/ext4.rst
@@ -489,9 +489,6 @@ Files in /sys/fs/ext4/<devname>:
multiple of this tuning parameter if the stripe size is not set in the
ext4 superblock

- mb_max_inode_prealloc
- The maximum length of per-inode ext4_prealloc_space list.
-
mb_max_to_scan
The maximum number of extents the multiblock allocator will search to
find the best extent.
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index fad5f087e4c6..d2869ad7d885 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1612,7 +1612,6 @@ struct ext4_sb_info {
unsigned int s_mb_stats;
unsigned int s_mb_order2_reqs;
unsigned int s_mb_group_prealloc;
- unsigned int s_mb_max_inode_prealloc;
unsigned int s_max_dir_size_kb;
/* where last allocation was done - for stream allocation */
unsigned long s_mb_last_group;
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 85598079b7ce..273a98bcaa0d 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3419,7 +3419,6 @@ int ext4_mb_init(struct super_block *sb)
sbi->s_mb_stats = MB_DEFAULT_STATS;
sbi->s_mb_stream_request = MB_DEFAULT_STREAM_THRESHOLD;
sbi->s_mb_order2_reqs = MB_DEFAULT_ORDER2_REQS;
- sbi->s_mb_max_inode_prealloc = MB_DEFAULT_MAX_INODE_PREALLOC;
/*
* The default group preallocation is 512, which for 4k block
* sizes translates to 2 megabytes. However for bigalloc file
@@ -5583,29 +5582,11 @@ static void ext4_mb_add_n_trim(struct ext4_allocation_context *ac)
return ;
}

-/*
- * if per-inode prealloc list is too long, trim some PA
- */
-static void ext4_mb_trim_inode_pa(struct inode *inode)
-{
- struct ext4_inode_info *ei = EXT4_I(inode);
- struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
- int count, delta;
-
- count = atomic_read(&ei->i_prealloc_active);
- delta = (sbi->s_mb_max_inode_prealloc >> 2) + 1;
- if (count > sbi->s_mb_max_inode_prealloc + delta) {
- count -= sbi->s_mb_max_inode_prealloc;
- ext4_discard_preallocations(inode, count);
- }
-}
-
/*
* release all resource we used in allocation
*/
static int ext4_mb_release_context(struct ext4_allocation_context *ac)
{
- struct inode *inode = ac->ac_inode;
struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
struct ext4_prealloc_space *pa = ac->ac_pa;
if (pa) {
@@ -5641,7 +5622,6 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
if (ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC)
mutex_unlock(&ac->ac_lg->lg_mutex);
ext4_mb_collect_stats(ac);
- ext4_mb_trim_inode_pa(inode);
return 0;
}

diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index f8e8ee493867..6d85ee8674a6 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -73,11 +73,6 @@
*/
#define MB_DEFAULT_GROUP_PREALLOC 512

-/*
- * maximum length of inode prealloc list
- */
-#define MB_DEFAULT_MAX_INODE_PREALLOC 512
-
/*
* Number of groups to search linearly before performing group scanning
* optimization.
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index d233c24ea342..f0d42cf44c71 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -214,7 +214,6 @@ EXT4_RW_ATTR_SBI_UI(mb_min_to_scan, s_mb_min_to_scan);
EXT4_RW_ATTR_SBI_UI(mb_order2_req, s_mb_order2_reqs);
EXT4_RW_ATTR_SBI_UI(mb_stream_req, s_mb_stream_request);
EXT4_RW_ATTR_SBI_UI(mb_group_prealloc, s_mb_group_prealloc);
-EXT4_RW_ATTR_SBI_UI(mb_max_inode_prealloc, s_mb_max_inode_prealloc);
EXT4_RW_ATTR_SBI_UI(mb_max_linear_groups, s_mb_max_linear_groups);
EXT4_RW_ATTR_SBI_UI(extent_max_zeroout_kb, s_extent_max_zeroout_kb);
EXT4_ATTR(trigger_fs_error, 0200, trigger_test_error);
@@ -264,7 +263,6 @@ static struct attribute *ext4_attrs[] = {
ATTR_LIST(mb_order2_req),
ATTR_LIST(mb_stream_req),
ATTR_LIST(mb_group_prealloc),
- ATTR_LIST(mb_max_inode_prealloc),
ATTR_LIST(mb_max_linear_groups),
ATTR_LIST(max_writeback_mb_bump),
ATTR_LIST(extent_max_zeroout_kb),
--
2.31.1

2023-01-16 08:52:17

by Ojaswin Mujoo

Subject: [PATCH v3 4/8] ext4: Move overlap assert logic into a separate function

Abstract out the logic to double check for overlaps in
ext4_mb_normalize_request() into a separate function. Since there have
been no reports in the past of overlaps actually hitting this BUG_ON(),
in the future we can consider calling this function only under
"#ifdef AGGRESSIVE_CHECK".

There are no functional changes in this patch.
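
If that route is taken, the call site could look something like this
(hypothetical, not part of this patch):

#ifdef AGGRESSIVE_CHECK
        /* only double-check for PA overlaps on debug builds */
        ext4_mb_pa_assert_overlap(ac, start, end);
#endif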

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/ext4/mballoc.c | 36 ++++++++++++++++++++++++------------
1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index f4e699bce99f..1628b008a096 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3984,6 +3984,29 @@ static void ext4_mb_normalize_group_request(struct ext4_allocation_context *ac)
mb_debug(sb, "goal %u blocks for locality group\n", ac->ac_g_ex.fe_len);
}

+static inline void
+ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
+ ext4_lblk_t start, ext4_lblk_t end)
+{
+ struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+ struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
+ struct ext4_prealloc_space *tmp_pa;
+ ext4_lblk_t tmp_pa_start, tmp_pa_end;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
+ spin_lock(&tmp_pa->pa_lock);
+ if (tmp_pa->pa_deleted == 0) {
+ tmp_pa_start = tmp_pa->pa_lstart;
+ tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
+
+ BUG_ON(!(start >= tmp_pa_end || end <= tmp_pa_start));
+ }
+ spin_unlock(&tmp_pa->pa_lock);
+ }
+ rcu_read_unlock();
+}
+
/*
* Normalization means making request better in terms of
* size and alignment
@@ -4140,18 +4163,7 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
size = end - start;

/* XXX: extra loop to check we really don't overlap preallocations */
- rcu_read_lock();
- list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_inode_list) {
- spin_lock(&tmp_pa->pa_lock);
- if (tmp_pa->pa_deleted == 0) {
- tmp_pa_start = tmp_pa->pa_lstart;
- tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
-
- BUG_ON(!(start >= tmp_pa_end || end <= tmp_pa_start));
- }
- spin_unlock(&tmp_pa->pa_lock);
- }
- rcu_read_unlock();
+ ext4_mb_pa_assert_overlap(ac, start, end);

/*
* In this function "start" and "size" are normalized for better
--
2.31.1

2023-01-16 12:34:30

by Jan Kara

Subject: Re: [PATCH v3 7/8] ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list

On Mon 16-01-23 13:32:15, Ojaswin Mujoo wrote:
> Currently, the kernel uses i_prealloc_list to hold all the inode
> preallocations. This is known to cause degradation in performance in
> workloads which perform large number of sparse writes on a single file.
> This is mainly because functions like ext4_mb_normalize_request() and
> ext4_mb_use_preallocated() iterate over this complete list, resulting in
> slowdowns when large number of PAs are present.
>
> Patch 27bc446e2 partially fixed this by enforcing a limit of 512 for
> the inode preallocation list and adding logic to continually trim the
> list if it grows above the threshold, however our testing revealed that
> a hardcoded value is not suitable for all kinds of workloads.
>
> To optimize this, add an rbtree to the inode and hold the inode
> preallocations in this rbtree. This will make iterating over inode PAs
> faster and scale much better than a linked list. Additionally, we also
> had to remove the LRU logic that was added for trimming the list
> (in ext4_mb_release_context()) as it would add extra overhead with an
> rbtree. The discards now happen in lowest-logical-offset-first order.
>
> ** Locking notes **
>
> With the introduction of the rbtree to maintain inode PAs, we can't use
> RCU to walk the tree during a search since that can result in partial
> traversals which might miss some nodes (or entire subtrees) while
> discards happen in parallel (which happen under a lock). Hence this
> patch converts the ei->i_prealloc_lock spin_lock to an rwlock.
>
> Almost all the codepaths that read/modify the PA rbtree are protected
> by the higher level inode->i_data_sem (except
> ext4_mb_discard_group_preallocations() and ext4_clear_inode()). IIUC,
> the only place we need lock protection is when one thread is
> "searching" the PA rbtree (earlier protected under rcu_read_lock())
> while another is "deleting" PAs in
> ext4_mb_discard_group_preallocations() (which iterates over all the PAs
> using grp->bb_prealloc_list and deletes PAs from the tree without
> taking any inode lock (i_data_sem)).
>
> So, this patch converts all rcu_read_lock/unlock() paths for the inode
> PA list to use read_lock(), and all places where we were using the
> ei->i_prealloc_lock spinlock now use write_lock().
>
> Note that this makes the fast path (searching for the right PA, e.g. in
> ext4_mb_use_preallocated() or ext4_mb_normalize_request()) use
> read_lock() instead of rcu_read_lock/unlock(). The fast path can now
> also block on the slow discard path
> (ext4_mb_discard_group_preallocations()), which uses write_lock().
>
> But this is not as bad as it looks. This is because -
>
> 1. The slow path only occurs when the normal allocation fails and we
> can say that we are low on disk space. One can argue this scenario
> won't be very frequent.
>
> 2. ext4_mb_discard_group_preallocations() locks and unlocks the rwlock
> for deleting every individual PA. This gives enough opportunity for
> the fast path to acquire the read_lock for searching the PA inode
> list.
>
> ** Design changes around deleted PAs **
>
> In ext4_mb_pa_adjust_overlap(), it is possible for an allocating thread
> to come across a PA that is marked deleted via
> ext4_mb_discard_group_preallocations() but not yet removed from the
> inode PA rbtree. In such a case, we ignore any overlaps with this PA
> node and simply move forward in the rbtree by comparing the logical
> start of the to-be-inserted PA with that of the deleted PA node.
>
> Although this usually doesn't cause an issue and we can move forward in
> the tree, in one special case, i.e. if the range of the deleted PA lies
> completely inside the normalized range, we might need to traverse both
> the sub-trees under such a deleted PA.
>
> To simplify this special scenario and also as an optimization, undelete
> the PA if the allocating thread determines that this PA might be needed
> in the near future. This results in the following changes:
>
> - ext4_mb_use_preallocated(): Use a deleted PA if the original request
> lies within it.
> - ext4_mb_pa_adjust_overlap(): Undelete a PA if it is deleted but there
> is an overlap with the normalized range.
> - ext4_mb_discard_group_preallocations(): Roll back the delete
> operation if the allocation path undeletes a PA before it is erased
> from the inode PA rbtree.
>
> Since this covers the special case we discussed above, we will always
> un-delete the PA when we encounter the special case and we can then
> adjust for overlap and traverse the PA rbtree without any issues.
>
> Signed-off-by: Ojaswin Mujoo <[email protected]>
> Reviewed-by: Ritesh Harjani (IBM) <[email protected]>

So I find this putting back of already deleted inode PA very fragile. For
example, in the current code I suspect you've missed a case in
ext4_mb_put_pa() which can mark an inode PA as deleted (so it can then be
spotted by ext4_mb_pa_adjust_overlap() and marked as in use again) while
ext4_mb_put_pa() still goes on and destroys the PA.

So I'd really love to stay with the invariant that once a PA is marked as
deleted, it can never go alive again. Since finding such a deleted PA that
overlaps our new range should be really rare, can't we just make
ext4_mb_pa_adjust_overlap() rb_erase() this deleted PA and restart the
adjustment search? Since rb_erase() is not safe to be called twice, we'd
have to record somewhere in the PA that the erasure has already happened
(probably we could have two flags in the 'deleted' field - deleted from
group list, deleted from object (lg_list/inode_node)) but that is still
much more robust...
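
A rough sketch of what that two-flag state could look like (the flag
names below are made up for illustration and don't exist in the current
code):

	/* hypothetical split of the PA 'deleted' state */
	#define PA_DELETED_GROUP	0x1	/* off grp->bb_prealloc_list */
	#define PA_DELETED_OBJECT	0x2	/* off lg_list/inode_node */

	/* erase from the inode rbtree at most once */
	if (!(pa->pa_deleted & PA_DELETED_OBJECT)) {
		rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
		pa->pa_deleted |= PA_DELETED_OBJECT;
	}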

Honza

> ---
> fs/ext4/ext4.h | 4 +-
> fs/ext4/mballoc.c | 239 ++++++++++++++++++++++++++++++++++------------
> fs/ext4/mballoc.h | 6 +-
> fs/ext4/super.c | 4 +-
> 4 files changed, 183 insertions(+), 70 deletions(-)
>
> diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
> index 140e1eb300d1..fad5f087e4c6 100644
> --- a/fs/ext4/ext4.h
> +++ b/fs/ext4/ext4.h
> @@ -1120,8 +1120,8 @@ struct ext4_inode_info {
>
> /* mballoc */
> atomic_t i_prealloc_active;
> - struct list_head i_prealloc_list;
> - spinlock_t i_prealloc_lock;
> + struct rb_root i_prealloc_node;
> + rwlock_t i_prealloc_lock;
>
> /* extents status tree */
> struct ext4_es_tree i_es_tree;
> diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
> index 53c4efd34d1c..85598079b7ce 100644
> --- a/fs/ext4/mballoc.c
> +++ b/fs/ext4/mballoc.c
> @@ -3984,6 +3984,44 @@ static void ext4_mb_normalize_group_request(struct ext4_allocation_context *ac)
> mb_debug(sb, "goal %u blocks for locality group\n", ac->ac_g_ex.fe_len);
> }
>
> +static void ext4_mb_mark_pa_inuse(struct super_block *sb,
> + struct ext4_prealloc_space *pa)
> +{
> + struct ext4_inode_info *ei;
> +
> + if (pa->pa_deleted == 0) {
> + ext4_warning(
> + sb, "pa already inuse, type:%d, pblk:%llu, lblk:%u, len:%d\n",
> + pa->pa_type, pa->pa_pstart, pa->pa_lstart, pa->pa_len);
> + return;
> + }
> +
> + pa->pa_deleted = 0;
> +
> + if (pa->pa_type == MB_INODE_PA) {
> + ei = EXT4_I(pa->pa_inode);
> + atomic_inc(&ei->i_prealloc_active);
> + }
> +}
> +
> +/*
> + * This function returns the next element to look at during inode
> + * PA rbtree walk. We assume that we have held the inode PA rbtree lock
> + * (ei->i_prealloc_lock)
> + *
> + * new_start The start of the range we want to compare
> + * cur_start The existing start that we are comparing against
> + * node The node of the rb_tree
> + */
> +static inline struct rb_node*
> +ext4_mb_pa_rb_next_iter(ext4_lblk_t new_start, ext4_lblk_t cur_start, struct rb_node *node)
> +{
> + if (new_start < cur_start)
> + return node->rb_left;
> + else
> + return node->rb_right;
> +}
> +
> static inline void
> ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
> ext4_lblk_t start, ext4_lblk_t end)
> @@ -3992,27 +4030,31 @@ ext4_mb_pa_assert_overlap(struct ext4_allocation_context *ac,
> struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
> struct ext4_prealloc_space *tmp_pa;
> ext4_lblk_t tmp_pa_start, tmp_pa_end;
> + struct rb_node *iter;
>
> - rcu_read_lock();
> - list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
> - spin_lock(&tmp_pa->pa_lock);
> - if (tmp_pa->pa_deleted == 0) {
> - tmp_pa_start = tmp_pa->pa_lstart;
> - tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
> + read_lock(&ei->i_prealloc_lock);
> + iter = ei->i_prealloc_node.rb_node;
> + while (iter) {
> + tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
> + pa_node.inode_node);
> + tmp_pa_start = tmp_pa->pa_lstart;
> + tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
>
> + spin_lock(&tmp_pa->pa_lock);
> + if (tmp_pa->pa_deleted == 0)
> BUG_ON(!(start >= tmp_pa_end || end <= tmp_pa_start));
> - }
> spin_unlock(&tmp_pa->pa_lock);
> +
> + iter = ext4_mb_pa_rb_next_iter(start, tmp_pa_start, iter);
> }
> - rcu_read_unlock();
> + read_unlock(&ei->i_prealloc_lock);
> }
> -
> /*
> * Given an allocation context "ac" and a range "start", "end", check
> * and adjust boundaries if the range overlaps with any of the existing
> * preallocatoins stored in the corresponding inode of the allocation context.
> *
> - *Parameters:
> + * Parameters:
> * ac allocation context
> * start start of the new range
> * end end of the new range
> @@ -4024,6 +4066,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
> struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
> struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
> struct ext4_prealloc_space *tmp_pa;
> + struct rb_node *iter;
> ext4_lblk_t new_start, new_end;
> ext4_lblk_t tmp_pa_start, tmp_pa_end;
>
> @@ -4031,25 +4074,40 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
> new_end = *end;
>
> /* check we don't cross already preallocated blocks */
> - rcu_read_lock();
> - list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
> - if (tmp_pa->pa_deleted)
> - continue;
> - spin_lock(&tmp_pa->pa_lock);
> - if (tmp_pa->pa_deleted) {
> - spin_unlock(&tmp_pa->pa_lock);
> - continue;
> - }
> + read_lock(&ei->i_prealloc_lock);
> + for (iter = ei->i_prealloc_node.rb_node; iter;
> + iter = ext4_mb_pa_rb_next_iter(new_start, tmp_pa_start, iter)) {
> + int is_overlap;
>
> + tmp_pa = rb_entry(iter, struct ext4_prealloc_space,
> + pa_node.inode_node);
> tmp_pa_start = tmp_pa->pa_lstart;
> tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
> + is_overlap = !(tmp_pa_start >= new_end || tmp_pa_end <= new_start);
> +
> + spin_lock(&tmp_pa->pa_lock);
> + if (tmp_pa->pa_deleted) {
> + if (is_overlap) {
> + /*
> + * Normalized range overlaps with range of this
> + * deleted PA, that means we might need it in
> + * near future. Since the PA is yet to be
> + * removed from inode PA tree and freed, lets
> + * just undelete it.
> + */
> + ext4_mb_mark_pa_inuse(ac->ac_sb, tmp_pa);
> + } else {
> + spin_unlock(&tmp_pa->pa_lock);
> + continue;
> + }
> + }
>
> /* PA must not overlap original request */
> BUG_ON(!(ac->ac_o_ex.fe_logical >= tmp_pa_end ||
> ac->ac_o_ex.fe_logical < tmp_pa_start));
>
> /* skip PAs this normalized request doesn't overlap with */
> - if (tmp_pa_start >= new_end || tmp_pa_end <= new_start) {
> + if (!is_overlap) {
> spin_unlock(&tmp_pa->pa_lock);
> continue;
> }
> @@ -4065,7 +4123,7 @@ ext4_mb_pa_adjust_overlap(struct ext4_allocation_context *ac,
> }
> spin_unlock(&tmp_pa->pa_lock);
> }
> - rcu_read_unlock();
> + read_unlock(&ei->i_prealloc_lock);
>
> /* XXX: extra loop to check we really don't overlap preallocations */
> ext4_mb_pa_assert_overlap(ac, new_start, new_end);
> @@ -4192,7 +4250,6 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
> ext4_mb_pa_adjust_overlap(ac, &start, &end);
>
> size = end - start;
> -
> /*
> * In this function "start" and "size" are normalized for better
> * alignment and length such that we could preallocate more blocks.
> @@ -4401,6 +4458,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
> struct ext4_locality_group *lg;
> struct ext4_prealloc_space *tmp_pa, *cpa = NULL;
> ext4_lblk_t tmp_pa_start, tmp_pa_end;
> + struct rb_node *iter;
> ext4_fsblk_t goal_block;
>
> /* only data can be preallocated */
> @@ -4408,14 +4466,17 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
> return false;
>
> /* first, try per-file preallocation */
> - rcu_read_lock();
> - list_for_each_entry_rcu(tmp_pa, &ei->i_prealloc_list, pa_node.inode_list) {
> + read_lock(&ei->i_prealloc_lock);
> + for (iter = ei->i_prealloc_node.rb_node; iter;
> + iter = ext4_mb_pa_rb_next_iter(ac->ac_o_ex.fe_logical, tmp_pa_start, iter)) {
> + tmp_pa = rb_entry(iter, struct ext4_prealloc_space, pa_node.inode_node);
>
> /* all fields in this condition don't change,
> * so we can skip locking for them */
> tmp_pa_start = tmp_pa->pa_lstart;
> tmp_pa_end = tmp_pa->pa_lstart + EXT4_C2B(sbi, tmp_pa->pa_len);
>
> + /* original request start doesn't lie in this PA */
> if (ac->ac_o_ex.fe_logical < tmp_pa_start ||
> ac->ac_o_ex.fe_logical >= tmp_pa_end)
> continue;
> @@ -4433,17 +4494,25 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
>
> /* found preallocated blocks, use them */
> spin_lock(&tmp_pa->pa_lock);
> - if (tmp_pa->pa_deleted == 0 && tmp_pa->pa_free) {
> + if (tmp_pa->pa_free) {
> + if (tmp_pa->pa_deleted == 1) {
> + /*
> + * Since PA is yet to be deleted from inode PA
> + * rbtree, just undelete it and use it.
> + */
> + ext4_mb_mark_pa_inuse(ac->ac_sb, tmp_pa);
> + }
> +
> atomic_inc(&tmp_pa->pa_count);
> ext4_mb_use_inode_pa(ac, tmp_pa);
> spin_unlock(&tmp_pa->pa_lock);
> ac->ac_criteria = 10;
> - rcu_read_unlock();
> + read_unlock(&ei->i_prealloc_lock);
> return true;
> }
> spin_unlock(&tmp_pa->pa_lock);
> }
> - rcu_read_unlock();
> + read_unlock(&ei->i_prealloc_lock);
>
> /* can we use group allocation? */
> if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
> @@ -4596,6 +4665,7 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
> {
> ext4_group_t grp;
> ext4_fsblk_t grp_blk;
> + struct ext4_inode_info *ei = EXT4_I(ac->ac_inode);
>
> /* in this short window concurrent discard can set pa_deleted */
> spin_lock(&pa->pa_lock);
> @@ -4641,16 +4711,41 @@ static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
> ext4_unlock_group(sb, grp);
>
> if (pa->pa_type == MB_INODE_PA) {
> - spin_lock(pa->pa_node_lock.inode_lock);
> - list_del_rcu(&pa->pa_node.inode_list);
> - spin_unlock(pa->pa_node_lock.inode_lock);
> + write_lock(pa->pa_node_lock.inode_lock);
> + rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
> + write_unlock(pa->pa_node_lock.inode_lock);
> + ext4_mb_pa_free(pa);
> } else {
> spin_lock(pa->pa_node_lock.lg_lock);
> list_del_rcu(&pa->pa_node.lg_list);
> spin_unlock(pa->pa_node_lock.lg_lock);
> + call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
> }
> +}
>
> - call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
> +static void ext4_mb_pa_rb_insert(struct rb_root *root, struct rb_node *new)
> +{
> + struct rb_node **iter = &root->rb_node, *parent = NULL;
> + struct ext4_prealloc_space *iter_pa, *new_pa;
> + ext4_lblk_t iter_start, new_start;
> +
> + while (*iter) {
> + iter_pa = rb_entry(*iter, struct ext4_prealloc_space,
> + pa_node.inode_node);
> + new_pa = rb_entry(new, struct ext4_prealloc_space,
> + pa_node.inode_node);
> + iter_start = iter_pa->pa_lstart;
> + new_start = new_pa->pa_lstart;
> +
> + parent = *iter;
> + if (new_start < iter_start)
> + iter = &((*iter)->rb_left);
> + else
> + iter = &((*iter)->rb_right);
> + }
> +
> + rb_link_node(new, parent, iter);
> + rb_insert_color(new, root);
> }
>
> /*
> @@ -4716,7 +4811,6 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
> pa->pa_len = ac->ac_b_ex.fe_len;
> pa->pa_free = pa->pa_len;
> spin_lock_init(&pa->pa_lock);
> - INIT_LIST_HEAD(&pa->pa_node.inode_list);
> INIT_LIST_HEAD(&pa->pa_group_list);
> pa->pa_deleted = 0;
> pa->pa_type = MB_INODE_PA;
> @@ -4736,9 +4830,9 @@ ext4_mb_new_inode_pa(struct ext4_allocation_context *ac)
>
> list_add(&pa->pa_group_list, &grp->bb_prealloc_list);
>
> - spin_lock(pa->pa_node_lock.inode_lock);
> - list_add_rcu(&pa->pa_node.inode_list, &ei->i_prealloc_list);
> - spin_unlock(pa->pa_node_lock.inode_lock);
> + write_lock(pa->pa_node_lock.inode_lock);
> + ext4_mb_pa_rb_insert(&ei->i_prealloc_node, &pa->pa_node.inode_node);
> + write_unlock(pa->pa_node_lock.inode_lock);
> atomic_inc(&ei->i_prealloc_active);
> }
>
> @@ -4904,6 +4998,7 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
> struct ext4_prealloc_space *pa, *tmp;
> struct list_head list;
> struct ext4_buddy e4b;
> + struct ext4_inode_info *ei;
> int err;
> int free = 0;
>
> @@ -4967,18 +5062,45 @@ ext4_mb_discard_group_preallocations(struct super_block *sb,
> list_del_rcu(&pa->pa_node.lg_list);
> spin_unlock(pa->pa_node_lock.lg_lock);
> } else {
> - spin_lock(pa->pa_node_lock.inode_lock);
> - list_del_rcu(&pa->pa_node.inode_list);
> - spin_unlock(pa->pa_node_lock.inode_lock);
> + /*
> + * The allocation path might undelete a PA
> + * incase it expects it to be used again in near
> + * future. In that case, rollback the ongoing delete
> + * operation and don't remove the PA from inode PA
> + * tree.
> + *
> + * TODO: See if we need pa_lock since there might no
> + * path that contends with it once the rbtree writelock
> + * is taken.
> + */
> + write_lock(pa->pa_node_lock.inode_lock);
> + spin_lock(&pa->pa_lock);
> + if (pa->pa_deleted == 0) {
> + free -= pa->pa_free;
> + list_add(&pa->pa_group_list,
> + &grp->bb_prealloc_list);
> + list_del(&pa->u.pa_tmp_list);
> +
> + spin_unlock(&pa->pa_lock);
> + write_unlock(pa->pa_node_lock.inode_lock);
> + continue;
> + }
> + spin_unlock(&pa->pa_lock);
> +
> + ei = EXT4_I(pa->pa_inode);
> + rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
> + write_unlock(pa->pa_node_lock.inode_lock);
> }
>
> - if (pa->pa_type == MB_GROUP_PA)
> + list_del(&pa->u.pa_tmp_list);
> +
> + if (pa->pa_type == MB_GROUP_PA) {
> ext4_mb_release_group_pa(&e4b, pa);
> - else
> + call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
> + } else {
> ext4_mb_release_inode_pa(&e4b, bitmap_bh, pa);
> -
> - list_del(&pa->u.pa_tmp_list);
> - call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
> + ext4_mb_pa_free(pa);
> + }
> }
>
> ext4_unlock_group(sb, group);
> @@ -5008,6 +5130,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
> ext4_group_t group = 0;
> struct list_head list;
> struct ext4_buddy e4b;
> + struct rb_node *iter;
> int err;
>
> if (!S_ISREG(inode->i_mode)) {
> @@ -5030,17 +5153,18 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
>
> repeat:
> /* first, collect all pa's in the inode */
> - spin_lock(&ei->i_prealloc_lock);
> - while (!list_empty(&ei->i_prealloc_list) && needed) {
> - pa = list_entry(ei->i_prealloc_list.prev,
> - struct ext4_prealloc_space, pa_node.inode_list);
> + write_lock(&ei->i_prealloc_lock);
> + for (iter = rb_first(&ei->i_prealloc_node); iter && needed; iter = rb_next(iter)) {
> + pa = rb_entry(iter, struct ext4_prealloc_space,
> + pa_node.inode_node);
> BUG_ON(pa->pa_node_lock.inode_lock != &ei->i_prealloc_lock);
> +
> spin_lock(&pa->pa_lock);
> if (atomic_read(&pa->pa_count)) {
> /* this shouldn't happen often - nobody should
> * use preallocation while we're discarding it */
> spin_unlock(&pa->pa_lock);
> - spin_unlock(&ei->i_prealloc_lock);
> + write_unlock(&ei->i_prealloc_lock);
> ext4_msg(sb, KERN_ERR,
> "uh-oh! used pa while discarding");
> WARN_ON(1);
> @@ -5051,7 +5175,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
> if (pa->pa_deleted == 0) {
> ext4_mb_mark_pa_deleted(sb, pa);
> spin_unlock(&pa->pa_lock);
> - list_del_rcu(&pa->pa_node.inode_list);
> + rb_erase(&pa->pa_node.inode_node, &ei->i_prealloc_node);
> list_add(&pa->u.pa_tmp_list, &list);
> needed--;
> continue;
> @@ -5059,7 +5183,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
>
> /* someone is deleting pa right now */
> spin_unlock(&pa->pa_lock);
> - spin_unlock(&ei->i_prealloc_lock);
> + write_unlock(&ei->i_prealloc_lock);
>
> /* we have to wait here because pa_deleted
> * doesn't mean pa is already unlinked from
> @@ -5076,7 +5200,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
> schedule_timeout_uninterruptible(HZ);
> goto repeat;
> }
> - spin_unlock(&ei->i_prealloc_lock);
> + write_unlock(&ei->i_prealloc_lock);
>
> list_for_each_entry_safe(pa, tmp, &list, u.pa_tmp_list) {
> BUG_ON(pa->pa_type != MB_INODE_PA);
> @@ -5108,7 +5232,7 @@ void ext4_discard_preallocations(struct inode *inode, unsigned int needed)
> put_bh(bitmap_bh);
>
> list_del(&pa->u.pa_tmp_list);
> - call_rcu(&(pa)->u.pa_rcu, ext4_mb_pa_callback);
> + ext4_mb_pa_free(pa);
> }
> }
>
> @@ -5482,7 +5606,6 @@ static void ext4_mb_trim_inode_pa(struct inode *inode)
> static int ext4_mb_release_context(struct ext4_allocation_context *ac)
> {
> struct inode *inode = ac->ac_inode;
> - struct ext4_inode_info *ei = EXT4_I(inode);
> struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
> struct ext4_prealloc_space *pa = ac->ac_pa;
> if (pa) {
> @@ -5509,16 +5632,6 @@ static int ext4_mb_release_context(struct ext4_allocation_context *ac)
> }
> }
>
> - if (pa->pa_type == MB_INODE_PA) {
> - /*
> - * treat per-inode prealloc list as a lru list, then try
> - * to trim the least recently used PA.
> - */
> - spin_lock(pa->pa_node_lock.inode_lock);
> - list_move(&pa->pa_node.inode_list, &ei->i_prealloc_list);
> - spin_unlock(pa->pa_node_lock.inode_lock);
> - }
> -
> ext4_mb_put_pa(ac, ac->ac_sb, pa);
> }
> if (ac->ac_bitmap_page)
> diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
> index 398a6688c341..f8e8ee493867 100644
> --- a/fs/ext4/mballoc.h
> +++ b/fs/ext4/mballoc.h
> @@ -115,7 +115,7 @@ struct ext4_free_data {
>
> struct ext4_prealloc_space {
> union {
> - struct list_head inode_list; /* for inode PAs */
> + struct rb_node inode_node; /* for inode PA rbtree */
> struct list_head lg_list; /* for lg PAs */
> } pa_node;
> struct list_head pa_group_list;
> @@ -132,10 +132,10 @@ struct ext4_prealloc_space {
> ext4_grpblk_t pa_free; /* how many blocks are free */
> unsigned short pa_type; /* pa type. inode or group */
> union {
> - spinlock_t *inode_lock; /* locks the inode list holding this PA */
> + rwlock_t *inode_lock; /* locks the rbtree holding this PA */
> spinlock_t *lg_lock; /* locks the lg list holding this PA */
> } pa_node_lock;
> - struct inode *pa_inode; /* hack, for history only */
> + struct inode *pa_inode; /* used to get the inode during group discard */
> };
>
> enum {
> diff --git a/fs/ext4/super.c b/fs/ext4/super.c
> index 72ead3b56706..5fb3e401de6b 100644
> --- a/fs/ext4/super.c
> +++ b/fs/ext4/super.c
> @@ -1325,9 +1325,9 @@ static struct inode *ext4_alloc_inode(struct super_block *sb)
> inode_set_iversion(&ei->vfs_inode, 1);
> ei->i_flags = 0;
> spin_lock_init(&ei->i_raw_lock);
> - INIT_LIST_HEAD(&ei->i_prealloc_list);
> + ei->i_prealloc_node = RB_ROOT;
> atomic_set(&ei->i_prealloc_active, 0);
> - spin_lock_init(&ei->i_prealloc_lock);
> + rwlock_init(&ei->i_prealloc_lock);
> ext4_es_init_tree(&ei->i_es_tree);
> rwlock_init(&ei->i_es_lock);
> INIT_LIST_HEAD(&ei->i_es_list);
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR

2023-01-17 10:35:53

by Ojaswin Mujoo

[permalink] [raw]
Subject: Re: [PATCH v3 7/8] ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list

On Mon, Jan 16, 2023 at 01:23:34PM +0100, Jan Kara wrote:
> > Since this covers the special case we discussed above, we will always
> > un-delete the PA when we encounter the special case and we can then
> > adjust for overlap and traverse the PA rbtree without any issues.
> >
> > Signed-off-by: Ojaswin Mujoo <[email protected]>
> > Reviewed-by: Ritesh Harjani (IBM) <[email protected]>

Hi Jan,
Thanks for the review, sharing some of my thoughts below.

>
> So I find this putting back of already deleted inode PA very fragile. For
> example in current code I suspect you've missed a case in ext4_mb_put_pa()
> which can mark an inode PA as deleted (so it can then be spotted by
> ext4_mb_pa_adjust_overlap() and marked as in use again) but
> ext4_mb_put_pa() still goes on and destroys the PA.

The 2 code paths that clash here are:

ext4_mb_new_blocks() -> ext4_mb_release_context() -> ext4_mb_put_pa()
ext4_mb_new_blocks() -> ext4_mb_normalize_request() -> ext4_mb_pa_adjust_overlap()

Since these are the only code paths from which these 2 functions are
called, for a given inode, access will always be serialized by the upper
level ei->i_data_sem, which is always taken when writing data blocks
using ext4_mb_new_blocks().

From my understanding of the code, I feel only
ext4_mb_discard_group_preallocations() can race against other functions
that are modifying the PA rbtree since it does not take any inode locks.

That being said, I do understand your concerns regarding the solution;
however, I'm willing to work with the community to ensure our
implementation of this undelete feature is as robust as possible. Along
with fixing the bug reported here [1], I believe that it is also a good
optimization to have, especially when the disk is nearly full and we are
seeing a lot of group discards going on.

Also, in case the deleted PA completely lies inside our new range, it is
much better to just undelete and use it rather than deleting the
existing PA and reallocating the range again. I think the advantage
would be even bigger in the ext4_mb_use_preallocated() function, where
we can just undelete and use the PA and skip the entire allocation, in
case the original range lies in a deleted PA.

>
> So I'd really love to stay with the invariant that once PA is marked as
> deleted, it can never go alive again. Since finding such deleted PA that is
> overlapping our new range should be really rare, cannot we just make
> ext4_mb_pa_adjust_overlap() rb_erase() this deleted PA and restart the
> adjustment search? Since rb_erase() is not safe to be called twice, we'd
> have to record somewhere in the PA that the erasure has already happened
> (probably we could have two flags in 'deleted' field - deleted from group
> list, deleted from object (lg_list/inode_node)) but that is still much more
> robust...
Right, this is an approach we did consider, but we felt it would be
better to take this opportunity to make the above optimization. Would
love to hear your thoughts on this.

Thanks,
Ojaswin

>
> Honza
>
> --
> Jan Kara <[email protected]>
> SUSE Labs, CR

2023-01-17 11:05:22

by Jan Kara

[permalink] [raw]
Subject: Re: [PATCH v3 7/8] ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list

On Tue 17-01-23 16:00:47, Ojaswin Mujoo wrote:
> On Mon, Jan 16, 2023 at 01:23:34PM +0100, Jan Kara wrote:
> > > Since this covers the special case we discussed above, we will always
> > > un-delete the PA when we encounter the special case and we can then
> > > adjust for overlap and traverse the PA rbtree without any issues.
> > >
> > > Signed-off-by: Ojaswin Mujoo <[email protected]>
> > > Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
>
> Hi Jan,
> Thanks for the review, sharing some of my thoughts below.
>
> >
> > So I find this putting back of already deleted inode PA very fragile. For
> > example in current code I suspect you've missed a case in ext4_mb_put_pa()
> > which can mark an inode PA as deleted (so it can then be spotted by
> > ext4_mb_pa_adjust_overlap() and marked as in use again) but
> > ext4_mb_put_pa() still goes on and destroys the PA.
>
> The 2 code paths that clash here are:
>
> ext4_mb_new_blocks() -> ext4_mb_release_context() -> ext4_mb_put_pa()
> ext4_mb_new_blocks() -> ext4_mb_normalize_request() -> ext4_mb_pa_adjust_overlap()
>
> Since these are the only code paths from which these 2 functions are
> called, for a given inode, access will always be serialized by the upper
> level ei->i_data_sem, which is always taken when writing data blocks
> using ext4_mb_new_blocks().

Indeed, inode->i_data_sem prevents the race I was afraid of.

> From my understanding of the code, I feel only
> ext4_mb_discard_group_preallocations() can race against other functions
> that are modifying the PA rbtree since it does not take any inode locks.
>
> That being said, I do understand your concerns regarding the solution,
> however I'm willing to work with the community to ensure our
> implementation of this undelete feature is as robust as possible. Along
> with fixing the bug reported here [1], I believe that it is also a good
> optimization to have especially when the disk is near full and we are
> seeing a lot of group discards going on.
>
> Also, in case the deleted PA completely lies inside our new range, it is
> much better to just undelete and use it rather than deleting the
> existing PA and reallocating the range again. I think the advantage
> would be even bigger in ext4_mb_use_preallocated() function where we can
> just undelete and use the PA and skip the entire allocation, in case
> the original range lies in a deleted PA.

Thanks for the explanation. However, I think you're optimizing the wrong thing.
We are running out of space (to run ext4_mb_discard_group_preallocations()
at all) and we allocate from an area covered by PA that we've just decided
to discard - if anything relies on performance of the filesystem in ENOSPC
conditions it has serious problems no matter what. Sure, we should deliver
the result (either ENOSPC or some block allocation) in a reasonable time
but the performance does not really matter much because all the scanning
and flushing is going to slow down everything a lot anyway. One additional
scan of the rbtree is really negligible in this case. So what we should
rather optimize for in this case is the code simplicity and maintainability
of this rare corner-case that will also likely get only a small amount of
testing. And in terms of code simplicity the delete & restart solution
seems to be much better (at least as far as I'm imagining it - maybe the
code will prove me wrong ;)).

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2023-01-19 06:35:27

by Ojaswin Mujoo

[permalink] [raw]
Subject: Re: [PATCH v3 7/8] ext4: Use rbtrees to manage PAs instead of inode i_prealloc_list

On Tue, Jan 17, 2023 at 12:03:35PM +0100, Jan Kara wrote:
> On Tue 17-01-23 16:00:47, Ojaswin Mujoo wrote:
> > On Mon, Jan 16, 2023 at 01:23:34PM +0100, Jan Kara wrote:
> > > > Since this covers the special case we discussed above, we will always
> > > > un-delete the PA when we encounter the special case and we can then
> > > > adjust for overlap and traverse the PA rbtree without any issues.
> > > >
> > > > Signed-off-by: Ojaswin Mujoo <[email protected]>
> > > > Reviewed-by: Ritesh Harjani (IBM) <[email protected]>
> >
> > Hi Jan,
> > Thanks for the review, sharing some of my thoughts below.
> >
> > >
> > > So I find this putting back of already deleted inode PA very fragile. For
> > > example in current code I suspect you've missed a case in ext4_mb_put_pa()
> > > which can mark an inode PA as deleted (so it can then be spotted by
> > > ext4_mb_pa_adjust_overlap() and marked as in use again) but
> > > ext4_mb_put_pa() still goes on and destroys the PA.
> >
> > The 2 code paths that clash here are:
> >
> > ext4_mb_new_blocks() -> ext4_mb_release_context() -> ext4_mb_put_pa()
> > ext4_mb_new_blocks() -> ext4_mb_normalize_request() -> ext4_mb_pa_adjust_overlap()
> >
> > Since these are the only code paths from which these 2 functions are
> > called, for a given inode, access will always be serialized by the upper
> > level ei->i_data_sem, which is always taken when writing data blocks
> > using ext4_mb_new_blocks().
>
> Indeed, inode->i_data_sem prevents the race I was afraid of.
>
> > From my understanding of the code, I feel only
> > ext4_mb_discard_group_preallocations() can race against other functions
> > that are modifying the PA rbtree since it does not take any inode locks.
> >
> > That being said, I do understand your concerns regarding the solution,
> > however I'm willing to work with the community to ensure our
> > implementation of this undelete feature is as robust as possible. Along
> > with fixing the bug reported here [1], I believe that it is also a good
> > optimization to have especially when the disk is near full and we are
> > seeing a lot of group discards going on.
> >
> > Also, in case the deleted PA completely lies inside our new range, it is
> > much better to just undelete and use it rather than deleting the
> > existing PA and reallocating the range again. I think the advantage
> > would be even bigger in ext4_mb_use_preallocated() function where we can
> > just undelete and use the PA and skip the entire allocation, in case
> > the original range lies in a deleted PA.
>
> Thanks for the explanation. However, I think you're optimizing the wrong thing.
> We are running out of space (to run ext4_mb_discard_group_preallocations()
> at all) and we allocate from an area covered by PA that we've just decided
> to discard - if anything relies on performance of the filesystem in ENOSPC
> conditions it has serious problems no matter what. Sure, we should deliver
> the result (either ENOSPC or some block allocation) in a reasonable time
> but the performance does not really matter much because all the scanning
> and flushing is going to slow down everything a lot anyway. One additional
> scan of the rbtree is really negligible in this case. So what we should
> rather optimize for in this case is the code simplicity and maintainability
> of this rare corner-case that will also likely get only a small amount of
> testing. And in terms of code simplicity the delete & restart solution
> seems to be much better (at least as far as I'm imagining it - maybe the
> code will prove me wrong ;)).
Hi Jan,

So I did try out the 'rb_erase from ext4_mb_pa_adjust_overlap() and retry'
method, with an extra pa_removed flag, but the locking is getting pretty
messy. I'm not sure such a design is possible with the locks we currently
have.

Basically, the issue I'm facing is that we have to drop the read locks
and acquire the write locks in ext4_mb_pa_adjust_overlap(), which looks
something like this:

/* drop both locks, innermost first, to respect lock ordering */
spin_unlock(&tmp_pa->pa_lock);
read_unlock(&ei->i_prealloc_lock);

/* window here: the discard path can free tmp_pa behind our back */
write_lock(&ei->i_prealloc_lock);
spin_lock(&tmp_pa->pa_lock);

We have to preserve the ordering and drop both the tree and PA locks to
avoid deadlocks. With this approach, the issue is that in between
dropping and re-acquiring these locks, the group discard path can go
ahead and free the PA memory after calling rb_erase() on it, which can
result in a use-after-free in the adjust overlap path. This is because
the PA is freed without any locks in the discard path, as it assumes no
other thread holds a reference to it. That assumption was true earlier,
since our allocation path never gave up the rbtree lock, but it no
longer holds with this approach. Essentially, the concept of having two
different places where a PA can be deleted brings in additional
challenges and complexity, which might make things worse from a
maintainer's/reviewer's point of view.
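
To make the window concrete, the recheck needed after re-acquiring the
locks would look something like the sketch below (purely illustrative,
using the pa_removed flag mentioned above) - and note that even the
recheck is unsafe, since tmp_pa itself may already have been freed:

	write_lock(&ei->i_prealloc_lock);
	spin_lock(&tmp_pa->pa_lock);	/* potential use-after-free */
	if (tmp_pa->pa_removed) {
		/* discard path erased it while the locks were dropped */
		spin_unlock(&tmp_pa->pa_lock);
		write_unlock(&ei->i_prealloc_lock);
		goto retry;	/* restart the adjustment search */
	}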

After brainstorming a bit, I think there might be a few alternatives here:

1. Instead of deleting the PA in the adjust overlap thread, make it sleep
till the group discard path goes ahead and deletes/frees it. At that point
we can wake it up and retry the allocation.

* Pros: We can be sure that the PA will have been removed at the time of
retry, so we don't waste extra retries.
* Cons: Extra complexity in the code.

2. Just go for a retry in adjust overlap without doing anything. In the
ideal case, by the time we start retrying, the PA might already have been
removed. Worst case: we keep looping again and again since the discard
path has not deleted it yet. (A minimal sketch of this follows the list.)

* Pros: Simplest approach, code remains straightforward.
* Cons: We can end up uselessly retrying if the discard path doesn't
delete the PA fast enough.

3. The approach of undeleting the PA (proposed in this patchset) that we've already discussed.
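
For option 2, the retry could look something like the sketch below
(purely illustrative, reusing the repeat-style loop that
ext4_discard_preallocations() already uses for a similar wait):

	repeat:
		read_lock(&ei->i_prealloc_lock);
		/* ... walk the rbtree, take tmp_pa->pa_lock ... */
		if (tmp_pa->pa_deleted) {
			/* let the discard path finish removing it, rescan */
			spin_unlock(&tmp_pa->pa_lock);
			read_unlock(&ei->i_prealloc_lock);
			goto repeat;
		}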

Now, to be honest, I still prefer the undelete PA approach as it makes
more sense to me, and I think the code is simple enough as there are not
many paths that might race: mostly just adjust overlap vs group discard,
or use_preallocated vs group discard.

However, I'm still open to improving the approach you suggested to
overcome the issues discussed in this thread. For reference, I'm also
attaching in this mail the diff of the changes I made to implement the
rb_erase and retry solution. (The diff is on top of this patch series.)

Regards,
Ojaswin
>
> Honza
> --
> Jan Kara <[email protected]>
> SUSE Labs, CR


Attachments:
rberase.patch (4.66 kB)