2023-06-30 09:15:34

by Ojaswin Mujoo

Subject: [PATCH v2] ext4: Replace CR_FAST macro with inline function for readability

Replace the CR_FAST macro with the ext4_mb_cr_expensive() inline function
for better readability. This function returns true if the criterion is one
of the expensive/slower ones, where lots of disk IO/prefetching is
acceptable.

No functional changes are intended in this patch.

Signed-off-by: Ojaswin Mujoo <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---

Changes since v1 [1]

* Reworded an if condition in ext4_mb_good_group_nolock() and added a
  comment to better describe the intent of the condition
* Picked up Reviewed-by tag from Jan.

[1]
https://lore.kernel.org/linux-ext4/[email protected]/T/#ma9428de15c4084937274f4d66dcd4d53f505d32a
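
For context, the cheap/expensive split follows directly from the ordering of
the criteria enum. A simplified sketch of the relevant part of fs/ext4/ext4.h
(the annotations are illustrative, not the header's actual comments):

enum criteria {
	CR_POWER2_ALIGNED,	/* cheap scan, avoid unnecessary IO */
	CR_GOAL_LEN_FAST,	/* cheap scan, avoid unnecessary IO */
	CR_BEST_AVAIL_LEN,	/* cheap scan, avoid unnecessary IO */
	CR_GOAL_LEN_SLOW,	/* expensive, disk IO/prefetching acceptable */
	CR_ANY_FREE,		/* expensive, disk IO/prefetching acceptable */
	EXT4_MB_NUM_CRS
};

so ext4_mb_cr_expensive(cr) is simply cr >= CR_GOAL_LEN_SLOW.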

fs/ext4/ext4.h | 7 ++++---
fs/ext4/mballoc.c | 13 +++++++++----
2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 45a531446ea2..e404169a2858 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -176,9 +176,6 @@ enum criteria {
EXT4_MB_NUM_CRS
};

-/* criteria below which we use fast block scanning and avoid unnecessary IO */
-#define CR_FAST CR_GOAL_LEN_SLOW
-
/*
* Flags used in mballoc's allocation_context flags field.
*
@@ -2924,6 +2921,10 @@ extern int ext4_trim_fs(struct super_block *, struct fstrim_range *);
extern void ext4_process_freed_data(struct super_block *sb, tid_t commit_tid);
extern void ext4_mb_mark_bb(struct super_block *sb, ext4_fsblk_t block,
int len, int state);
+static inline bool ext4_mb_cr_expensive(enum criteria cr)
+{
+ return cr >= CR_GOAL_LEN_SLOW;
+}

/* inode.c */
void ext4_inode_csum_set(struct inode *inode, struct ext4_inode *raw,
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index a2475b8c9fb5..542577f621c6 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2446,7 +2446,7 @@ void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
break;
}

- if (ac->ac_criteria < CR_FAST) {
+ if (!ext4_mb_cr_expensive(ac->ac_criteria)) {
/*
* In CR_GOAL_LEN_FAST and CR_BEST_AVAIL_LEN, we are
* sure that this group will have a large enough
@@ -2630,7 +2630,12 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
free = grp->bb_free;
if (free == 0)
goto out;
- if (cr <= CR_FAST && free < ac->ac_g_ex.fe_len)
+ /*
+ * In all criterias except CR_ANY_FREE we try to avoid groups that
+ * can't possibly satisfy the full goal request due to insufficient
+ * free blocks.
+ */
+ if (cr < CR_ANY_FREE && free < ac->ac_g_ex.fe_len)
goto out;
if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
goto out;
@@ -2654,7 +2659,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
* sure we locate metadata blocks in the first block group in
* the flex_bg if possible.
*/
- if (cr < CR_FAST &&
+ if (!ext4_mb_cr_expensive(cr) &&
(!sbi->s_log_groups_per_flex ||
((group & ((1 << sbi->s_log_groups_per_flex) - 1)) != 0)) &&
!(ext4_has_group_desc_csum(sb) &&
@@ -2848,7 +2853,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
* spend a lot of time loading imperfect groups
*/
if ((prefetch_grp == group) &&
- (cr >= CR_FAST ||
+ (ext4_mb_cr_expensive(cr) ||
prefetch_ios < sbi->s_mb_prefetch_limit)) {
nr = sbi->s_mb_prefetch;
if (ext4_has_feature_flex_bg(sb)) {
--
2.31.1



2023-08-02 03:04:34

by Ritesh Harjani

Subject: Re: [PATCH v2] ext4: Replace CR_FAST macro with inline function for readability

Ojaswin Mujoo <[email protected]> writes:

> Replace the CR_FAST macro with the ext4_mb_cr_expensive() inline function
> for better readability. This function returns true if the criterion is one
> of the expensive/slower ones, where lots of disk IO/prefetching is
> acceptable.
>
> No functional changes are intended in this patch.
>
> Signed-off-by: Ojaswin Mujoo <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>
> ---
>
> Changes since v1 [1]
>
> * Reworded an if condition in ext4_mb_good_group_nolock() and added a
>   comment to better describe the intent of the condition
> * Picked up Reviewed-by tag from Jan.
>
> [1]
> https://lore.kernel.org/linux-ext4/[email protected]/T/#ma9428de15c4084937274f4d66dcd4d53f505d32a
>
> fs/ext4/ext4.h | 7 ++++---
> fs/ext4/mballoc.c | 13 +++++++++----
> 2 files changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
> index 45a531446ea2..e404169a2858 100644
> --- a/fs/ext4/ext4.h
> +++ b/fs/ext4/ext4.h
> @@ -176,9 +176,6 @@ enum criteria {
> EXT4_MB_NUM_CRS
> };
>
> -/* criteria below which we use fast block scanning and avoid unnecessary IO */
> -#define CR_FAST CR_GOAL_LEN_SLOW
> -
> /*
> * Flags used in mballoc's allocation_context flags field.
> *
> @@ -2924,6 +2921,10 @@ extern int ext4_trim_fs(struct super_block *, struct fstrim_range *);
> extern void ext4_process_freed_data(struct super_block *sb, tid_t commit_tid);
> extern void ext4_mb_mark_bb(struct super_block *sb, ext4_fsblk_t block,
> int len, int state);
> +static inline bool ext4_mb_cr_expensive(enum criteria cr)
> +{
> + return cr >= CR_GOAL_LEN_SLOW;
> +}
>
> /* inode.c */
> void ext4_inode_csum_set(struct inode *inode, struct ext4_inode *raw,
> diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
> index a2475b8c9fb5..542577f621c6 100644
> --- a/fs/ext4/mballoc.c
> +++ b/fs/ext4/mballoc.c
> @@ -2446,7 +2446,7 @@ void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
> break;
> }
>
> - if (ac->ac_criteria < CR_FAST) {
> + if (!ext4_mb_cr_expensive(ac->ac_criteria)) {
> /*
> * In CR_GOAL_LEN_FAST and CR_BEST_AVAIL_LEN, we are
> * sure that this group will have a large enough
> @@ -2630,7 +2630,12 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
> free = grp->bb_free;
> if (free == 0)
> goto out;
> - if (cr <= CR_FAST && free < ac->ac_g_ex.fe_len)
> + /*
> + * In all criterias except CR_ANY_FREE we try to avoid groups that
> + * can't possibly satisfy the full goal request due to insufficient
> + * free blocks.
> + */
> + if (cr < CR_ANY_FREE && free < ac->ac_g_ex.fe_len)
> goto out;

CR_BEST_AVAIL_LEN is the criterion where we may allocate something smaller
than the original goal length, but in that case we alter the goal length
itself while selecting the criterion, so checking
free < ac->ac_g_ex.fe_len should still work here.
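
IOW, something along these lines (an illustrative fragment, not the exact
mballoc code path):

	/*
	 * Under CR_BEST_AVAIL_LEN the goal extent has already been
	 * trimmed while choosing the criteria, so the comparison below
	 * is against the reduced goal, not the original request.
	 */
	if (cr < CR_ANY_FREE && free < ac->ac_g_ex.fe_len)
		goto out;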


Looks good to me. Feel free to add -

Reviewed-by: Ritesh Harjani (IBM) <[email protected]>



> if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
> goto out;
> @@ -2654,7 +2659,7 @@ static int ext4_mb_good_group_nolock(struct ext4_allocation_context *ac,
> * sure we locate metadata blocks in the first block group in
> * the flex_bg if possible.
> */
> - if (cr < CR_FAST &&
> + if (!ext4_mb_cr_expensive(cr) &&
> (!sbi->s_log_groups_per_flex ||
> ((group & ((1 << sbi->s_log_groups_per_flex) - 1)) != 0)) &&
> !(ext4_has_group_desc_csum(sb) &&
> @@ -2848,7 +2853,7 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
> * spend a lot of time loading imperfect groups
> */
> if ((prefetch_grp == group) &&
> - (cr >= CR_FAST ||
> + (ext4_mb_cr_expensive(cr) ||
> prefetch_ios < sbi->s_mb_prefetch_limit)) {
> nr = sbi->s_mb_prefetch;
> if (ext4_has_feature_flex_bg(sb)) {
> --
> 2.31.1

-ritesh