2022-08-11 17:20:42

by Logan Gunthorpe

Subject: [PATCH 0/5] More fixups for raid5

Hey,

The first patch in this series is a fix for another race issue with
the test infrastructure.

The remaining 4 patches address Christoph's feedback on my previous
patchset, which I sent rather late in the cycle. (Sorry about that.)

This series is based on current md-next (ae0a80935d6a6).

Logan

--

David Sloan (1):
md: Flush workqueue md_rdev_misc_wq in md_alloc()

Logan Gunthorpe (4):
md/raid5: Refactor raid5_get_active_stripe()
md/raid5: Drop extern on function declarations in raid5.h
md/raid5: Cleanup prototype of raid5_get_active_stripe()
md/raid5: Don't read ->active_stripes if it's not needed

drivers/md/md.c | 1 +
drivers/md/raid5-cache.c | 3 +-
drivers/md/raid5.c | 132 ++++++++++++++++++++-------------------
drivers/md/raid5.h | 32 ++++++----
4 files changed, 90 insertions(+), 78 deletions(-)


base-commit: ae0a80935d6a65764b0db00c8b03d3807b4110a6
--
2.30.2


2022-08-11 17:37:22

by Logan Gunthorpe

Subject: [PATCH 3/5] md/raid5: Drop extern on function declarations in raid5.h

externs should not be used in function declarations, so clean those
up.
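
For reference, a minimal illustration with a made-up function (nothing
from raid5.h): at file scope a function declaration already has external
linkage by default, so the extern keyword adds no information:

  extern int frob_thing(struct thing *t);   /* redundant extern */
  int frob_thing(struct thing *t);          /* equivalent, preferred */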

Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/raid5.h | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index a5082bed83c8..4be2feb9e74a 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -803,16 +803,14 @@ raid5_get_dev_page(struct stripe_head *sh, int disk_idx)
}
#endif

-extern void md_raid5_kick_device(struct r5conf *conf);
-extern int raid5_set_cache_size(struct mddev *mddev, int size);
-extern sector_t raid5_compute_blocknr(struct stripe_head *sh, int i, int previous);
-extern void raid5_release_stripe(struct stripe_head *sh);
-extern sector_t raid5_compute_sector(struct r5conf *conf, sector_t r_sector,
- int previous, int *dd_idx,
- struct stripe_head *sh);
-extern struct stripe_head *
-raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
- bool previous, bool noblock, bool noquiesce);
-extern int raid5_calc_degraded(struct r5conf *conf);
-extern int r5c_journal_mode_set(struct mddev *mddev, int journal_mode);
+void md_raid5_kick_device(struct r5conf *conf);
+int raid5_set_cache_size(struct mddev *mddev, int size);
+sector_t raid5_compute_blocknr(struct stripe_head *sh, int i, int previous);
+void raid5_release_stripe(struct stripe_head *sh);
+sector_t raid5_compute_sector(struct r5conf *conf, sector_t r_sector,
+ int previous, int *dd_idx, struct stripe_head *sh);
+struct stripe_head *raid5_get_active_stripe(struct r5conf *conf,
+ sector_t sector, bool previous, bool noblock, bool noquiesce);
+int raid5_calc_degraded(struct r5conf *conf);
+int r5c_journal_mode_set(struct mddev *mddev, int journal_mode);
#endif
--
2.30.2

2022-08-11 17:37:33

by Logan Gunthorpe

Subject: [PATCH 5/5] md/raid5: Don't read ->active_stripes if it's not needed

The atomic_read() is not needed in many cases, so only do the read after
the initial checks are done.
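
For reference, is_inactive_blocked() after this change (reconstructed from
the hunk below) does the cheap list and bit tests first and only then reads
the counter:

  static bool is_inactive_blocked(struct r5conf *conf, int hash)
  {
          if (list_empty(conf->inactive_list + hash))
                  return false;

          if (!test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state))
                  return true;

          return (atomic_read(&conf->active_stripes) <
                  (conf->max_nr_stripes * 3 / 4));
  }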

Suggested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/raid5.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 7e7205bb40f1..0b73eeea1b19 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -789,15 +789,14 @@ struct stripe_request_ctx {
*/
static bool is_inactive_blocked(struct r5conf *conf, int hash)
{
- int active = atomic_read(&conf->active_stripes);
-
if (list_empty(conf->inactive_list + hash))
return false;

if (!test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state))
return true;

- return active < (conf->max_nr_stripes * 3 / 4);
+ return (atomic_read(&conf->active_stripes) <
+ (conf->max_nr_stripes * 3 / 4));
}

struct stripe_head *raid5_get_active_stripe(struct r5conf *conf,
--
2.30.2

2022-08-11 17:37:38

by Logan Gunthorpe

Subject: [PATCH 1/5] md: Flush workqueue md_rdev_misc_wq in md_alloc()

From: David Sloan <[email protected]>

A race condition still exists when removing and re-creating md devices
in test cases. However, it is only seen on some setups.

The race condition was tracked down to a reference to the kobject still
being held by the rdev in the md_rdev_misc_wq, which will only be released
in rdev_delayed_delete().

md_alloc() waits for previous deletions by waiting on the md_misc_wq,
but the md_rdev_misc_wq may still be holding a reference to a recently
removed device.

To fix this, also flush the md_rdev_misc_wq in md_alloc().

Signed-off-by: David Sloan <[email protected]>
[[email protected]: rewrote commit message]
Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/md.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index afaf36b2f6ab..71d221601bf8 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5620,6 +5620,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
* removed (mddev_delayed_delete).
*/
flush_workqueue(md_misc_wq);
+ flush_workqueue(md_rdev_misc_wq);

mutex_lock(&disks_mutex);
mddev = mddev_alloc(dev);
--
2.30.2

2022-08-11 17:40:35

by Logan Gunthorpe

Subject: [PATCH 4/5] md/raid5: Cleanup prototype of raid5_get_active_stripe()

Drop the three bools in the prototype of raid5_get_active_stripe()
and replace them with a flags parameter.

At the same time, fold __raid5_get_active_stripe() into
raid5_get_active_stripe(), dropping the distinction between the two.
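
For example (taken from one of the call sites below), a caller that used to
spell out the three bools as previous=1, noblock=1, noquiesce=1:

  sh_src = raid5_get_active_stripe(conf, sh->sector, 1, 1, 1);

now passes named flags instead:

  sh_src = raid5_get_active_stripe(conf, NULL, sh->sector,
                                   R5_GAS_PREVIOUS | R5_GAS_NOBLOCK |
                                   R5_GAS_NOQUIESCE);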

Suggested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/raid5-cache.c | 3 ++-
drivers/md/raid5.c | 49 +++++++++++++++++++++-------------------
drivers/md/raid5.h | 12 +++++++++-
3 files changed, 39 insertions(+), 25 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index f4e1cc1ece43..5e35a7321ded 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -1923,7 +1923,8 @@ r5c_recovery_alloc_stripe(
{
struct stripe_head *sh;

- sh = raid5_get_active_stripe(conf, stripe_sect, 0, noblock, 0);
+ sh = raid5_get_active_stripe(conf, NULL, stripe_sect,
+ noblock ? R5_GAS_NOBLOCK : 0);
if (!sh)
return NULL; /* no more stripe available */

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1288ef9e1571..7e7205bb40f1 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -800,19 +800,20 @@ static bool is_inactive_blocked(struct r5conf *conf, int hash)
return active < (conf->max_nr_stripes * 3 / 4);
}

-static struct stripe_head *__raid5_get_active_stripe(struct r5conf *conf,
+struct stripe_head *raid5_get_active_stripe(struct r5conf *conf,
struct stripe_request_ctx *ctx, sector_t sector,
- bool previous, bool noblock, bool noquiesce)
+ unsigned int flags)
{
struct stripe_head *sh;
int hash = stripe_hash_locks_hash(conf, sector);
+ int previous = !!(flags & R5_GAS_PREVIOUS);

pr_debug("get_stripe, sector %llu\n", (unsigned long long)sector);

spin_lock_irq(conf->hash_locks + hash);

for (;;) {
- if (!noquiesce && conf->quiesce) {
+ if (!(flags & R5_GAS_NOQUIESCE) && conf->quiesce) {
/*
* Must release the reference to batch_last before
* waiting, on quiesce, otherwise the batch_last will
@@ -848,7 +849,7 @@ static struct stripe_head *__raid5_get_active_stripe(struct r5conf *conf,
set_bit(R5_ALLOC_MORE, &conf->cache_state);
}

- if (noblock)
+ if (flags & R5_GAS_NOBLOCK)
break;

set_bit(R5_INACTIVE_BLOCKED, &conf->cache_state);
@@ -863,13 +864,6 @@ static struct stripe_head *__raid5_get_active_stripe(struct r5conf *conf,
return sh;
}

-struct stripe_head *raid5_get_active_stripe(struct r5conf *conf,
- sector_t sector, bool previous, bool noblock, bool noquiesce)
-{
- return __raid5_get_active_stripe(conf, NULL, sector, previous, noblock,
- noquiesce);
-}
-
static bool is_full_stripe_write(struct stripe_head *sh)
{
BUG_ON(sh->overwrite_disks > (sh->disks - sh->raid_conf->max_degraded));
@@ -4636,7 +4630,8 @@ static void handle_stripe_expansion(struct r5conf *conf, struct stripe_head *sh)
sector_t bn = raid5_compute_blocknr(sh, i, 1);
sector_t s = raid5_compute_sector(conf, bn, 0,
&dd_idx, NULL);
- sh2 = raid5_get_active_stripe(conf, s, 0, 1, 1);
+ sh2 = raid5_get_active_stripe(conf, NULL, s,
+ R5_GAS_NOBLOCK | R5_GAS_NOQUIESCE);
if (sh2 == NULL)
/* so far only the early blocks of this stripe
* have been requested. When later blocks
@@ -5273,7 +5268,9 @@ static void handle_stripe(struct stripe_head *sh)
/* Finish reconstruct operations initiated by the expansion process */
if (sh->reconstruct_state == reconstruct_state_result) {
struct stripe_head *sh_src
- = raid5_get_active_stripe(conf, sh->sector, 1, 1, 1);
+ = raid5_get_active_stripe(conf, NULL, sh->sector,
+ R5_GAS_PREVIOUS | R5_GAS_NOBLOCK |
+ R5_GAS_NOQUIESCE);
if (sh_src && test_bit(STRIPE_EXPAND_SOURCE, &sh_src->state)) {
/* sh cannot be written until sh_src has been read.
* so arrange for sh to be delayed a little
@@ -5823,7 +5820,7 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
DEFINE_WAIT(w);
int d;
again:
- sh = raid5_get_active_stripe(conf, logical_sector, 0, 0, 0);
+ sh = raid5_get_active_stripe(conf, NULL, logical_sector, 0);
prepare_to_wait(&conf->wait_for_overlap, &w,
TASK_UNINTERRUPTIBLE);
set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
@@ -5978,7 +5975,7 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
enum stripe_result ret;
struct stripe_head *sh;
sector_t new_sector;
- int previous = 0;
+ int previous = 0, flags = 0;
int seq, dd_idx;

seq = read_seqcount_begin(&conf->gen_lock);
@@ -6012,8 +6009,11 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
pr_debug("raid456: %s, sector %llu logical %llu\n", __func__,
new_sector, logical_sector);

- sh = __raid5_get_active_stripe(conf, ctx, new_sector, previous,
- (bi->bi_opf & REQ_RAHEAD), 0);
+ if (previous)
+ flags |= R5_GAS_PREVIOUS;
+ if (bi->bi_opf & REQ_RAHEAD)
+ flags |= R5_GAS_NOBLOCK;
+ sh = raid5_get_active_stripe(conf, ctx, new_sector, flags);
if (unlikely(!sh)) {
/* cannot get stripe, just give-up */
bi->bi_status = BLK_STS_IOERR;
@@ -6362,7 +6362,8 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
for (i = 0; i < reshape_sectors; i += RAID5_STRIPE_SECTORS(conf)) {
int j;
int skipped_disk = 0;
- sh = raid5_get_active_stripe(conf, stripe_addr+i, 0, 0, 1);
+ sh = raid5_get_active_stripe(conf, NULL, stripe_addr+i,
+ R5_GAS_NOQUIESCE);
set_bit(STRIPE_EXPANDING, &sh->state);
atomic_inc(&conf->reshape_stripes);
/* If any of this stripe is beyond the end of the old
@@ -6411,7 +6412,8 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
if (last_sector >= mddev->dev_sectors)
last_sector = mddev->dev_sectors - 1;
while (first_sector <= last_sector) {
- sh = raid5_get_active_stripe(conf, first_sector, 1, 0, 1);
+ sh = raid5_get_active_stripe(conf, NULL, first_sector,
+ R5_GAS_PREVIOUS | R5_GAS_NOQUIESCE);
set_bit(STRIPE_EXPAND_SOURCE, &sh->state);
set_bit(STRIPE_HANDLE, &sh->state);
raid5_release_stripe(sh);
@@ -6531,9 +6533,10 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n

md_bitmap_cond_end_sync(mddev->bitmap, sector_nr, false);

- sh = raid5_get_active_stripe(conf, sector_nr, 0, 1, 0);
+ sh = raid5_get_active_stripe(conf, NULL, sector_nr,
+ R5_GAS_NOBLOCK);
if (sh == NULL) {
- sh = raid5_get_active_stripe(conf, sector_nr, 0, 0, 0);
+ sh = raid5_get_active_stripe(conf, NULL, sector_nr, 0);
/* make sure we don't swamp the stripe cache if someone else
* is trying to get access
*/
@@ -6596,8 +6599,8 @@ static int retry_aligned_read(struct r5conf *conf, struct bio *raid_bio,
/* already done this stripe */
continue;

- sh = raid5_get_active_stripe(conf, sector, 0, 1, 1);
-
+ sh = raid5_get_active_stripe(conf, NULL, sector,
+ R5_GAS_NOBLOCK | R5_GAS_NOQUIESCE);
if (!sh) {
/* failed to get a stripe - must wait */
conf->retry_read_aligned = raid_bio;
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 4be2feb9e74a..e873938a6125 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -809,8 +809,18 @@ sector_t raid5_compute_blocknr(struct stripe_head *sh, int i, int previous);
void raid5_release_stripe(struct stripe_head *sh);
sector_t raid5_compute_sector(struct r5conf *conf, sector_t r_sector,
int previous, int *dd_idx, struct stripe_head *sh);
+
+struct stripe_request_ctx;
+/* get stripe from previous generation (when reshaping) */
+#define R5_GAS_PREVIOUS (1 << 0)
+/* do not block waiting for a free stripe */
+#define R5_GAS_NOBLOCK (1 << 1)
+/* do not block waiting for quiesce to be released */
+#define R5_GAS_NOQUIESCE (1 << 2)
struct stripe_head *raid5_get_active_stripe(struct r5conf *conf,
- sector_t sector, bool previous, bool noblock, bool noquiesce);
+ struct stripe_request_ctx *ctx, sector_t sector,
+ unsigned int flags);
+
int raid5_calc_degraded(struct r5conf *conf);
int r5c_journal_mode_set(struct mddev *mddev, int journal_mode);
#endif
--
2.30.2

2022-08-11 17:40:40

by Logan Gunthorpe

Subject: [PATCH 2/5] md/raid5: Refactor raid5_get_active_stripe()

Refactor raid5_get_active_stripe() to replace the gotos with an
explicit infinite loop and some additional nesting.
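
Schematically (with made-up helper names, only to show the shape of the
control-flow change), a goto-based retry such as:

  retry:
          sh = try_get_stripe();
          if (sh)
                  goto out;
          if (noblock)
                  goto out;
          wait_for_a_stripe();
          goto retry;
  out:
          return sh;

becomes an explicit loop:

  for (;;) {
          sh = try_get_stripe();
          if (sh)
                  break;
          if (noblock)
                  break;
          wait_for_a_stripe();
  }
  return sh;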

Suggested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Logan Gunthorpe <[email protected]>
---
drivers/md/raid5.c | 82 +++++++++++++++++++++++-----------------------
1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 4456ac51f7c5..1288ef9e1571 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -811,54 +811,54 @@ static struct stripe_head *__raid5_get_active_stripe(struct r5conf *conf,

spin_lock_irq(conf->hash_locks + hash);

-retry:
- if (!noquiesce && conf->quiesce) {
- /*
- * Must release the reference to batch_last before waiting,
- * on quiesce, otherwise the batch_last will hold a reference
- * to a stripe and raid5_quiesce() will deadlock waiting for
- * active_stripes to go to zero.
- */
- if (ctx && ctx->batch_last) {
- raid5_release_stripe(ctx->batch_last);
- ctx->batch_last = NULL;
- }
-
- wait_event_lock_irq(conf->wait_for_quiescent, !conf->quiesce,
- *(conf->hash_locks + hash));
- }
+ for (;;) {
+ if (!noquiesce && conf->quiesce) {
+ /*
+ * Must release the reference to batch_last before
+ * waiting, on quiesce, otherwise the batch_last will
+ * hold a reference to a stripe and raid5_quiesce()
+ * will deadlock waiting for active_stripes to go to
+ * zero.
+ */
+ if (ctx && ctx->batch_last) {
+ raid5_release_stripe(ctx->batch_last);
+ ctx->batch_last = NULL;
+ }

- sh = find_get_stripe(conf, sector, conf->generation - previous, hash);
- if (sh)
- goto out;
+ wait_event_lock_irq(conf->wait_for_quiescent,
+ !conf->quiesce,
+ *(conf->hash_locks + hash));
+ }

- if (test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state))
- goto wait_for_stripe;
+ sh = find_get_stripe(conf, sector, conf->generation - previous,
+ hash);
+ if (sh)
+ break;

- sh = get_free_stripe(conf, hash);
- if (sh) {
- r5c_check_stripe_cache_usage(conf);
- init_stripe(sh, sector, previous);
- atomic_inc(&sh->count);
- goto out;
- }
+ if (!test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state)) {
+ sh = get_free_stripe(conf, hash);
+ if (sh) {
+ r5c_check_stripe_cache_usage(conf);
+ init_stripe(sh, sector, previous);
+ atomic_inc(&sh->count);
+ break;
+ }

- if (!test_bit(R5_DID_ALLOC, &conf->cache_state))
- set_bit(R5_ALLOC_MORE, &conf->cache_state);
+ if (!test_bit(R5_DID_ALLOC, &conf->cache_state))
+ set_bit(R5_ALLOC_MORE, &conf->cache_state);
+ }

-wait_for_stripe:
- if (noblock)
- goto out;
+ if (noblock)
+ break;

- set_bit(R5_INACTIVE_BLOCKED, &conf->cache_state);
- r5l_wake_reclaim(conf->log, 0);
- wait_event_lock_irq(conf->wait_for_stripe,
- is_inactive_blocked(conf, hash),
- *(conf->hash_locks + hash));
- clear_bit(R5_INACTIVE_BLOCKED, &conf->cache_state);
- goto retry;
+ set_bit(R5_INACTIVE_BLOCKED, &conf->cache_state);
+ r5l_wake_reclaim(conf->log, 0);
+ wait_event_lock_irq(conf->wait_for_stripe,
+ is_inactive_blocked(conf, hash),
+ *(conf->hash_locks + hash));
+ clear_bit(R5_INACTIVE_BLOCKED, &conf->cache_state);
+ }

-out:
spin_unlock_irq(conf->hash_locks + hash);
return sh;
}
--
2.30.2

2022-08-11 18:12:33

by Song Liu

Subject: Re: [PATCH 1/5] md: Flush workqueue md_rdev_misc_wq in md_alloc()

On Thu, Aug 11, 2022 at 10:14 AM Logan Gunthorpe <[email protected]> wrote:
>
> From: David Sloan <[email protected]>
>
> A race condition still exists when removing and re-creating md devices
> in test cases. However, it is only seen on some setups.
>
> The race condition was tracked down to a reference to the kobject still
> being held by the rdev in the md_rdev_misc_wq, which will only be released
> in rdev_delayed_delete().
>
> md_alloc() waits for previous deletions by waiting on the md_misc_wq,
> but the md_rdev_misc_wq may still be holding a reference to a recently
> removed device.
>
> To fix this, also flush the md_rdev_misc_wq in md_alloc().
>
> Signed-off-by: David Sloan <[email protected]>
> [[email protected]: rewrote commit message]
> Signed-off-by: Logan Gunthorpe <[email protected]>

Applied 1/5 to md-fixes.

Thanks!
Song

2022-08-13 07:00:44

by Christoph Hellwig

Subject: Re: [PATCH 0/5] More fixups for raid5

The series looks fine to me, but all except for patch 1 really
is 6.1 material.

2022-08-29 23:30:22

by Song Liu

Subject: Re: [PATCH 2/5] md/raid5: Refactor raid5_get_active_stripe()

On Thu, Aug 11, 2022 at 10:14 AM Logan Gunthorpe <[email protected]> wrote:
>
> Refactor raid5_get_active_stripe() to replace the gotos with an
> explicit infinite loop and some additional nesting.
>
> Suggested-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Logan Gunthorpe <[email protected]>

Applied to md-next.

Thanks!
Song
