2022-09-14 01:47:37

by Yu Kuai

Subject: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()

From: Yu Kuai <[email protected]>

Currently the nasty condition is wait_barrier() is hard to read. This
patch factor out the condition into a function.

There are no functional changes.

Signed-off-by: Yu Kuai <[email protected]>
---
drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
1 file changed, 32 insertions(+), 24 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 64d6e4cd8a3a..56458a53043d 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
 	wake_up(&conf->wait_barrier);
 }
 
+static bool stop_waiting_barrier(struct r10conf *conf)
+{
+	/* barrier is dropped */
+	if (!conf->barrier)
+		return true;
+
+	/*
+	 * If there are already pending requests (preventing the barrier from
+	 * rising completely), and the pre-process bio queue isn't empty, then
+	 * don't wait, as we need to empty that queue to get the nr_pending
+	 * count down.
+	 */
+	if (atomic_read(&conf->nr_pending)) {
+		struct bio_list *bio_list = current->bio_list;
+
+		if (bio_list && (!bio_list_empty(&bio_list[0]) ||
+				 !bio_list_empty(&bio_list[1])))
+			return true;
+	}
+
+	/* move on if recovery thread is blocked by us */
+	if (conf->mddev->thread->tsk == current &&
+	    test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
+	    conf->nr_queued > 0)
+		return true;
+
+	return false;
+}
+
 static bool wait_barrier(struct r10conf *conf, bool nowait)
 {
 	bool ret = true;
 
 	spin_lock_irq(&conf->resync_lock);
 	if (conf->barrier) {
-		struct bio_list *bio_list = current->bio_list;
-		conf->nr_waiting++;
-		/* Wait for the barrier to drop.
-		 * However if there are already pending
-		 * requests (preventing the barrier from
-		 * rising completely), and the
-		 * pre-process bio queue isn't empty,
-		 * then don't wait, as we need to empty
-		 * that queue to get the nr_pending
-		 * count down.
-		 */
 		/* Return false when nowait flag is set */
 		if (nowait) {
 			ret = false;
 		} else {
+			conf->nr_waiting++;
 			raid10_log(conf->mddev, "wait barrier");
 			wait_event_lock_irq(conf->wait_barrier,
-					    !conf->barrier ||
-					    (atomic_read(&conf->nr_pending) &&
-					     bio_list &&
-					     (!bio_list_empty(&bio_list[0]) ||
-					      !bio_list_empty(&bio_list[1]))) ||
-					     /* move on if recovery thread is
-					      * blocked by us
-					      */
-					     (conf->mddev->thread->tsk == current &&
-					      test_bit(MD_RECOVERY_RUNNING,
-						       &conf->mddev->recovery) &&
-					      conf->nr_queued > 0),
+					    stop_waiting_barrier(conf),
 					    conf->resync_lock);
+			conf->nr_waiting--;
 		}
-		conf->nr_waiting--;
 		if (!conf->nr_waiting)
 			wake_up(&conf->wait_barrier);
 	}
--
2.31.1


2022-09-14 07:02:39

by Paul Menzel

Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()

Dear Yu,


Thank you for the improved patch. Three minor nits.

On 14.09.22 at 03:49, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>

In the summary/title, I’d spell *Clean up* with a space. Maybe even use:

md/raid10: Factor out code from wait_barrier() to stop_waiting_barrier()

> Currently the nasty condition is wait_barrier() is hard to read. This
> patch factor out the condition into a function.

The first *is* above can be removed, and factor*s* needs an s.
> There are no functional changes.
>
> Signed-off-by: Yu Kuai <[email protected]>
> ---
> drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
> 1 file changed, 32 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 64d6e4cd8a3a..56458a53043d 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
> wake_up(&conf->wait_barrier);
> }
>
> +static bool stop_waiting_barrier(struct r10conf *conf)
> +{
> + /* barrier is dropped */
> + if (!conf->barrier)
> + return true;
> +
> + /*
> + * If there are already pending requests (preventing the barrier from
> + * rising completely), and the pre-process bio queue isn't empty, then
> + * don't wait, as we need to empty that queue to get the nr_pending
> + * count down.
> + */
> + if (atomic_read(&conf->nr_pending)) {
> + struct bio_list *bio_list = current->bio_list;
> +
> + if (bio_list && (!bio_list_empty(&bio_list[0]) ||
> + !bio_list_empty(&bio_list[1])))
> + return true;
> + }
> +
> + /* move on if recovery thread is blocked by us */
> + if (conf->mddev->thread->tsk == current &&
> + test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
> + conf->nr_queued > 0)
> + return true;
> +
> + return false;
> +}
> +
> static bool wait_barrier(struct r10conf *conf, bool nowait)
> {
> bool ret = true;
>
> spin_lock_irq(&conf->resync_lock);
> if (conf->barrier) {
> - struct bio_list *bio_list = current->bio_list;
> - conf->nr_waiting++;
> - /* Wait for the barrier to drop.
> - * However if there are already pending
> - * requests (preventing the barrier from
> - * rising completely), and the
> - * pre-process bio queue isn't empty,
> - * then don't wait, as we need to empty
> - * that queue to get the nr_pending
> - * count down.
> - */
> /* Return false when nowait flag is set */
> if (nowait) {
> ret = false;
> } else {
> + conf->nr_waiting++;
> raid10_log(conf->mddev, "wait barrier");
> wait_event_lock_irq(conf->wait_barrier,
> - !conf->barrier ||
> - (atomic_read(&conf->nr_pending) &&
> - bio_list &&
> - (!bio_list_empty(&bio_list[0]) ||
> - !bio_list_empty(&bio_list[1]))) ||
> - /* move on if recovery thread is
> - * blocked by us
> - */
> - (conf->mddev->thread->tsk == current &&
> - test_bit(MD_RECOVERY_RUNNING,
> - &conf->mddev->recovery) &&
> - conf->nr_queued > 0),
> + stop_waiting_barrier(conf),
> conf->resync_lock);
> + conf->nr_waiting--;
> }
> - conf->nr_waiting--;
> if (!conf->nr_waiting)
> wake_up(&conf->wait_barrier);
> }

Acked-by: Paul Menzel <[email protected]>


Kind regards,

Paul

2022-09-14 11:11:18

by Yu Kuai

Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()

Hi, Paul

On 2022/09/14 14:36, Paul Menzel wrote:
> Dear Yu,
>
>
> Thank you for the improved patch. Three minor nits.
>
> On 14.09.22 at 03:49, Yu Kuai wrote:
>> From: Yu Kuai <[email protected]>
>
> In the summary/title, I’d spell *Clean up* with a space. Maybe even use:
>
> md/raid10: Factor out code from wait_barrier() to stop_waiting_barrier()

OK, sounds good, I'll change that in the next version.
>
>> Currently the nasty condition is wait_barrier() is hard to read. This
>> patch factor out the condition into a function.
>
> The first *is* above can be removed, and factor*s* needs an s.

Yes, you're right.
>> There are no functional changes.
>>
>> Signed-off-by: Yu Kuai <[email protected]>
>> ---
>>   drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
>>   1 file changed, 32 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index 64d6e4cd8a3a..56458a53043d 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
>>       wake_up(&conf->wait_barrier);
>>   }
>> +static bool stop_waiting_barrier(struct r10conf *conf)
>> +{
>> +    /* barrier is dropped */
>> +    if (!conf->barrier)
>> +        return true;
>> +
>> +    /*
>> +     * If there are already pending requests (preventing the barrier
>> from
>> +     * rising completely), and the pre-process bio queue isn't empty,
>> then
>> +     * don't wait, as we need to empty that queue to get the nr_pending
>> +     * count down.
>> +     */
>> +    if (atomic_read(&conf->nr_pending)) {
>> +        struct bio_list *bio_list = current->bio_list;
>> +
>> +        if (bio_list && (!bio_list_empty(&bio_list[0]) ||
>> +                 !bio_list_empty(&bio_list[1])))
>> +            return true;
>> +    }
>> +
>> +    /* move on if recovery thread is blocked by us */
>> +    if (conf->mddev->thread->tsk == current &&
>> +        test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
>> +        conf->nr_queued > 0)
>> +        return true;
>> +
>> +    return false;
>> +}
>> +
>>   static bool wait_barrier(struct r10conf *conf, bool nowait)
>>   {
>>       bool ret = true;
>>       spin_lock_irq(&conf->resync_lock);
>>       if (conf->barrier) {
>> -        struct bio_list *bio_list = current->bio_list;
>> -        conf->nr_waiting++;
>> -        /* Wait for the barrier to drop.
>> -         * However if there are already pending
>> -         * requests (preventing the barrier from
>> -         * rising completely), and the
>> -         * pre-process bio queue isn't empty,
>> -         * then don't wait, as we need to empty
>> -         * that queue to get the nr_pending
>> -         * count down.
>> -         */
>>           /* Return false when nowait flag is set */
>>           if (nowait) {
>>               ret = false;
>>           } else {
>> +            conf->nr_waiting++;
>>               raid10_log(conf->mddev, "wait barrier");
>>               wait_event_lock_irq(conf->wait_barrier,
>> -                        !conf->barrier ||
>> -                        (atomic_read(&conf->nr_pending) &&
>> -                         bio_list &&
>> -                         (!bio_list_empty(&bio_list[0]) ||
>> -                          !bio_list_empty(&bio_list[1]))) ||
>> -                         /* move on if recovery thread is
>> -                          * blocked by us
>> -                          */
>> -                         (conf->mddev->thread->tsk == current &&
>> -                          test_bit(MD_RECOVERY_RUNNING,
>> -                               &conf->mddev->recovery) &&
>> -                          conf->nr_queued > 0),
>> +                        stop_waiting_barrier(conf),
>>                           conf->resync_lock);
>> +            conf->nr_waiting--;
>>           }
>> -        conf->nr_waiting--;
>>           if (!conf->nr_waiting)
>>               wake_up(&conf->wait_barrier);
>>       }
>
> Acked-by: Paul Menzel <[email protected]>

Thanks,
Kuai
>
>
> Kind regards,
>
> Paul
>

2022-09-14 16:20:10

by Logan Gunthorpe

Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()



On 2022-09-13 19:49, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>
>
> Currently the nasty condition is wait_barrier() is hard to read. This
> patch factor out the condition into a function.
>
> There are no functional changes.
>
> Signed-off-by: Yu Kuai <[email protected]>
> ---
> drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
> 1 file changed, 32 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 64d6e4cd8a3a..56458a53043d 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
> wake_up(&conf->wait_barrier);
> }
>
> +static bool stop_waiting_barrier(struct r10conf *conf)
> +{
> + /* barrier is dropped */
> + if (!conf->barrier)
> + return true;
> +
> + /*
> + * If there are already pending requests (preventing the barrier from
> + * rising completely), and the pre-process bio queue isn't empty, then
> + * don't wait, as we need to empty that queue to get the nr_pending
> + * count down.
> + */
> + if (atomic_read(&conf->nr_pending)) {
> + struct bio_list *bio_list = current->bio_list;

I'd probably just put the bio_list declaration at the top of this
function, then the nested if statements are not necessary. The compiler
should be able to optimize the access just fine.
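
i.e. something like this (just a sketch, untested):

static bool stop_waiting_barrier(struct r10conf *conf)
{
	struct bio_list *bio_list = current->bio_list;

	/* barrier is dropped */
	if (!conf->barrier)
		return true;

	/*
	 * If there are already pending requests (preventing the barrier
	 * from rising completely), and the pre-process bio queue isn't
	 * empty, then don't wait, as we need to empty that queue to get
	 * the nr_pending count down.
	 */
	if (atomic_read(&conf->nr_pending) && bio_list &&
	    (!bio_list_empty(&bio_list[0]) ||
	     !bio_list_empty(&bio_list[1])))
		return true;

	/* move on if recovery thread is blocked by us */
	if (conf->mddev->thread->tsk == current &&
	    test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
	    conf->nr_queued > 0)
		return true;

	return false;
}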

> if (conf->barrier) {
> - struct bio_list *bio_list = current->bio_list;
> - conf->nr_waiting++;
> - /* Wait for the barrier to drop.
> - * However if there are already pending
> - * requests (preventing the barrier from
> - * rising completely), and the
> - * pre-process bio queue isn't empty,
> - * then don't wait, as we need to empty
> - * that queue to get the nr_pending
> - * count down.
> - */
> /* Return false when nowait flag is set */
> if (nowait) {
> ret = false;
> } else {
> + conf->nr_waiting++;

Technically speaking, I think moving nr_waiting counts as a functional
change. As best as I can see it is correct, but it should probably be at
least mentioned in the commit message, or maybe done as a separate
commit with its own justification. That way if it causes problems down
the road, a bisect will make the issue clearer.
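
To spell out the difference (hand-written sketch, not the actual diff):

	/* before: even nowait callers touched nr_waiting */
	conf->nr_waiting++;
	if (nowait)
		ret = false;
	else
		wait_event_lock_irq(...);
	conf->nr_waiting--;

	/* after: only callers that actually wait touch it */
	if (nowait) {
		ret = false;
	} else {
		conf->nr_waiting++;
		wait_event_lock_irq(...);
		conf->nr_waiting--;
	}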

Thanks,

Logan

2022-09-15 08:00:39

by Yu Kuai

Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()

Hi,

On 2022/09/15 0:16, Logan Gunthorpe wrote:
>
>
> On 2022-09-13 19:49, Yu Kuai wrote:
>> From: Yu Kuai <[email protected]>
>>
>> Currently the nasty condition is wait_barrier() is hard to read. This
>> patch factor out the condition into a function.
>>
>> There are no functional changes.
>>
>> Signed-off-by: Yu Kuai <[email protected]>
>> ---
>> drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
>> 1 file changed, 32 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index 64d6e4cd8a3a..56458a53043d 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
>> wake_up(&conf->wait_barrier);
>> }
>>
>> +static bool stop_waiting_barrier(struct r10conf *conf)
>> +{
>> + /* barrier is dropped */
>> + if (!conf->barrier)
>> + return true;
>> +
>> + /*
>> + * If there are already pending requests (preventing the barrier from
>> + * rising completely), and the pre-process bio queue isn't empty, then
>> + * don't wait, as we need to empty that queue to get the nr_pending
>> + * count down.
>> + */
>> + if (atomic_read(&conf->nr_pending)) {
>> + struct bio_list *bio_list = current->bio_list;
>
> I'd probably just put the bio_list declaration at the top of this
> function, then the nested if statements are not necessary. The compiler
> should be able to optimize the access just fine.
>
>> if (conf->barrier) {
>> - struct bio_list *bio_list = current->bio_list;
>> - conf->nr_waiting++;
>> - /* Wait for the barrier to drop.
>> - * However if there are already pending
>> - * requests (preventing the barrier from
>> - * rising completely), and the
>> - * pre-process bio queue isn't empty,
>> - * then don't wait, as we need to empty
>> - * that queue to get the nr_pending
>> - * count down.
>> - */
>> /* Return false when nowait flag is set */
>> if (nowait) {
>> ret = false;
>> } else {
>> + conf->nr_waiting++;
>
> Technically speaking, I think moving nr_waiting counts as a functional
> change. As best as I can see it is correct, but it should probably be at
> least mentioned in the commit message, or maybe done as a separate
> commit with its own justification. That way if it causes problems down
> the road, a bisect will make the issue clearer.

Thanks for your advice. I just think increasing and decreasing nr_waiting
in the 'nowait' case is pointless, so I moved it incidentally.

I'll post a separate cleanup patch to do that.
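
For reference, with nowait set the old code effectively did (simplified):

	spin_lock_irq(&conf->resync_lock);
	if (conf->barrier) {
		conf->nr_waiting++;
		ret = false;	/* nowait: bail out immediately */
		conf->nr_waiting--;
		if (!conf->nr_waiting)
			wake_up(&conf->wait_barrier);
	}
	spin_unlock_irq(&conf->resync_lock);

Since resync_lock is held across the whole sequence, the transient
increment and decrement should not be observable anywhere else, so
moving them into the waiting branch shouldn't change behaviour.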

Paul, can I still add your Acked-by for this patch?

Thanks,
Kuai
>
> Thanks,
>
> Logan

2022-09-15 09:16:58

by Paul Menzel

Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()

Dear Yu,


On 15.09.22 at 09:21, Yu Kuai wrote:

> On 2022/09/15 0:16, Logan Gunthorpe wrote:

>> On 2022-09-13 19:49, Yu Kuai wrote:
>>> From: Yu Kuai <[email protected]>
>>>
>>> Currently the nasty condition is wait_barrier() is hard to read. This
>>> patch factor out the condition into a function.
>>>
>>> There are no functional changes.
>>>
>>> Signed-off-by: Yu Kuai <[email protected]>
>>> ---
>>>   drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
>>>   1 file changed, 32 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>>> index 64d6e4cd8a3a..56458a53043d 100644
>>> --- a/drivers/md/raid10.c
>>> +++ b/drivers/md/raid10.c
>>> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
>>>       wake_up(&conf->wait_barrier);
>>>   }
>>> +static bool stop_waiting_barrier(struct r10conf *conf)
>>> +{
>>> +    /* barrier is dropped */
>>> +    if (!conf->barrier)
>>> +        return true;
>>> +
>>> +    /*
>>> +     * If there are already pending requests (preventing the barrier
>>> from
>>> +     * rising completely), and the pre-process bio queue isn't
>>> empty, then
>>> +     * don't wait, as we need to empty that queue to get the nr_pending
>>> +     * count down.
>>> +     */
>>> +    if (atomic_read(&conf->nr_pending)) {
>>> +        struct bio_list *bio_list = current->bio_list;
>>
>> I'd probably just put the bio_list declaration at the top of this
>> function, then the nested if statements are not necessary. The compiler
>> should be able to optimize the access just fine.
>>
>>>       if (conf->barrier) {
>>> -        struct bio_list *bio_list = current->bio_list;
>>> -        conf->nr_waiting++;
>>> -        /* Wait for the barrier to drop.
>>> -         * However if there are already pending
>>> -         * requests (preventing the barrier from
>>> -         * rising completely), and the
>>> -         * pre-process bio queue isn't empty,
>>> -         * then don't wait, as we need to empty
>>> -         * that queue to get the nr_pending
>>> -         * count down.
>>> -         */
>>>           /* Return false when nowait flag is set */
>>>           if (nowait) {
>>>               ret = false;
>>>           } else {
>>> +            conf->nr_waiting++;
>>
>> Technically speaking, I think moving nr_waiting counts as a functional
>> change. As best as I can see it is correct, but it should probably be at
>> least mentioned in the commit message, or maybe done as a separate
>> commit with its own justification. That way if it causes problems down
>> the road, a bisect will make the issue clearer.
>
> Thanks for your advice. I just think increasing and decreasing nr_waiting
> in the 'nowait' case is pointless, so I moved it incidentally.
>
> I'll post a separate cleanup patch to do that.
>
> Paul, can I still add your Acked-by for this patch?

Yes, sure.


Kind regards,

Paul