2023-09-26 03:05:09

by Yu Kuai

Subject: [PATCH v2 1/2] md: factor out a new helper to put mddev

From: Yu Kuai <[email protected]>

There are no functional changes, the new helper will still hold
'all_mddevs_lock' after putting mddev, and it will be used to simplify
md_seq_ops.

Signed-off-by: Yu Kuai <[email protected]>
---
drivers/md/md.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 10cb4dfbf4ae..a5ef6f7da8ec 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
 
 static void mddev_delayed_delete(struct work_struct *ws);
 
-void mddev_put(struct mddev *mddev)
+static void __mddev_put(struct mddev *mddev, bool locked)
 {
-	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+	if (locked) {
+		spin_lock(&all_mddevs_lock);
+		if (!atomic_dec_and_test(&mddev->active))
+			return;
+	} else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
 		return;
+
 	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
 	    mddev->ctime == 0 && !mddev->hold_active) {
 		/* Array is not configured at all, and not held active,
@@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
 		 */
 		queue_work(md_misc_wq, &mddev->del_work);
 	}
-	spin_unlock(&all_mddevs_lock);
+
+	if (!locked)
+		spin_unlock(&all_mddevs_lock);
+}
+
+void mddev_put(struct mddev *mddev)
+{
+	__mddev_put(mddev, false);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
--
2.39.2


2023-09-26 12:57:56

by Yu Kuai

Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

Hi,

On 2023/09/26 20:45, Mariusz Tkaczyk wrote:
> On Tue, 26 Sep 2023 10:58:26 +0800
> Yu Kuai <[email protected]> wrote:
>
>> From: Yu Kuai <[email protected]>
>>
>> There are no functional changes, the new helper will still hold
>> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
>> md_seq_ops.
>>
>> Signed-off-by: Yu Kuai <[email protected]>
>> ---
>> drivers/md/md.c | 18 +++++++++++++++---
>> 1 file changed, 15 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
>> *mddev)
>> static void mddev_delayed_delete(struct work_struct *ws);
>>
>> -void mddev_put(struct mddev *mddev)
>> +static void __mddev_put(struct mddev *mddev, bool locked)
>> {
>> - if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> + if (locked) {
>> + spin_lock(&all_mddevs_lock);
>> + if (!atomic_dec_and_test(&mddev->active))
>> + return;
>
> It is "locked" and we are taking the lock? It seems weird to me. Perhaps "do_lock"
> would be better? Did you mean "lockdep_assert_held(&all_mddevs_lock);"?

Yes, do_lock is a better name; true means this function will return with
the lock held.
>
> Something is wrong here, we have two paths and in both cases we are
> taking the lock.

No, in the first path the lock is held unconditionally, which is what
md_seq_show() expects; in the second path the lock is only held if
'active' drops to 0.

Thanks,
Kuai

>
>> + } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> return;
>> +
>> if (!mddev->raid_disks && list_empty(&mddev->disks) &&
>> mddev->ctime == 0 && !mddev->hold_active) {
>> /* Array is not configured at all, and not held active,
>> @@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
>> */
>> queue_work(md_misc_wq, &mddev->del_work);
>> }
>> - spin_unlock(&all_mddevs_lock);
>> +
>> + if (!locked)
>> + spin_unlock(&all_mddevs_lock);
> As above, I'm not sure if it is correct.
>
> Thanks,
> Mariusz
>
> .
>

2023-09-26 19:55:52

by Mariusz Tkaczyk

Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

On Tue, 26 Sep 2023 10:58:26 +0800
Yu Kuai <[email protected]> wrote:

> From: Yu Kuai <[email protected]>
>
> There are no functional changes, the new helper will still hold
> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> md_seq_ops.
>
> Signed-off-by: Yu Kuai <[email protected]>
> ---
> drivers/md/md.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
> *mddev)
> static void mddev_delayed_delete(struct work_struct *ws);
>
> -void mddev_put(struct mddev *mddev)
> +static void __mddev_put(struct mddev *mddev, bool locked)
> {
> - if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> + if (locked) {
> + spin_lock(&all_mddevs_lock);
> + if (!atomic_dec_and_test(&mddev->active))
> + return;

It is "locked" and we are taking the lock? It seems weird to me. Perhaps "do_lock"
would be better? Did you mean "lockdep_assert_held(&all_mddevs_lock);"?

Something is wrong here, we have two paths and in both cases we are
taking the lock.

> + } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> return;
> +
> if (!mddev->raid_disks && list_empty(&mddev->disks) &&
> mddev->ctime == 0 && !mddev->hold_active) {
> /* Array is not configured at all, and not held active,
> @@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
> */
> queue_work(md_misc_wq, &mddev->del_work);
> }
> - spin_unlock(&all_mddevs_lock);
> +
> + if (!locked)
> + spin_unlock(&all_mddevs_lock);
As above, I'm not sure if it is correct.

Thanks,
Mariusz

2023-09-27 01:49:14

by Yu Kuai

Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

Hi,

On 2023/09/27 8:15, Song Liu wrote:
> On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <[email protected]> wrote:
>>
>> From: Yu Kuai <[email protected]>
>>
>> There are no functional changes, the new helper will still hold
>> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
>> md_seq_ops.
>>
>> Signed-off-by: Yu Kuai <[email protected]>
>> ---
>> drivers/md/md.c | 18 +++++++++++++++---
>> 1 file changed, 15 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
>>
>> static void mddev_delayed_delete(struct work_struct *ws);
>>
>> -void mddev_put(struct mddev *mddev)
>> +static void __mddev_put(struct mddev *mddev, bool locked)
>> {
>> - if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> + if (locked) {
>> + spin_lock(&all_mddevs_lock);
>> + if (!atomic_dec_and_test(&mddev->active))
>> + return;
>> + } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>> return;
>> +
>
> This condition is indeed very confusing. No matter whether we call the
> flag "locked" or "do_lock", it is not really accurate.
>
> How about we factor out a helper with the following logic:
>
> if (!mddev->raid_disks && list_empty(&mddev->disks) &&
> mddev->ctime == 0 && !mddev->hold_active) {
> /* Array is not configured at all, and not held active,
> * so destroy it */
> set_bit(MD_DELETED, &mddev->flags);
>
> /*
> * Call queue_work inside the spinlock so that
> * flush_workqueue() after mddev_find will succeed in waiting
> * for the work to be done.
> */
> queue_work(md_misc_wq, &mddev->del_work);
> }
>
> and then use it at the two callers?
>
> Does this make sense?

Yes, that sounds great. I'll do this in v3.

Thanks,
Kuai

>
> Thanks,
> Song
> .
>

2023-09-27 02:03:09

by Mariusz Tkaczyk

Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

On Tue, 26 Sep 2023 20:54:01 +0800
Yu Kuai <[email protected]> wrote:

> Hi,
>
> On 2023/09/26 20:45, Mariusz Tkaczyk wrote:
> > On Tue, 26 Sep 2023 10:58:26 +0800
> > Yu Kuai <[email protected]> wrote:
> >
> >> From: Yu Kuai <[email protected]>
> >>
> >> There are no functional changes, the new helper will still hold
> >> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> >> md_seq_ops.
> >>
> >> Signed-off-by: Yu Kuai <[email protected]>
> >> ---
> >> drivers/md/md.c | 18 +++++++++++++++---
> >> 1 file changed, 15 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/md/md.c b/drivers/md/md.c
> >> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> >> --- a/drivers/md/md.c
> >> +++ b/drivers/md/md.c
> >> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev
> >> *mddev)
> >> static void mddev_delayed_delete(struct work_struct *ws);
> >>
> >> -void mddev_put(struct mddev *mddev)
> >> +static void __mddev_put(struct mddev *mddev, bool locked)
> >> {
> >> - if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> >> + if (locked) {
> >> + spin_lock(&all_mddevs_lock);
> >> + if (!atomic_dec_and_test(&mddev->active))
> >> + return;
> >
> > It is "locked" and we are taking the lock? It seems weird to me. Perhaps
> > "do_lock" would be better? Did you mean
> > "lockdep_assert_held(&all_mddevs_lock);"?
>
> Yes, do_lock is a better name; true means this function will return with
> the lock held.
> >
> > Something is wrong here, we have two paths and in both cases we are
> > taking the lock.
>
> No, in the first path the lock is held unconditionally, which is what
> md_seq_show() expects; in the second path the lock is only held if
> 'active' drops to 0.
>

OK, I see, you described it in the commit message.
IMO it is bad practice to return with a locked resource without highlighting
it in the function name. In this case, I would prefer to respect that the
lock is already held, rather than take it here:

(assuming the bool means "lock is already held")
spin_lock(&all_mddevs_lock);
__mddev_put(mddev, true); /* function knows the lock is held */
spin_unlock(&all_mddevs_lock);

your "do_lock" approach:
__mddev_put(mddev, true); /* lock is taken here and held on return */
spin_unlock(&all_mddevs_lock);

You could change the name to something like "all_mddev_lock_and_put(mddev)"
to indicate that we are locking all_mddevs. That works for me too.
Note: it is just my preference, feel free to ignore :)

Mariusz

2023-09-27 03:03:47

by Song Liu

Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <[email protected]> wrote:
>
> From: Yu Kuai <[email protected]>
>
> There are no functional changes, the new helper will still hold
> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> md_seq_ops.
>
> Signed-off-by: Yu Kuai <[email protected]>
> ---
> drivers/md/md.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
>
> static void mddev_delayed_delete(struct work_struct *ws);
>
> -void mddev_put(struct mddev *mddev)
> +static void __mddev_put(struct mddev *mddev, bool locked)
> {
> - if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> + if (locked) {
> + spin_lock(&all_mddevs_lock);
> + if (!atomic_dec_and_test(&mddev->active))
> + return;
> + } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> return;
> +

This condition is indeed very confusing. No matter whether we call the
flag "locked" or "do_lock", it is not really accurate.

How about we factor out a helper with the following logic:

if (!mddev->raid_disks && list_empty(&mddev->disks) &&
    mddev->ctime == 0 && !mddev->hold_active) {
	/* Array is not configured at all, and not held active,
	 * so destroy it */
	set_bit(MD_DELETED, &mddev->flags);

	/*
	 * Call queue_work inside the spinlock so that
	 * flush_workqueue() after mddev_find will succeed in waiting
	 * for the work to be done.
	 */
	queue_work(md_misc_wq, &mddev->del_work);
}

and then use it at the two callers?

Does this make sense?

Thanks,
Song
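
[Editor's note] One possible shape of the refactor Song suggests, shown as a non-runnable sketch only: the destroy logic moves into a helper whose contract is "caller holds all_mddevs_lock", so no boolean flag is needed. The helper name `mddev_free_if_unused` is hypothetical; the actual v3 may differ.

```
/* Sketch: caller must hold all_mddevs_lock */
static void mddev_free_if_unused(struct mddev *mddev)
{
	if (mddev->raid_disks || !list_empty(&mddev->disks) ||
	    mddev->ctime || mddev->hold_active)
		return;

	/* Array is not configured at all, and not held active, so destroy it */
	set_bit(MD_DELETED, &mddev->flags);

	/*
	 * Call queue_work inside the spinlock so that flush_workqueue()
	 * after mddev_find will succeed in waiting for the work to be done.
	 */
	queue_work(md_misc_wq, &mddev->del_work);
}

void mddev_put(struct mddev *mddev)
{
	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;

	mddev_free_if_unused(mddev);
	spin_unlock(&all_mddevs_lock);
}
```

With this split, a caller like md_seq_show() that already holds all_mddevs_lock can drop the reference and call the helper directly, and no function ever returns with a lock state that depends on an argument.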