2021-06-24 12:43:18

by Miaohe Lin

Subject: [PATCH 0/3] Cleanup for zsmalloc

Hi all,
This series contains cleanups to remove confusing code in obj_free(),
combine two atomic ops in zs_pool_dec_isolated(), and improve readability
of async_free_zspage().
More details can be found in the respective changelogs. Thanks!

Miaohe Lin (3):
mm/zsmalloc.c: remove confusing code in obj_free()
mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()
mm/zsmalloc.c: improve readability for async_free_zspage()

mm/zsmalloc.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

--
2.23.0


2021-06-24 12:44:18

by Miaohe Lin

Subject: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

atomic_long_dec_and_test() is equivalent to atomic_long_dec() followed by
checking whether atomic_long_read() == 0. Use it to make the code more succinct.

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/zsmalloc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1476289b619f..0b4b23740d78 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1828,13 +1828,12 @@ static void putback_zspage_deferred(struct zs_pool *pool,
static inline void zs_pool_dec_isolated(struct zs_pool *pool)
{
VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
- atomic_long_dec(&pool->isolated_pages);
/*
* There's no possibility of racing, since wait_for_isolated_drain()
* checks the isolated count under &class->lock after enqueuing
* on migration_wait.
*/
- if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
+ if (atomic_long_dec_and_test(&pool->isolated_pages) && pool->destroying)
wake_up_all(&pool->migration_wait);
}

--
2.23.0

2021-06-24 12:44:57

by Miaohe Lin

Subject: [PATCH 3/3] mm/zsmalloc.c: improve readability for async_free_zspage()

The class is extracted from pool->size_class[class_idx] again before
calling __free_zspage(). This makes it look as if class could change after
we take the class lock, which is misleading: class stays unchanged.

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/zsmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 0b4b23740d78..8598ee07284b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2161,7 +2161,7 @@ static void async_free_zspage(struct work_struct *work)
VM_BUG_ON(fullness != ZS_EMPTY);
class = pool->size_class[class_idx];
spin_lock(&class->lock);
- __free_zspage(pool, pool->size_class[class_idx], zspage);
+ __free_zspage(pool, class, zspage);
spin_unlock(&class->lock);
}
};
--
2.23.0

2021-06-25 05:06:23

by Muchun Song

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <[email protected]> wrote:
>
> atomic_long_dec_and_test() is equivalent to atomic_long_dec() followed by
> checking whether atomic_long_read() == 0. Use it to make the code more succinct.

Actually, they are not equivalent. atomic_long_dec_and_test() implies a
full memory barrier around it, but atomic_long_dec() and atomic_long_read()
don't.

An RMW operation that has a return value is equivalent to the following:

smp_mb__before_atomic()
non-RMW operations or RMW operations that have no return value
smp_mb__after_atomic()
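
In other words, roughly like this (an untested sketch of the ordering model
above, kernel context assumed; the function names are made up for
illustration and this is not the real per-architecture implementation):

	/* A single value-returning RMW: fully ordered. */
	static bool dec_and_test_fully_ordered(atomic_long_t *v)
	{
		return atomic_long_dec_and_test(v);
	}

	/*
	 * Ordering-equivalent per the model above, but note the zero
	 * test is no longer atomic with the decrement: another task
	 * may modify the counter between the two calls.
	 */
	static bool dec_then_read_with_barriers(atomic_long_t *v)
	{
		smp_mb__before_atomic();
		atomic_long_dec(v);
		smp_mb__after_atomic();
		return atomic_long_read(v) == 0;
	}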

Thanks.


2021-06-25 06:33:25

by Miaohe Lin

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On 2021/6/25 13:01, Muchun Song wrote:
> On Thu, Jun 24, 2021 at 8:40 PM Miaohe Lin <[email protected]> wrote:
>>
>> atomic_long_dec_and_test() is equivalent to atomic_long_dec() followed by
>> checking whether atomic_long_read() == 0. Use it to make the code more succinct.
>
> Actually, they are not equivalent. atomic_long_dec_and_test() implies a
> full memory barrier around it, but atomic_long_dec() and atomic_long_read()
> don't.
>

Many thanks for the comment. They are indeed not completely equivalent, as you said.
What I mean is that they do the same thing we need in this specific context.
Thanks again.


2021-06-25 07:31:37

by Muchun Song

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On Fri, Jun 25, 2021 at 2:32 PM Miaohe Lin <[email protected]> wrote:
> [...]
>
> Many thanks for the comment. They are indeed not completely equivalent, as you said.
> What I mean is that they do the same thing we need in this specific context.
> Thanks again.

I don't think so. Using the individual operations avoids the memory barrier;
we would pay for the barrier if we used atomic_long_dec_and_test() here.


2021-06-25 08:47:19

by Miaohe Lin

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On 2021/6/25 15:29, Muchun Song wrote:
> [...]
>
> I don't think so. Using the individual operations avoids the memory barrier;
> we would pay for the barrier if we used atomic_long_dec_and_test() here.

The combination of atomic_long_dec() followed by atomic_long_read() is a rare
pattern and looks somewhat odd. I think it's worth doing this even at the cost
of the barrier.


2021-06-25 09:34:24

by Miaohe Lin

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On 2021/6/25 16:46, Miaohe Lin wrote:
> On 2021/6/25 15:29, Muchun Song wrote:
>> [...]
>>
>> I don't think so. Using the individual operations avoids the memory barrier;
>> we would pay for the barrier if we used atomic_long_dec_and_test() here.
>
> The combination of atomic_long_dec() followed by atomic_long_read() is a rare
> pattern and looks somewhat odd. I think it's worth doing this even at the cost
> of the barrier.
>

It seems there is a race between zs_pool_dec_isolated() and zs_unregister_migration()
if the read of pool->destroying is reordered before the atomic_long_dec() and
atomic_long_read() ops, so the memory barrier is necessary:

zs_pool_dec_isolated                          zs_unregister_migration
--------------------                          -----------------------
pool->destroying != true
                                              pool->destroying = true;
                                              smp_mb();
                                              wait_for_isolated_drain():
                                                wait_event() sees
                                                atomic_long_read(&pool->isolated_pages) != 0
atomic_long_dec(&pool->isolated_pages);
atomic_long_read(&pool->isolated_pages) == 0

Thus wake_up_all is missed.
And the comment in zs_pool_dec_isolated() said:
/*
* There's no possibility of racing, since wait_for_isolated_drain()
* checks the isolated count under &class->lock after enqueuing
* on migration_wait.
*/

But I found that &class->lock is in fact not acquired in wait_for_isolated_drain().
So I think the above race is possible. Does this make sense to you?
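
For reference, the waiter side looks roughly like this (quoting mm/zsmalloc.c
from memory, so please double-check against the actual code):

	static void wait_for_isolated_drain(struct zs_pool *pool)
	{
		/*
		 * We're in the process of destroying the pool, so wait for the
		 * active isolated pages to drain.
		 */
		wait_event(pool->migration_wait,
			   atomic_long_read(&pool->isolated_pages) == 0);
	}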
Thanks.


2021-06-25 10:42:28

by Muchun Song

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On Fri, Jun 25, 2021 at 5:32 PM Miaohe Lin <[email protected]> wrote:
> [...]
>
> It seems there is a race between zs_pool_dec_isolated() and zs_unregister_migration()
> if the read of pool->destroying is reordered before the atomic_long_dec() and
> atomic_long_read() ops, so the memory barrier is necessary:
>
> zs_pool_dec_isolated                          zs_unregister_migration
> --------------------                          -----------------------
> pool->destroying != true
>                                               pool->destroying = true;
>                                               smp_mb();
>                                               wait_for_isolated_drain():
>                                                 wait_event() sees
>                                                 atomic_long_read(&pool->isolated_pages) != 0
> atomic_long_dec(&pool->isolated_pages);
> atomic_long_read(&pool->isolated_pages) == 0

I am not familiar with zsmalloc, so I do not know whether the race
you mention above exists. But if it does, the fix still does not make
sense to me. If an smp_mb is needed between atomic_long_dec and
atomic_long_read, you should insert smp_mb__after_atomic() instead of
using atomic_long_dec_and_test(), because smp_mb__after_atomic() can
be optimized on certain architectures (e.g. x86_64).
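
I.e. something along these lines (an untested sketch):

	atomic_long_dec(&pool->isolated_pages);
	/*
	 * Order the decrement before the reads below; pairs with the
	 * smp_mb() in zs_unregister_migration().
	 */
	smp_mb__after_atomic();
	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
		wake_up_all(&pool->migration_wait);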

Thanks.


2021-07-01 02:45:00

by Miaohe Lin

Subject: Re: [PATCH 2/3] mm/zsmalloc.c: combine two atomic ops in zs_pool_dec_isolated()

On 2021/6/25 18:40, Muchun Song wrote:
> [...]
>
> I am not familiar with zsmalloc, so I do not know whether the race
> you mention above exists. But if it does, the fix still does not make
> sense to me. If an smp_mb is needed between atomic_long_dec and
> atomic_long_read, you should insert smp_mb__after_atomic() instead of
> using atomic_long_dec_and_test(), because smp_mb__after_atomic() can
> be optimized on certain architectures (e.g. x86_64).
>

Sorry for the delay.

I think there are two options:

	atomic_long_dec(&pool->isolated_pages);
	smp_mb__after_atomic();
	atomic_long_read(&pool->isolated_pages) == 0

i.e. two atomic ops with one smp_mb().

vs.

	atomic_long_dec_and_test(), which implies smp_mb__before_atomic()
	+ the atomic RMW op + smp_mb__after_atomic()

i.e. one atomic op with two smp_mb()s.

I think either one works, but I prefer the latter. What do you think?

Thanks.
