2024-02-20 04:48:47

by Sergey Senozhatsky

Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On (24/02/19 13:33), Chengming Zhou wrote:
> static void migrate_write_unlock(struct zspage *zspage)
> {
> write_unlock(&zspage->lock);
> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> dst_zspage = isolate_dst_zspage(class);
> if (!dst_zspage)
> break;
> - migrate_write_lock(dst_zspage);
> }
>
> src_zspage = isolate_src_zspage(class);
> if (!src_zspage)
> break;
>
> - migrate_write_lock_nested(src_zspage);
> -
> + migrate_write_lock(src_zspage);
> migrate_zspage(pool, src_zspage, dst_zspage);
> - fg = putback_zspage(class, src_zspage);
> migrate_write_unlock(src_zspage);
>
> + fg = putback_zspage(class, src_zspage);

Hmm. Lockless putback doesn't look right to me. We modify critical
zspage fields in putback_zspage().
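
For reference, putback_zspage() is roughly the following (quoting from
memory, so a sketch rather than the exact tree state):

	static int putback_zspage(struct size_class *class,
				  struct zspage *zspage)
	{
		int fullness;

		/*
		 * Re-evaluate the fullness group, move the zspage onto
		 * the matching class list, and record the new mapping
		 * in the zspage itself.
		 */
		fullness = get_fullness_group(class, zspage);
		insert_zspage(class, zspage, fullness);
		set_zspage_mapping(zspage, class->index, fullness);

		return fullness;
	}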

> if (fg == ZS_INUSE_RATIO_0) {
> free_zspage(pool, class, src_zspage);
> pages_freed += class->pages_per_zspage;
> @@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> || spin_is_contended(&pool->lock)) {
> putback_zspage(class, dst_zspage);
> - migrate_write_unlock(dst_zspage);
> dst_zspage = NULL;
>
> spin_unlock(&pool->lock);
> @@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> }
> }
>
> - if (src_zspage) {
> + if (src_zspage)
> putback_zspage(class, src_zspage);
> - migrate_write_unlock(src_zspage);
> - }
>
> - if (dst_zspage) {
> + if (dst_zspage)
> putback_zspage(class, dst_zspage);
> - migrate_write_unlock(dst_zspage);
> - }


2024-02-20 04:52:41

by Chengming Zhou

Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> On (24/02/19 13:33), Chengming Zhou wrote:
>> static void migrate_write_unlock(struct zspage *zspage)
>> {
>> write_unlock(&zspage->lock);
>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>> dst_zspage = isolate_dst_zspage(class);
>> if (!dst_zspage)
>> break;
>> - migrate_write_lock(dst_zspage);
>> }
>>
>> src_zspage = isolate_src_zspage(class);
>> if (!src_zspage)
>> break;
>>
>> - migrate_write_lock_nested(src_zspage);
>> -
>> + migrate_write_lock(src_zspage);
>> migrate_zspage(pool, src_zspage, dst_zspage);
>> - fg = putback_zspage(class, src_zspage);
>> migrate_write_unlock(src_zspage);
>>
>> + fg = putback_zspage(class, src_zspage);
>
> Hmm. Lockless putback doesn't look right to me. We modify critical
> zspage fields in putback_zspage().

I think that's protected by pool->lock, right? We already hold it at this point.
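
To illustrate, the compaction loop after this patch looks roughly like
this (a simplified sketch, error paths and details omitted):

	spin_lock(&pool->lock);
	while (zs_can_compact(class)) {
		...
		migrate_write_lock(src_zspage);
		migrate_zspage(pool, src_zspage, dst_zspage);
		migrate_write_unlock(src_zspage);

		/* pool->lock is still held here */
		fg = putback_zspage(class, src_zspage);
		...
	}
	spin_unlock(&pool->lock);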

>
>> if (fg == ZS_INUSE_RATIO_0) {
>> free_zspage(pool, class, src_zspage);
>> pages_freed += class->pages_per_zspage;
>> @@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>> if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
>> || spin_is_contended(&pool->lock)) {
>> putback_zspage(class, dst_zspage);
>> - migrate_write_unlock(dst_zspage);
>> dst_zspage = NULL;
>>
>> spin_unlock(&pool->lock);
>> @@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>> }
>> }
>>
>> - if (src_zspage) {
>> + if (src_zspage)
>> putback_zspage(class, src_zspage);
>> - migrate_write_unlock(src_zspage);
>> - }
>>
>> - if (dst_zspage) {
>> + if (dst_zspage)
>> putback_zspage(class, dst_zspage);
>> - migrate_write_unlock(dst_zspage);
>> - }

2024-02-20 04:56:51

by Sergey Senozhatsky

Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On (24/02/20 12:51), Chengming Zhou wrote:
> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> > On (24/02/19 13:33), Chengming Zhou wrote:
> >> static void migrate_write_unlock(struct zspage *zspage)
> >> {
> >> write_unlock(&zspage->lock);
> >> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >> dst_zspage = isolate_dst_zspage(class);
> >> if (!dst_zspage)
> >> break;
> >> - migrate_write_lock(dst_zspage);
> >> }
> >>
> >> src_zspage = isolate_src_zspage(class);
> >> if (!src_zspage)
> >> break;
> >>
> >> - migrate_write_lock_nested(src_zspage);
> >> -
> >> + migrate_write_lock(src_zspage);
> >> migrate_zspage(pool, src_zspage, dst_zspage);
> >> - fg = putback_zspage(class, src_zspage);
> >> migrate_write_unlock(src_zspage);
> >>
> >> + fg = putback_zspage(class, src_zspage);
> >
> > Hmm. Lockless putback doesn't look right to me. We modify critical
> > zspage fields in putback_zspage().
>
> I think that's protected by pool->lock, right? We already hold it at this point.

Not really. We have, for example, the following patterns:

get_zspage_mapping()
spin_lock(&pool->lock)
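
e.g. async_free_zspage() does roughly the following (simplified):

	get_zspage_mapping(zspage, &class_idx, &fullness);
	class = pool->size_class[class_idx];

	spin_lock(&pool->lock);
	__free_zspage(pool, class, zspage);
	spin_unlock(&pool->lock);

so zspage state is read before pool->lock is taken.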

2024-02-20 05:00:08

by Chengming Zhou

Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On 2024/2/20 12:53, Sergey Senozhatsky wrote:
> On (24/02/20 12:51), Chengming Zhou wrote:
>> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
>>> On (24/02/19 13:33), Chengming Zhou wrote:
>>>> static void migrate_write_unlock(struct zspage *zspage)
>>>> {
>>>> write_unlock(&zspage->lock);
>>>> @@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>>>> dst_zspage = isolate_dst_zspage(class);
>>>> if (!dst_zspage)
>>>> break;
>>>> - migrate_write_lock(dst_zspage);
>>>> }
>>>>
>>>> src_zspage = isolate_src_zspage(class);
>>>> if (!src_zspage)
>>>> break;
>>>>
>>>> - migrate_write_lock_nested(src_zspage);
>>>> -
>>>> + migrate_write_lock(src_zspage);
>>>> migrate_zspage(pool, src_zspage, dst_zspage);
>>>> - fg = putback_zspage(class, src_zspage);
>>>> migrate_write_unlock(src_zspage);
>>>>
>>>> + fg = putback_zspage(class, src_zspage);
>>>
>>> Hmm. Lockless putback doesn't look right to me. We modify critical
>>> zspage fields in putback_zspage().
>>
>> I think that's protected by pool->lock, right? We already hold it at this point.
>
> Not really. We have, for example, the following patterns:
>
> get_zspage_mapping()
> spin_lock(&pool->lock)

Right, that pattern is not actually safe, since we can't read a stable
fullness value for a zspage outside pool->lock.

But the pattern is only used in the free_zspage path, so it should be ok
in practice: we never use the fullness value returned by
get_zspage_mapping() there, only the class index, which we use to look up
the class.

Anyway, the pattern is confusing; I think we should clean it up?
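
One possible shape for such a cleanup (just a hypothetical sketch, using
the existing zspage_class() helper instead of get_zspage_mapping()):

	/* look up the class under pool->lock instead of before it */
	spin_lock(&pool->lock);
	class = zspage_class(pool, zspage);
	__free_zspage(pool, class, zspage);
	spin_unlock(&pool->lock);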

Thanks.

2024-02-20 05:02:47

by Sergey Senozhatsky

Subject: Re: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()

On (24/02/20 12:59), Chengming Zhou wrote:
> On 2024/2/20 12:53, Sergey Senozhatsky wrote:
> > On (24/02/20 12:51), Chengming Zhou wrote:
> >> On 2024/2/20 12:48, Sergey Senozhatsky wrote:
> >>> On (24/02/19 13:33), Chengming Zhou wrote:
> >>> [..]
> >
> > Not really. We have, for example, the following patterns:
> >
> > get_zspage_mapping()
> > spin_lock(&pool->lock)
>
> Right, that pattern is not actually safe, since we can't read a stable
> fullness value for a zspage outside pool->lock.
>
> But the pattern is only used in the free_zspage path, so it should be ok
> in practice: we never use the fullness value returned by
> get_zspage_mapping() there, only the class index, which we use to look up
> the class.
>
> Anyway, the pattern is confusing; I think we should clean it up?

Right, looks so.