2023-07-27 07:02:56

by Andrew Yang

Subject: [PATCH v2] zsmalloc: Fix races between modifications of fullness and isolated

Since fullness and isolated share the same unsigned int,
modifications to either must be protected by the same lock.
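
For reference, fullness and isolated are adjacent bit-fields packed
into the same word of struct zspage, so a racing read-modify-write of
one can corrupt the other. The layout in mm/zsmalloc.c looks roughly
like this (exact field widths vary between kernel versions):

	struct zspage {
		struct {
			unsigned int huge:HUGE_BITS;
			unsigned int fullness:FULLNESS_BITS;
			unsigned int class:CLASS_BITS + 1;
			unsigned int isolated:ISOLATED_BITS;
			unsigned int magic:MAGIC_VAL_BITS;
		};
		...
	};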

Signed-off-by: Andrew Yang <[email protected]>
Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")
---
v2: Move the comment as well
---
mm/zsmalloc.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 32f5bc4074df..b58f957429f0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1777,6 +1777,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,

static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
+ struct zs_pool *pool;
struct zspage *zspage;

/*
@@ -1786,9 +1787,10 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
VM_BUG_ON_PAGE(PageIsolated(page), page);

zspage = get_zspage(page);
- migrate_write_lock(zspage);
+ pool = zspage->pool;
+ spin_lock(&pool->lock);
inc_zspage_isolation(zspage);
- migrate_write_unlock(zspage);
+ spin_unlock(&pool->lock);

return true;
}
@@ -1854,12 +1856,12 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
kunmap_atomic(s_addr);

replace_sub_page(class, zspage, newpage, page);
+ dec_zspage_isolation(zspage);
/*
* Since we complete the data copy and set up new zspage structure,
* it's okay to release the pool's lock.
*/
spin_unlock(&pool->lock);
- dec_zspage_isolation(zspage);
migrate_write_unlock(zspage);

get_page(newpage);
@@ -1876,14 +1878,16 @@ static int zs_page_migrate(struct page *newpage, struct page *page,

static void zs_page_putback(struct page *page)
{
+ struct zs_pool *pool;
struct zspage *zspage;

VM_BUG_ON_PAGE(!PageIsolated(page), page);

zspage = get_zspage(page);
- migrate_write_lock(zspage);
+ pool = zspage->pool;
+ spin_lock(&pool->lock);
dec_zspage_isolation(zspage);
- migrate_write_unlock(zspage);
+ spin_unlock(&pool->lock);
}

static const struct movable_operations zsmalloc_mops = {
--
2.18.0



2023-07-28 05:11:09

by Sergey Senozhatsky

Subject: Re: [PATCH v2] zsmalloc: Fix races between modifications of fullness and isolated

On (23/07/27 14:29), Andrew Yang wrote:
>
> Since fullness and isolated share the same unsigned int,
> modifications to either must be protected by the same lock.
>
> Signed-off-by: Andrew Yang <[email protected]>
> Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")

Reviewed-by: Sergey Senozhatsky <[email protected]>

2023-08-10 17:21:05

by Nhat Pham

Subject: Re: [PATCH v2] zsmalloc: Fix races between modifications of fullness and isolated

On Wed, Jul 26, 2023 at 11:30 PM Andrew Yang <[email protected]> wrote:
>
> Since fullness and isolated share the same unsigned int,
> modifications to either must be protected by the same lock.
>
> Signed-off-by: Andrew Yang <[email protected]>
> Fixes: c4549b871102 ("zsmalloc: remove zspage isolation for migration")
>
> [...]

I think this fixes an issue of mine that has been bugging me for
the past couple of weeks :) Thanks a lot, Andrew!

This should be applied to the other stable kernels too, right?
(with the caveat that kernels before 6.2 need to take the class's
lock instead of the pool's lock, since the per-class locks were only
consolidated into pool->lock in 6.2).
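
For 6.1, for instance, the zs_page_putback() hunk would look roughly
like the sketch below (untested; it assumes the 6.0/6.1 code, where
struct zspage already carries a pool pointer, each size_class has its
own spinlock, and the zspage_class() helper is available):

	static void zs_page_putback(struct page *page)
	{
		struct zs_pool *pool;
		struct size_class *class;
		struct zspage *zspage;

		VM_BUG_ON_PAGE(!PageIsolated(page), page);

		zspage = get_zspage(page);
		pool = zspage->pool;
		class = zspage_class(pool, zspage);
		/*
		 * fullness and isolated share one word, so isolated must
		 * be updated under the lock that serializes fullness
		 * updates: class->lock here, pool->lock from 6.2 onward.
		 */
		spin_lock(&class->lock);
		dec_zspage_isolation(zspage);
		spin_unlock(&class->lock);
	}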

Reviewed-by: Nhat Pham <[email protected]>