2022-05-09 10:00:39

by Sultan Alsawaf

Subject: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

From: Sultan Alsawaf <[email protected]>

The asynchronous zspage free worker tries to lock a zspage's entire page
list without defending against page migration. Since pages which haven't
yet been locked can concurrently migrate off the zspage page list while
lock_zspage() churns away, lock_zspage() can suffer from a few different
lethal races. It can lock a page which no longer belongs to the zspage and
unsafely dereference page_private(), it can unsafely dereference a torn
pointer to the next page (since there's a data race), and it can observe a
spurious NULL pointer to the next page and thus not lock all of the
zspage's pages (since a single page migration will reconstruct the entire
page list, and create_page_chain() unconditionally zeroes out each list
pointer in the process).

Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
with page migration.
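For illustration only (this interleaving is a hypothetical reconstruction, not
taken from a trace), the old lock_zspage() loop can race with migration roughly
like this:

    CPU0: async free worker                  CPU1: page migration
    -----------------------                  --------------------
    page = get_first_page(zspage);
    lock_page(page);
                                             zs_page_migrate() on a later page of
                                             the zspage; replace_sub_page() calls
                                             create_page_chain(), which rebuilds
                                             the whole page list and rewrites each
                                             page's private data
    page = get_next_page(page);              <- may read a torn or NULL next
    lock_page(page);                            pointer, or lock a page that no
                                                longer belongs to the zspage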

Cc: [email protected]
Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
Signed-off-by: Sultan Alsawaf <[email protected]>
---
mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..5d5fc04385b8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
*/
static void lock_zspage(struct zspage *zspage)
{
- struct page *page = get_first_page(zspage);
+ struct page *curr_page, *page;

- do {
- lock_page(page);
- } while ((page = get_next_page(page)) != NULL);
+ /*
+ * Pages we haven't locked yet can be migrated off the list while we're
+ * trying to lock them, so we need to be careful and only attempt to
+ * lock each page under migrate_read_lock(). Otherwise, the page we lock
+ * may no longer belong to the zspage. This means that we may wait for
+ * the wrong page to unlock, so we must take a reference to the page
+ * prior to waiting for it to unlock outside migrate_read_lock().
+ */
+ while (1) {
+ migrate_read_lock(zspage);
+ page = get_first_page(zspage);
+ if (trylock_page(page))
+ break;
+ get_page(page);
+ migrate_read_unlock(zspage);
+ wait_on_page_locked(page);
+ put_page(page);
+ }
+
+ curr_page = page;
+ while ((page = get_next_page(curr_page))) {
+ if (trylock_page(page)) {
+ curr_page = page;
+ } else {
+ get_page(page);
+ migrate_read_unlock(zspage);
+ wait_on_page_locked(page);
+ put_page(page);
+ migrate_read_lock(zspage);
+ }
+ }
+ migrate_read_unlock(zspage);
}

static int zs_init_fs_context(struct fs_context *fc)
--
2.36.0



2022-05-10 08:05:03

by Sultan Alsawaf

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Mon, May 09, 2022 at 05:06:32PM -0700, Andrew Morton wrote:
> Why not simply lock_page() here? The get_page() alone won't protect
> from all the dire consequences which you have identified?

Hi,

My reasoning is that if the page migrated, then we've got the last reference
to it anyway and there's no point in locking. But more importantly, we'd still
need to take migrate_read_lock() again in order to verify whether or not the
page migrated because of data races stemming from replace_sub_page(), so I don't
think there's much to gain by using lock_page(). When any of the pages in the
zspage migrates, the entire page list is reconstructed and every page's private
storage is rewritten. I had drafted another change that fixes the data races by
trimming out all of that redundant work done in replace_sub_page(), but I wanted
to keep this patch small to make it easier to review and easier to backport.
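Roughly, a lock_page()-based variant would still end up looking something like
this (just a sketch to show the re-check that would be needed; the exact check
is an assumption on my part, not tested code):

    get_page(page);
    migrate_read_unlock(zspage);
    lock_page(page);                        /* may sleep */
    migrate_read_lock(zspage);
    if (page_private(page) != (unsigned long)zspage) {
            /* The page migrated while we slept and no longer belongs
             * to this zspage, so drop it and restart the walk. */
            unlock_page(page);
            put_page(page);
            /* ... retry from get_first_page(zspage) ... */
    }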

Sultan

2022-05-10 12:34:41

by Andrew Morton

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Sun, 8 May 2022 19:47:02 -0700 Sultan Alsawaf <[email protected]> wrote:

> From: Sultan Alsawaf <[email protected]>
>
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
>
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
>
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
> */
> static void lock_zspage(struct zspage *zspage)
> {
> - struct page *page = get_first_page(zspage);
> + struct page *curr_page, *page;
>
> - do {
> - lock_page(page);
> - } while ((page = get_next_page(page)) != NULL);
> + /*
> + * Pages we haven't locked yet can be migrated off the list while we're
> + * trying to lock them, so we need to be careful and only attempt to
> + * lock each page under migrate_read_lock(). Otherwise, the page we lock
> + * may no longer belong to the zspage. This means that we may wait for
> + * the wrong page to unlock, so we must take a reference to the page
> + * prior to waiting for it to unlock outside migrate_read_lock().
> + */
> + while (1) {
> + migrate_read_lock(zspage);
> + page = get_first_page(zspage);
> + if (trylock_page(page))
> + break;
> + get_page(page);
> + migrate_read_unlock(zspage);
> + wait_on_page_locked(page);

Why not simply lock_page() here? The get_page() alone won't protect
from all the dire consequences which you have identified?

> + put_page(page);
> + }
> +
> + curr_page = page;
> + while ((page = get_next_page(curr_page))) {
> + if (trylock_page(page)) {
> + curr_page = page;
> + } else {
> + get_page(page);
> + migrate_read_unlock(zspage);
> + wait_on_page_locked(page);

ditto.

> + put_page(page);
> + migrate_read_lock(zspage);
> + }
> + }
> + migrate_read_unlock(zspage);
> }
>
> static int zs_init_fs_context(struct fs_context *fc)


2022-05-12 05:25:11

by Minchan Kim

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf <[email protected]>
>
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
>
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
>
> Cc: [email protected]
> Fixes: 48b4800a1c6a ("zsmalloc: page migration support")

Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
unnecessary loops but not return -EBUSY if zspage is not inuse")?
Because we didn't migrate ZS_EMPTY pages before.

> Signed-off-by: Sultan Alsawaf <[email protected]>
> ---
> mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
> 1 file changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..5d5fc04385b8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
> */
> static void lock_zspage(struct zspage *zspage)
> {
> - struct page *page = get_first_page(zspage);
> + struct page *curr_page, *page;
>
> - do {
> - lock_page(page);
> - } while ((page = get_next_page(page)) != NULL);
> + /*
> + * Pages we haven't locked yet can be migrated off the list while we're
> + * trying to lock them, so we need to be careful and only attempt to
> + * lock each page under migrate_read_lock(). Otherwise, the page we lock
> + * may no longer belong to the zspage. This means that we may wait for
> + * the wrong page to unlock, so we must take a reference to the page
> + * prior to waiting for it to unlock outside migrate_read_lock().

I couldn't quite get the point here. Why couldn't we simply lock out zspage migration?

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..05ff2315b7b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)

list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
list_del(&zspage->list);
+
+ migrate_read_lock(zspage);
lock_zspage(zspage);
+ migrate_read_unlock(zspage);

get_zspage_mapping(zspage, &class_idx, &fullness);
VM_BUG_ON(fullness != ZS_EMPTY);


> + */
> + while (1) {
> + migrate_read_lock(zspage);
> + page = get_first_page(zspage);
> + if (trylock_page(page))
> + break;
> + get_page(page);
> + migrate_read_unlock(zspage);
> + wait_on_page_locked(page);
> + put_page(page);
> + }
> +
> + curr_page = page;
> + while ((page = get_next_page(curr_page))) {
> + if (trylock_page(page)) {
> + curr_page = page;
> + } else {
> + get_page(page);
> + migrate_read_unlock(zspage);
> + wait_on_page_locked(page);
> + put_page(page);
> + migrate_read_lock(zspage);
> + }
> + }
> + migrate_read_unlock(zspage);
> }
>
> static int zs_init_fs_context(struct fs_context *fc)
> --
> 2.36.0
>

2022-05-12 08:54:54

by Sultan Alsawaf

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, May 11, 2022 at 11:01:01AM -0700, Minchan Kim wrote:
> On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> > Cc: [email protected]
> > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
>
> Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> unnecessary loops but not return -EBUSY if zspage is not inuse)?
> Because we didn't migrate ZS_EMPTY pages before.

Hi,

Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.

> I couldn't get the point here. Why couldn't we simple lock zspage migration?
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..05ff2315b7b1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
>
> list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> list_del(&zspage->list);
> +
> + migrate_read_lock(zspage);
> lock_zspage(zspage);
> + migrate_read_unlock(zspage);
>
> get_zspage_mapping(zspage, &class_idx, &fullness);
> VM_BUG_ON(fullness != ZS_EMPTY);

There are two problems with this:
1. migrate_read_lock() is a rwlock and lock_page() can sleep.
2. This will cause a deadlock because it violates the lock ordering in
zs_page_migrate(), since zs_page_migrate() takes migrate_write_lock() under
the page lock.
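To spell out the ordering problem with a rough sketch (the call sequences are
simplified):

    async_free_zspage() with the change above     zs_page_migrate()
    -----------------------------------------     -----------------
    migrate_read_lock(zspage);                     (page lock already held by the
    lock_zspage(zspage);                            migration core)
      lock_page(page);   /* sleeps waiting         migrate_write_lock(zspage);
                            for the page lock */     /* blocks on the rwlock we
                                                         hold for read */

Neither side can make progress (classic ABBA), and on top of that lock_page()
may sleep while we hold the non-sleeping rwlock.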

Sultan

2022-05-13 05:45:42

by Minchan Kim

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, May 11, 2022 at 01:43:22PM -0700, Andrew Morton wrote:
> On Wed, 11 May 2022 12:50:20 -0700 Sultan Alsawaf <[email protected]> wrote:
>
> > > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > > unnecessary loops but not return -EBUSY if zspage is not inuse)?
> > > Because we didn't migrate ZS_EMPTY pages before.
> >
> > Hi,
> >
> > Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.
>
> I updated the changelog, thanks.

Thanks, Andrew.

Feel free to include my

Acked-by: Minchan Kim <[email protected]>

2022-05-13 06:01:01

by Andrew Morton

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, 11 May 2022 12:50:20 -0700 Sultan Alsawaf <[email protected]> wrote:

> > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > unnecessary loops but not return -EBUSY if zspage is not inuse)?
> > Because we didn't migrate ZS_EMPTY pages before.
>
> Hi,
>
> Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.

I updated the changelog, thanks.

2022-05-13 09:37:39

by Minchan Kim

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, May 11, 2022 at 02:45:30PM -0700, Sultan Alsawaf wrote:
> On Wed, May 11, 2022 at 02:07:19PM -0700, Minchan Kim wrote:
> > Then, how about this?
>
> Your proposal is completely wrong still. My original patch is fine; we can stick
> with that.
>
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 9152fbde33b5..2f205c18aee4 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
> > * To prevent zspage destroy during migration, zspage freeing should
> > * hold locks of all pages in the zspage.
> > */
> > -static void lock_zspage(struct zspage *zspage)
> > +static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
> > {
> > - struct page *page = get_first_page(zspage);
> > -
> > + struct page *page;
> > + int nr_locked;
> > + struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
> > + struct address_space *mapping;
> > +retry:
> > + nr_locked = 0;
> > + memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));
>
> This memset() zeroes out memory past the end of the array because it is an array
> of pointers, not an array of page structs; the sizeof() is incorrect.
>
> > + page = get_first_page(zspage);
>
> You can't use get_first_page() outside of the migrate lock.
>
> > do {
> > lock_page(page);
>
> You can't lock a page that you don't own.

That's the key point I missed, and it's why my idea was wrong.

Thanks for the correction!

2022-05-13 15:06:55

by Sultan Alsawaf

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, May 11, 2022 at 02:07:19PM -0700, Minchan Kim wrote:
> Then, how about this?

Your proposal is still completely wrong. My original patch is fine; we can stick
with that.

> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..2f205c18aee4 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
> * To prevent zspage destroy during migration, zspage freeing should
> * hold locks of all pages in the zspage.
> */
> -static void lock_zspage(struct zspage *zspage)
> +static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
> {
> - struct page *page = get_first_page(zspage);
> -
> + struct page *page;
> + int nr_locked;
> + struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
> + struct address_space *mapping;
> +retry:
> + nr_locked = 0;
> + memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));

This memset() zeroes out memory past the end of the array because it is an array
of pointers, not an array of page structs; the sizeof() is incorrect.
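For reference, staying within the bounds of a pointer array would need something
like this (purely to illustrate the sizeof mistake, not an endorsement of the
approach):

    memset(locked_pages, 0, sizeof(locked_pages));
    /* or, equivalently */
    memset(locked_pages, 0, sizeof(*locked_pages) * ARRAY_SIZE(locked_pages));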

> + page = get_first_page(zspage);

You can't use get_first_page() outside of the migrate lock.

> do {
> lock_page(page);

You can't lock a page that you don't own.

> + locked_pages[nr_locked++] = page;
> + /*
> + * subpage in the zspage could be migrated under us so
> + * verify it. Once it happens, retry the lock sequence.
> + */
> + mapping = page_mapping(page)

This doesn't compile.

> + if (mapping != pool->inode->i_mapping ||
> + page_private(page) != (unsigned long)zspage) {
> + do {
> + unlock_page(locked_pages[--nr_locked]);
> + } while (nr_locked > 0)

This doesn't compile.

> + goto retry;
> + }

There's no need to unlock all of the pages that were locked so far because once
a page is locked, it cannot be migrated.
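(Background, as I understand the migration path, heavily simplified: the
migration core locks the page before it ever calls into zsmalloc's migratepage
callback, so a page we already hold locked cannot be replaced under us:)

    /* migration core, heavily simplified */
    lock_page(page);
    rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
                                            /* -> zs_page_migrate() */
    unlock_page(page);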

> } while ((page = get_next_page(page)) != NULL);
> }

You can't use get_next_page() outside of the migrate lock.

>
> @@ -1987,7 +2006,7 @@ static void async_free_zspage(struct work_struct *work)
>
> list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> list_del(&zspage->list);
> - lock_zspage(zspage);
> + lock_zspage(pool, zspage);
>
> get_zspage_mapping(zspage, &class_idx, &fullness);
> VM_BUG_ON(fullness != ZS_EMPTY);

Sultan

2022-05-14 02:20:14

by Minchan Kim

Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration

On Wed, May 11, 2022 at 12:50:20PM -0700, Sultan Alsawaf wrote:
> On Wed, May 11, 2022 at 11:01:01AM -0700, Minchan Kim wrote:
> > On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> > > Cc: [email protected]
> > > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> >
> > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > unnecessary loops but not return -EBUSY if zspage is not inuse)?
> > Because we didn't migrate ZS_EMPTY pages before.
>
> Hi,
>
> Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.
>
> > I couldn't get the point here. Why couldn't we simple lock zspage migration?
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 9152fbde33b5..05ff2315b7b1 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
> >
> > list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> > list_del(&zspage->list);
> > +
> > + migrate_read_lock(zspage);
> > lock_zspage(zspage);
> > + migrate_read_unlock(zspage);
> >
> > get_zspage_mapping(zspage, &class_idx, &fullness);
> > VM_BUG_ON(fullness != ZS_EMPTY);
>
> There are two problems with this:
> 1. migrate_read_lock() is a rwlock and lock_page() can sleep.
> 2. This will cause a deadlock because it violates the lock ordering in
> zs_page_migrate(), since zs_page_migrate() takes migrate_write_lock() under
> the page lock.
>

That's true. Thanks!

Then, how about this?

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..2f205c18aee4 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
* To prevent zspage destroy during migration, zspage freeing should
* hold locks of all pages in the zspage.
*/
-static void lock_zspage(struct zspage *zspage)
+static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
{
- struct page *page = get_first_page(zspage);
-
+ struct page *page;
+ int nr_locked;
+ struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
+ struct address_space *mapping;
+retry:
+ nr_locked = 0;
+ memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));
+ page = get_first_page(zspage);
do {
lock_page(page);
+ locked_pages[nr_locked++] = page;
+ /*
+ * subpage in the zspage could be migrated under us so
+ * verify it. Once it happens, retry the lock sequence.
+ */
+ mapping = page_mapping(page)
+ if (mapping != pool->inode->i_mapping ||
+ page_private(page) != (unsigned long)zspage) {
+ do {
+ unlock_page(locked_pages[--nr_locked]);
+ } while (nr_locked > 0)
+ goto retry;
+ }
} while ((page = get_next_page(page)) != NULL);
}

@@ -1987,7 +2006,7 @@ static void async_free_zspage(struct work_struct *work)

list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
list_del(&zspage->list);
- lock_zspage(zspage);
+ lock_zspage(pool, zspage);

get_zspage_mapping(zspage, &class_idx, &fullness);
VM_BUG_ON(fullness != ZS_EMPTY);