2024-02-07 11:54:37

by Chengming Zhou

[permalink] [raw]
Subject: [PATCH v4] mm/zswap: invalidate old entry when store fail or !zswap_enabled

From: Chengming Zhou <[email protected]>

We may encounter a duplicate entry in zswap_store():

1. A swap slot freed to the per-cpu swap cache doesn't invalidate
the zswap entry, which may then get reused. This has been fixed.

2. In !exclusive load mode, a swapped-in folio leaves its zswap entry
on the tree, then gets swapped out again. This mode has been removed.

3. A folio can be dirtied again after zswap_store(), so it needs to be
zswap_store()'d again. This should be handled correctly.

So we must invalidate the old duplicate entry before inserting the
new one, which actually doesn't have to be done at the beginning
of zswap_store(). And since this is a normal situation, we shouldn't
WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems meant
to detect a swap entry UAF problem, but that's not really necessary here.)

The upside is that we don't need to lock the tree twice in the
store success path.

Note we still need to invalidate the old duplicate entry in the
store failure path; otherwise the new data in the swapfile could be
overwritten by the old data in the zswap pool during lru writeback.

We have to do this even when !zswap_enabled, since zswap can be
disabled at any time. If the folio was stored successfully before,
then got dirtied again while zswap was disabled, we won't invalidate
the old duplicate entry in zswap_store(), so later lru writeback
may overwrite the new data in the swapfile.

Fixes: 42c06a0e8ebe ("mm: kill frontswap")
Cc: <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Acked-by: Yosry Ahmed <[email protected]>
Acked-by: Chris Li <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
v4:
- VM_WARN_ON generates no code when !CONFIG_DEBUG_VM, so change
to use WARN_ON.

v3:
- Fix a few grammatical problems in comments, per Yosry.

v2:
- Change the duplicate entry invalidation loop to an if: since we hold
the lock, we won't find the entry again once we invalidate it, per Yosry.
- Add Fixes tag.
---
mm/zswap.c | 33 ++++++++++++++++-----------------
1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index cd67f7f6b302..62fe307521c9 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1518,18 +1518,8 @@ bool zswap_store(struct folio *folio)
return false;

if (!zswap_enabled)
- return false;
+ goto check_old;

- /*
- * If this is a duplicate, it must be removed before attempting to store
- * it, otherwise, if the store fails the old page won't be removed from
- * the tree, and it might be written back overriding the new data.
- */
- spin_lock(&tree->lock);
- entry = zswap_rb_search(&tree->rbroot, offset);
- if (entry)
- zswap_invalidate_entry(tree, entry);
- spin_unlock(&tree->lock);
objcg = get_obj_cgroup_from_folio(folio);
if (objcg && !obj_cgroup_may_zswap(objcg)) {
memcg = get_mem_cgroup_from_objcg(objcg);
@@ -1608,14 +1598,12 @@ bool zswap_store(struct folio *folio)
/* map */
spin_lock(&tree->lock);
/*
- * A duplicate entry should have been removed at the beginning of this
- * function. Since the swap entry should be pinned, if a duplicate is
- * found again here it means that something went wrong in the swap
- * cache.
+ * The folio may have been dirtied again, invalidate the
+ * possibly stale entry before inserting the new entry.
*/
- while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
- WARN_ON(1);
+ if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
zswap_invalidate_entry(tree, dupentry);
+ WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
}
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
@@ -1638,6 +1626,17 @@ bool zswap_store(struct folio *folio)
reject:
if (objcg)
obj_cgroup_put(objcg);
+check_old:
+ /*
+ * If the zswap store fails or zswap is disabled, we must invalidate the
+ * possibly stale entry which was previously stored at this offset.
+ * Otherwise, writeback could overwrite the new data in the swapfile.
+ */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ if (entry)
+ zswap_invalidate_entry(tree, entry);
+ spin_unlock(&tree->lock);
return false;

shrink:
--
2.40.1



2024-02-07 23:07:09

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v4] mm/zswap: invalidate old entry when store fail or !zswap_enabled

On Wed, Feb 7, 2024 at 3:54 AM <[email protected]> wrote:
>
> From: Chengming Zhou <[email protected]>
>
> We may encounter duplicate entry in the zswap_store():
>
> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
> the zswap entry, then got reused. This has been fixed.
>
> 2. !exclusive load mode, swapin folio will leave its zswap entry
> on the tree, then swapout again. This has been removed.
>
> 3. one folio can be dirtied again after zswap_store(), so need to
> zswap_store() again. This should be handled correctly.
>
> So we must invalidate the old duplicate entry before insert the
> new one, which actually doesn't have to be done at the beginning
> of zswap_store(). And this is a normal situation, we shouldn't
> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
> to detect swap entry UAF problem? But not very necessary here.)
>
> The good point is that we don't need to lock tree twice in the
> store success path.
>
> Note we still need to invalidate the old duplicate entry in the
> store failure path, otherwise the new data in swapfile could be
> overwrite by the old data in zswap pool when lru writeback.
>
> We have to do this even when !zswap_enabled since zswap can be
> disabled anytime. If the folio store success before, then got
> dirtied again but zswap disabled, we won't invalidate the old
> duplicate entry in the zswap_store(). So later lru writeback
> may overwrite the new data in swapfile.
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
> Cc: <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Acked-by: Yosry Ahmed <[email protected]>
> Acked-by: Chris Li <[email protected]>
> Signed-off-by: Chengming Zhou <[email protected]>

Acked-by: Nhat Pham <[email protected]>

Sorry for being late to the party, and thanks for fixing this, Chengming!

> ---
> v4:
> - VM_WARN_ON generate no code when !CONFIG_DEBUG_VM, change
> to use WARN_ON.
>
> v3:
> - Fix a few grammatical problems in comments, per Yosry.
>
> v2:
> - Change the duplicate entry invalidation loop to if, since we hold
> the lock, we won't find it once we invalidate it, per Yosry.
> - Add Fixes tag.
> ---
> mm/zswap.c | 33 ++++++++++++++++-----------------
> 1 file changed, 16 insertions(+), 17 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index cd67f7f6b302..62fe307521c9 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1518,18 +1518,8 @@ bool zswap_store(struct folio *folio)
> return false;
>
> if (!zswap_enabled)
> - return false;
> + goto check_old;
>
> - /*
> - * If this is a duplicate, it must be removed before attempting to store
> - * it, otherwise, if the store fails the old page won't be removed from
> - * the tree, and it might be written back overriding the new data.
> - */
> - spin_lock(&tree->lock);
> - entry = zswap_rb_search(&tree->rbroot, offset);
> - if (entry)
> - zswap_invalidate_entry(tree, entry);
> - spin_unlock(&tree->lock);
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> memcg = get_mem_cgroup_from_objcg(objcg);
> @@ -1608,14 +1598,12 @@ bool zswap_store(struct folio *folio)
> /* map */
> spin_lock(&tree->lock);
> /*
> - * A duplicate entry should have been removed at the beginning of this
> - * function. Since the swap entry should be pinned, if a duplicate is
> - * found again here it means that something went wrong in the swap
> - * cache.
> + * The folio may have been dirtied again, invalidate the
> + * possibly stale entry before inserting the new entry.
> */
> - while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> - WARN_ON(1);
> + if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> zswap_invalidate_entry(tree, dupentry);
> + WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> }
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> @@ -1638,6 +1626,17 @@ bool zswap_store(struct folio *folio)
> reject:
> if (objcg)
> obj_cgroup_put(objcg);
> +check_old:
> + /*
> + * If the zswap store fails or zswap is disabled, we must invalidate the
> + * possibly stale entry which was previously stored at this offset.
> + * Otherwise, writeback could overwrite the new data in the swapfile.
> + */
> + spin_lock(&tree->lock);
> + entry = zswap_rb_search(&tree->rbroot, offset);
> + if (entry)
> + zswap_invalidate_entry(tree, entry);
> + spin_unlock(&tree->lock);
> return false;
>
> shrink:
> --
> 2.40.1
>

2024-02-07 23:43:18

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v4] mm/zswap: invalidate old entry when store fail or !zswap_enabled

On Wed, 7 Feb 2024 11:54:06 +0000 [email protected] wrote:

> From: Chengming Zhou <[email protected]>
>
> We may encounter duplicate entry in the zswap_store():
>
> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
> the zswap entry, then got reused. This has been fixed.
>
> 2. !exclusive load mode, swapin folio will leave its zswap entry
> on the tree, then swapout again. This has been removed.
>
> 3. one folio can be dirtied again after zswap_store(), so need to
> zswap_store() again. This should be handled correctly.
>
> So we must invalidate the old duplicate entry before insert the
> new one, which actually doesn't have to be done at the beginning
> of zswap_store(). And this is a normal situation, we shouldn't
> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
> to detect swap entry UAF problem? But not very necessary here.)
>
> The good point is that we don't need to lock tree twice in the
> store success path.
>
> Note we still need to invalidate the old duplicate entry in the
> store failure path, otherwise the new data in swapfile could be
> overwrite by the old data in zswap pool when lru writeback.
>
> We have to do this even when !zswap_enabled since zswap can be
> disabled anytime. If the folio store success before, then got
> dirtied again but zswap disabled, we won't invalidate the old
> duplicate entry in the zswap_store(). So later lru writeback
> may overwrite the new data in swapfile.
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
> Cc: <[email protected]>

We have a patch ordering issue.

As a cc:stable hotfix, this should be merged into 6.8-rcX and later
backported into -stable trees. So it will go
mm-hotfixes-unstable->mm-hotfixes-stable->mainline. So someone has to
make this patch merge and work against latest mm-hotfixes-unstable.

The patch you sent appears to be based on linux-next, so it has
dependencies upon mm-unstable patches which won't be merged into
mainline until the next merge window.

So can you please redo and retest this against mm.git's
mm-hotfixes-unstable branch? Then I'll try to figure out how to merge
the gigantic pile of mm-unstable zswap changes on top of that.

Thanks.

2024-02-08 02:33:30

by Chengming Zhou

[permalink] [raw]
Subject: [PATCH mm-hotfixes-unstable] mm/zswap: invalidate duplicate entry when !zswap_enabled

From: Chengming Zhou <[email protected]>

We have to invalidate any duplicate entry even when !zswap_enabled,
since zswap can be disabled at any time. If the folio was stored
successfully before, then got dirtied again while zswap was disabled,
we won't invalidate the old duplicate entry in zswap_store(), so later
lru writeback may overwrite the new data in the swapfile.

Fixes: 42c06a0e8ebe ("mm: kill frontswap")
Cc: <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
mm/zswap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index fe7ee2640c69..32633d0597dc 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
if (folio_test_large(folio))
return false;

- if (!zswap_enabled || !tree)
+ if (!tree)
return false;

/*
@@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
spin_unlock(&tree->lock);
+
+ if (!zswap_enabled)
+ return false;
+
objcg = get_obj_cgroup_from_folio(folio);
if (objcg && !obj_cgroup_may_zswap(objcg)) {
memcg = get_mem_cgroup_from_objcg(objcg);
--
2.40.1


2024-02-08 02:35:13

by Chengming Zhou

[permalink] [raw]
Subject: Re: [PATCH v4] mm/zswap: invalidate old entry when store fail or !zswap_enabled

On 2024/2/8 07:06, Nhat Pham wrote:
> On Wed, Feb 7, 2024 at 3:54 AM <[email protected]> wrote:
>>
>> From: Chengming Zhou <[email protected]>
>>
>> We may encounter duplicate entry in the zswap_store():
>>
>> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
>> the zswap entry, then got reused. This has been fixed.
>>
>> 2. !exclusive load mode, swapin folio will leave its zswap entry
>> on the tree, then swapout again. This has been removed.
>>
>> 3. one folio can be dirtied again after zswap_store(), so need to
>> zswap_store() again. This should be handled correctly.
>>
>> So we must invalidate the old duplicate entry before insert the
>> new one, which actually doesn't have to be done at the beginning
>> of zswap_store(). And this is a normal situation, we shouldn't
>> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
>> to detect swap entry UAF problem? But not very necessary here.)
>>
>> The good point is that we don't need to lock tree twice in the
>> store success path.
>>
>> Note we still need to invalidate the old duplicate entry in the
>> store failure path, otherwise the new data in swapfile could be
>> overwrite by the old data in zswap pool when lru writeback.
>>
>> We have to do this even when !zswap_enabled since zswap can be
>> disabled anytime. If the folio store success before, then got
>> dirtied again but zswap disabled, we won't invalidate the old
>> duplicate entry in the zswap_store(). So later lru writeback
>> may overwrite the new data in swapfile.
>>
>> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
>> Cc: <[email protected]>
>> Acked-by: Johannes Weiner <[email protected]>
>> Acked-by: Yosry Ahmed <[email protected]>
>> Acked-by: Chris Li <[email protected]>
>> Signed-off-by: Chengming Zhou <[email protected]>
>
> Acked-by: Nhat Pham <[email protected]>
>
> Sorry for being late to the party, and thanks for fixing this, Chengming!

Thanks for your review! :)

2024-02-08 02:42:35

by Chengming Zhou

[permalink] [raw]
Subject: Re: [PATCH v4] mm/zswap: invalidate old entry when store fail or !zswap_enabled

On 2024/2/8 07:43, Andrew Morton wrote:
> On Wed, 7 Feb 2024 11:54:06 +0000 [email protected] wrote:
>
>> From: Chengming Zhou <[email protected]>
>>
>> We may encounter duplicate entry in the zswap_store():
>>
>> 1. swap slot that freed to per-cpu swap cache, doesn't invalidate
>> the zswap entry, then got reused. This has been fixed.
>>
>> 2. !exclusive load mode, swapin folio will leave its zswap entry
>> on the tree, then swapout again. This has been removed.
>>
>> 3. one folio can be dirtied again after zswap_store(), so need to
>> zswap_store() again. This should be handled correctly.
>>
>> So we must invalidate the old duplicate entry before insert the
>> new one, which actually doesn't have to be done at the beginning
>> of zswap_store(). And this is a normal situation, we shouldn't
>> WARN_ON(1) in this case, so delete it. (The WARN_ON(1) seems want
>> to detect swap entry UAF problem? But not very necessary here.)
>>
>> The good point is that we don't need to lock tree twice in the
>> store success path.
>>
>> Note we still need to invalidate the old duplicate entry in the
>> store failure path, otherwise the new data in swapfile could be
>> overwrite by the old data in zswap pool when lru writeback.
>>
>> We have to do this even when !zswap_enabled since zswap can be
>> disabled anytime. If the folio store success before, then got
>> dirtied again but zswap disabled, we won't invalidate the old
>> duplicate entry in the zswap_store(). So later lru writeback
>> may overwrite the new data in swapfile.
>>
>> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
>> Cc: <[email protected]>
>
> We have a patch ordering issue.
>
> As a cc:stable hotfix, this should be merged into 6.8-rcX and later
> backported into -stable trees. So it will go
> mm-hotfixes-unstable->mm-hotfixes-stable->mainline. So someone has to
> make this patch merge and work against latest mm-hotfixes-unstable.

Ah, right. I just sent a fix based on mm-hotfixes-unstable [1], which
is split from this patch to include only the bugfix, so it's easy to backport.

This patch actually includes two parts: the bugfix and a little optimization
for the zswap_store() normal case.

Should I split this patch into two small patches and resend based on
mm-unstable?

[1] https://lore.kernel.org/all/[email protected]/

>
> The patch you sent appears to be based on linux-next, so it has
> dependencies upon mm-unstable patches which won't be merged into
> mainline until the next merge window.
>
> So can you please redo and retest this against mm.git's
> mm-hotfixes-unstable branch? Then I'll try to figure out how to merge
> the gigentic pile of mm-unstable zswap changes on top of that.
>
> Thanks.

2024-02-08 13:51:26

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH mm-hotfixes-unstable] mm/zswap: invalidate duplicate entry when !zswap_enabled

On Thu, Feb 08, 2024 at 02:32:54AM +0000, [email protected] wrote:
> From: Chengming Zhou <[email protected]>
>
> We have to invalidate any duplicate entry even when !zswap_enabled
> since zswap can be disabled anytime. If the folio store success before,
> then got dirtied again but zswap disabled, we won't invalidate the old
> duplicate entry in the zswap_store(). So later lru writeback may
> overwrite the new data in swapfile.
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
> Cc: <[email protected]>
> Signed-off-by: Chengming Zhou <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

Nice, this is easier to backport and should be less disruptive to
mm-unstable as well. It makes sense to me to put the optimization and
cleanup that was cut out into a separate patch on top of mm-unstable.

> mm/zswap.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index fe7ee2640c69..32633d0597dc 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!zswap_enabled || !tree)
> + if (!tree)
> return false;
>
> /*
> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> +
> + if (!zswap_enabled)
> + return false;
> +
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> memcg = get_mem_cgroup_from_objcg(objcg);

2024-02-08 21:16:19

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH mm-hotfixes-unstable] mm/zswap: invalidate duplicate entry when !zswap_enabled

On Thu, 8 Feb 2024 02:32:54 +0000 [email protected] wrote:

> From: Chengming Zhou <[email protected]>
>
> We have to invalidate any duplicate entry even when !zswap_enabled
> since zswap can be disabled anytime. If the folio store success before,
> then got dirtied again but zswap disabled, we won't invalidate the old
> duplicate entry in the zswap_store(). So later lru writeback may
> overwrite the new data in swapfile.
>
> ...
>
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!zswap_enabled || !tree)
> + if (!tree)
> return false;
>
> /*
> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> +
> + if (!zswap_enabled)
> + return false;
> +
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> memcg = get_mem_cgroup_from_objcg(objcg);

OK, thanks.

I saw only one reject from mm-unstable patches. Your patch "mm/zswap:
make sure each swapfile always have zswap rb-tree" now does

--- a/mm/zswap.c~mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree
+++ a/mm/zswap.c
@@ -1518,9 +1518,6 @@ bool zswap_store(struct folio *folio)
if (folio_test_large(folio))
return false;

- if (!tree)
- return false;
-
/*
* If this is a duplicate, it must be removed before attempting to store
* it, otherwise, if the store fails the old page won't be removed from



2024-02-09 04:41:35

by Chengming Zhou

[permalink] [raw]
Subject: [PATCH mm-unstable] mm/zswap: optimize and cleanup the invalidation of duplicate entry

From: Chengming Zhou <[email protected]>

We may encounter a duplicate entry in zswap_store():

1. A swap slot freed to the per-cpu swap cache doesn't invalidate
the zswap entry, which may then get reused. This has been fixed.

2. In !exclusive load mode, a swapped-in folio leaves its zswap entry
on the tree, then gets swapped out again. This mode has been removed.

3. A folio can be dirtied again after zswap_store(), so it needs to be
zswap_store()'d again. This should be handled correctly.

So we must invalidate the old duplicate entry before inserting the
new one, which actually doesn't have to be done at the beginning
of zswap_store().

The upside is that we don't need to lock the tree twice in the
normal store success path. Also clean up the loop while we are here.

Note we still need to invalidate the old duplicate entry when the store
fails or zswap is disabled; otherwise the new data in the swapfile could
be overwritten by the old data in the zswap pool during lru writeback.

Acked-by: Johannes Weiner <[email protected]>
Acked-by: Yosry Ahmed <[email protected]>
Acked-by: Chris Li <[email protected]>
Acked-by: Nhat Pham <[email protected]>
Signed-off-by: Chengming Zhou <[email protected]>
---
mm/zswap.c | 34 ++++++++++++++++------------------
1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 96664cdee207..62fe307521c9 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1517,19 +1517,8 @@ bool zswap_store(struct folio *folio)
if (folio_test_large(folio))
return false;

- /*
- * If this is a duplicate, it must be removed before attempting to store
- * it, otherwise, if the store fails the old page won't be removed from
- * the tree, and it might be written back overriding the new data.
- */
- spin_lock(&tree->lock);
- entry = zswap_rb_search(&tree->rbroot, offset);
- if (entry)
- zswap_invalidate_entry(tree, entry);
- spin_unlock(&tree->lock);
-
if (!zswap_enabled)
- return false;
+ goto check_old;

objcg = get_obj_cgroup_from_folio(folio);
if (objcg && !obj_cgroup_may_zswap(objcg)) {
@@ -1609,14 +1598,12 @@ bool zswap_store(struct folio *folio)
/* map */
spin_lock(&tree->lock);
/*
- * A duplicate entry should have been removed at the beginning of this
- * function. Since the swap entry should be pinned, if a duplicate is
- * found again here it means that something went wrong in the swap
- * cache.
+ * The folio may have been dirtied again, invalidate the
+ * possibly stale entry before inserting the new entry.
*/
- while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
- WARN_ON(1);
+ if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
zswap_invalidate_entry(tree, dupentry);
+ WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
}
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
@@ -1639,6 +1626,17 @@ bool zswap_store(struct folio *folio)
reject:
if (objcg)
obj_cgroup_put(objcg);
+check_old:
+ /*
+ * If the zswap store fails or zswap is disabled, we must invalidate the
+ * possibly stale entry which was previously stored at this offset.
+ * Otherwise, writeback could overwrite the new data in the swapfile.
+ */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, offset);
+ if (entry)
+ zswap_invalidate_entry(tree, entry);
+ spin_unlock(&tree->lock);
return false;

shrink:
--
2.40.1


2024-02-09 04:51:06

by Chengming Zhou

[permalink] [raw]
Subject: Re: [PATCH mm-hotfixes-unstable] mm/zswap: invalidate duplicate entry when !zswap_enabled

On 2024/2/9 05:09, Andrew Morton wrote:
> On Thu, 8 Feb 2024 02:32:54 +0000 [email protected] wrote:
>
>> From: Chengming Zhou <[email protected]>
>>
>> We have to invalidate any duplicate entry even when !zswap_enabled
>> since zswap can be disabled anytime. If the folio store success before,
>> then got dirtied again but zswap disabled, we won't invalidate the old
>> duplicate entry in the zswap_store(). So later lru writeback may
>> overwrite the new data in swapfile.
>>
>> ...
>>
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
>> if (folio_test_large(folio))
>> return false;
>>
>> - if (!zswap_enabled || !tree)
>> + if (!tree)
>> return false;
>>
>> /*
>> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
>> zswap_invalidate_entry(tree, dupentry);
>> }
>> spin_unlock(&tree->lock);
>> +
>> + if (!zswap_enabled)
>> + return false;
>> +
>> objcg = get_obj_cgroup_from_folio(folio);
>> if (objcg && !obj_cgroup_may_zswap(objcg)) {
>> memcg = get_mem_cgroup_from_objcg(objcg);
>
> OK, thanks.
>
> I saw only one reject from mm-unstable patches. Your patch "mm/zswap:
> make sure each swapfile always have zswap rb-tree" now does

It's correct. Thanks!

The other patch, which includes the optimization and cleanup, has been
updated based on mm-unstable and just resent:

https://lore.kernel.org/all/[email protected]/

>
> --- a/mm/zswap.c~mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree
> +++ a/mm/zswap.c
> @@ -1518,9 +1518,6 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!tree)
> - return false;
> -
> /*
> * If this is a duplicate, it must be removed before attempting to store
> * it, otherwise, if the store fails the old page won't be removed from
>
>