2024-04-13 02:24:18

by Yosry Ahmed

Subject: [PATCH v3 0/4] zswap same-filled and limit checking cleanups

Miscellaneous cleanups for limit checking and same-filled handling in
the store path. This series was broken out of the "zswap: store
zero-filled pages more efficiently" series [1]. It contains the cleanups
and drops the main functional changes.

[1]https://lore.kernel.org/lkml/[email protected]/

v2 -> v3:
- Dropped "mm: zswap: calculate limits only when updated" patch after
discussions with David Hildenbrand about the complexity of adding
notifiers for totalram_pages changes.
- Restored "mm: zswap: refactor limit checking from zswap_store()" to
the v1 version, and addressed Johannes's comment about deferring the
call to zswap_accept_thr_pages() to avoid the division when not
needed. Consequently, dropped the Ack and Review tags from v2.
- Added a separate comment header for same-filled pages in patch 2.
- Collected Acks and Review tags (thanks!).
- Rebased on top of the latest mm-unstable.

v2: https://lore.kernel.org/lkml/[email protected]/

Yosry Ahmed (4):
mm: zswap: always shrink in zswap_store() if zswap_pool_reached_full
mm: zswap: refactor limit checking from zswap_store()
mm: zswap: move more same-filled pages checks outside of zswap_store()
mm: zswap: remove same_filled module params

mm/zswap.c | 98 +++++++++++++++++++++++-------------------------------
1 file changed, 41 insertions(+), 57 deletions(-)

--
2.44.0.683.g7961c838ac-goog



2024-04-13 02:24:30

by Yosry Ahmed

Subject: [PATCH v3 1/4] mm: zswap: always shrink in zswap_store() if zswap_pool_reached_full

The cleanup code in zswap_store() is not pretty, particularly the
'shrink' label at the bottom that ends up jumping between cleanup
labels.

Instead of having a dedicated label to shrink the pool, just use
zswap_pool_reached_full directly to figure out if the pool needs
shrinking. zswap_pool_reached_full should be true if and only if the
pool needs shrinking.

The only caveat is that the value of zswap_pool_reached_full may be
changed by concurrent zswap_store() calls between checking the limit and
testing zswap_pool_reached_full in the cleanup code. This is fine
because:
- If zswap_pool_reached_full was true during limit checking but became
false by the time the cleanup code runs, then someone else already took
care of shrinking the pool and there is no need to queue the worker. If
anything, that is a welcome change.
- If zswap_pool_reached_full was false during limit checking but became
true by the time the cleanup code runs, then someone else hit the limit
in the meantime. In this case, both threads will try to queue the
worker, but it never gets queued more than once anyway. Also, calling
queue_work() multiple times when the limit is hit can already happen
today, so this isn't a significant change in any way.
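
To make the resulting flow easier to follow, here is a minimal
standalone C sketch of the store path after this patch. It is not the
kernel code: the helpers, globals, and hardcoded numbers are simplified
stand-ins used purely to illustrate how the 'reject' path now consults
zswap_pool_reached_full instead of jumping to a dedicated 'shrink'
label.

/*
 * Standalone sketch, NOT the kernel code: all helpers and values below
 * are stand-ins that only model the control flow of zswap_store().
 */
#include <stdbool.h>
#include <stdio.h>

static bool zswap_pool_reached_full;
static unsigned long zswap_pool_limit_hit;

static unsigned long total_pages(void)      { return 120; } /* pretend usage */
static unsigned long max_pages(void)        { return 100; } /* pretend limit */
static unsigned long accept_thr_pages(void) { return 90; }  /* pretend threshold */

static void queue_shrink_work(void)
{
        /* models queue_work(shrink_wq, &zswap_shrink_work) */
        printf("shrink worker queued\n");
}

static bool store_page(void)
{
        unsigned long cur_pages = total_pages();

        if (cur_pages >= max_pages()) {
                zswap_pool_limit_hit++;
                zswap_pool_reached_full = true;
                goto reject;            /* no dedicated 'shrink' label */
        }

        if (zswap_pool_reached_full) {
                if (cur_pages > accept_thr_pages())
                        goto reject;
                zswap_pool_reached_full = false;
        }

        /* ... entry allocation, compression and storage would go here ... */
        return true;

reject:
        /* the flag, not a separate jump target, decides whether to shrink */
        if (zswap_pool_reached_full)
                queue_shrink_work();
        return false;
}

int main(void)
{
        store_page();
        return 0;
}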

Signed-off-by: Yosry Ahmed <[email protected]>
Reviewed-by: Nhat Pham <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
---
mm/zswap.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 741957f36f38b..77b6bb2099763 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1429,12 +1429,12 @@ bool zswap_store(struct folio *folio)
if (cur_pages >= max_pages) {
zswap_pool_limit_hit++;
zswap_pool_reached_full = true;
- goto shrink;
+ goto reject;
}

if (zswap_pool_reached_full) {
if (cur_pages > zswap_accept_thr_pages())
- goto shrink;
+ goto reject;
else
zswap_pool_reached_full = false;
}
@@ -1540,6 +1540,8 @@ bool zswap_store(struct folio *folio)
zswap_entry_cache_free(entry);
reject:
obj_cgroup_put(objcg);
+ if (zswap_pool_reached_full)
+ queue_work(shrink_wq, &zswap_shrink_work);
check_old:
/*
* If the zswap store fails or zswap is disabled, we must invalidate the
@@ -1550,10 +1552,6 @@ bool zswap_store(struct folio *folio)
if (entry)
zswap_entry_free(entry);
return false;
-
-shrink:
- queue_work(shrink_wq, &zswap_shrink_work);
- goto reject;
}

bool zswap_load(struct folio *folio)
--
2.44.0.683.g7961c838ac-goog


2024-04-13 02:24:37

by Yosry Ahmed

Subject: [PATCH v3 2/4] mm: zswap: refactor limit checking from zswap_store()

Refactor the limit and acceptance threshold checking out of
zswap_store() into a helper. This code will be moved around in a
following patch, and it is cleaner to move a single function call than
an open-coded block.

Signed-off-by: Yosry Ahmed <[email protected]>
---
mm/zswap.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 77b6bb2099763..7cddbca3ac62c 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -517,6 +517,21 @@ unsigned long zswap_total_pages(void)
return total;
}

+static bool zswap_check_limits(void)
+{
+ unsigned long cur_pages = zswap_total_pages();
+ unsigned long max_pages = zswap_max_pages();
+
+ if (cur_pages >= max_pages) {
+ zswap_pool_limit_hit++;
+ zswap_pool_reached_full = true;
+ } else if (zswap_pool_reached_full &&
+ cur_pages <= zswap_accept_thr_pages()) {
+ zswap_pool_reached_full = false;
+ }
+ return zswap_pool_reached_full;
+}
+
/*********************************
* param callbacks
**********************************/
@@ -1399,7 +1414,6 @@ bool zswap_store(struct folio *folio)
struct zswap_entry *entry, *old;
struct obj_cgroup *objcg = NULL;
struct mem_cgroup *memcg = NULL;
- unsigned long max_pages, cur_pages;

VM_WARN_ON_ONCE(!folio_test_locked(folio));
VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1422,22 +1436,8 @@ bool zswap_store(struct folio *folio)
mem_cgroup_put(memcg);
}

- /* Check global limits */
- cur_pages = zswap_total_pages();
- max_pages = zswap_max_pages();
-
- if (cur_pages >= max_pages) {
- zswap_pool_limit_hit++;
- zswap_pool_reached_full = true;
+ if (zswap_check_limits())
goto reject;
- }
-
- if (zswap_pool_reached_full) {
- if (cur_pages > zswap_accept_thr_pages())
- goto reject;
- else
- zswap_pool_reached_full = false;
- }

/* allocate entry */
entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
--
2.44.0.683.g7961c838ac-goog


2024-04-13 02:24:54

by Yosry Ahmed

Subject: [PATCH v3 3/4] mm: zswap: move more same-filled pages checks outside of zswap_store()

Currently, zswap_store() checks zswap_same_filled_pages_enabled, kmaps
the folio, then calls zswap_is_page_same_filled() to check the folio
contents. Move this logic into the helper as well, and rename it to
zswap_is_folio_same_filled() since it now operates on a folio.

This makes zswap_store() cleaner and keeps future changes to this logic
contained within the helper.

While we are at it:
- Rename the insert_entry label to store_entry to match xa_store().
- Add comment headers for same-filled functions and the main API
functions (load, store, invalidate, swapon, swapoff).

No functional change intended.
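
For reference, here is a standalone userspace sketch of the detection
logic, not the kernel helper itself: it assumes a 4 KiB page of
word-sized values and leaves out the kmap_local_folio()/kunmap_local()
handling that the real zswap_is_folio_same_filled() needs. The first
word is compared against the last word first, which cheaply rejects
most non-uniform pages, and only then against every word in between.

/*
 * Standalone userspace model (not the kernel helper) of the
 * same-filled check over one page-sized buffer.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MODEL_PAGE_SIZE 4096UL

static bool is_same_filled(const unsigned long *page, unsigned long *value)
{
        unsigned long last_pos = MODEL_PAGE_SIZE / sizeof(*page) - 1;
        unsigned long val = page[0];
        unsigned long pos;

        /* cheap early exit: the last word usually differs for non-uniform data */
        if (val != page[last_pos])
                return false;

        for (pos = 1; pos < last_pos; pos++) {
                if (val != page[pos])
                        return false;
        }

        *value = val;
        return true;
}

int main(void)
{
        unsigned long buf[MODEL_PAGE_SIZE / sizeof(unsigned long)];
        unsigned long value;

        memset(buf, 0, sizeof(buf));
        printf("zero page same-filled: %d\n", is_same_filled(buf, &value));

        buf[17] = 0xdeadbeef;       /* dirty one word */
        printf("dirtied page same-filled: %d\n", is_same_filled(buf, &value));
        return 0;
}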

Signed-off-by: Yosry Ahmed <[email protected]>
Reviewed-by: Nhat Pham <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
---
mm/zswap.c | 45 +++++++++++++++++++++++++--------------------
1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 7cddbca3ac62c..f1d3204c604bd 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1376,26 +1376,35 @@ static void shrink_worker(struct work_struct *w)
} while (zswap_total_pages() > thr);
}

-static int zswap_is_page_same_filled(void *ptr, unsigned long *value)
+/*********************************
+* same-filled functions
+**********************************/
+static bool zswap_is_folio_same_filled(struct folio *folio, unsigned long *value)
{
unsigned long *page;
unsigned long val;
unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1;
+ bool ret = false;

- page = (unsigned long *)ptr;
+ if (!zswap_same_filled_pages_enabled)
+ return false;
+
+ page = kmap_local_folio(folio, 0);
val = page[0];

if (val != page[last_pos])
- return 0;
+ goto out;

for (pos = 1; pos < last_pos; pos++) {
if (val != page[pos])
- return 0;
+ goto out;
}

*value = val;
-
- return 1;
+ ret = true;
+out:
+ kunmap_local(page);
+ return ret;
}

static void zswap_fill_page(void *ptr, unsigned long value)
@@ -1406,6 +1415,9 @@ static void zswap_fill_page(void *ptr, unsigned long value)
memset_l(page, value, PAGE_SIZE / sizeof(unsigned long));
}

+/*********************************
+* main API
+**********************************/
bool zswap_store(struct folio *folio)
{
swp_entry_t swp = folio->swap;
@@ -1414,6 +1426,7 @@ bool zswap_store(struct folio *folio)
struct zswap_entry *entry, *old;
struct obj_cgroup *objcg = NULL;
struct mem_cgroup *memcg = NULL;
+ unsigned long value;

VM_WARN_ON_ONCE(!folio_test_locked(folio));
VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1446,19 +1459,11 @@ bool zswap_store(struct folio *folio)
goto reject;
}

- if (zswap_same_filled_pages_enabled) {
- unsigned long value;
- u8 *src;
-
- src = kmap_local_folio(folio, 0);
- if (zswap_is_page_same_filled(src, &value)) {
- kunmap_local(src);
- entry->length = 0;
- entry->value = value;
- atomic_inc(&zswap_same_filled_pages);
- goto insert_entry;
- }
- kunmap_local(src);
+ if (zswap_is_folio_same_filled(folio, &value)) {
+ entry->length = 0;
+ entry->value = value;
+ atomic_inc(&zswap_same_filled_pages);
+ goto store_entry;
}

if (!zswap_non_same_filled_pages_enabled)
@@ -1481,7 +1486,7 @@ bool zswap_store(struct folio *folio)
if (!zswap_compress(folio, entry))
goto put_pool;

-insert_entry:
+store_entry:
entry->swpentry = swp;
entry->objcg = objcg;

--
2.44.0.683.g7961c838ac-goog


2024-04-13 02:25:04

by Yosry Ahmed

Subject: [PATCH v3 4/4] mm: zswap: remove same_filled module params

These knobs offer more fine-grained control to userspace than needed and
directly expose/influence kernel implementation; remove them.

For disabling same_filled handling, there is no logical reason to refuse
storing same-filled pages more efficiently and opt for compression.
Scanning pages for patterns may be an argument, but the page contents
will be read into the CPU cache anyway during compression. Also,
removing the same_filled handling code does not move the needle
significantly in terms of performance anyway [1].

For disabling non_same_filled handling, it was added when the compressed
pages in zswap were not being properly charged to memcgs, as workloads
could escape the accounting with compression [2]. This is no longer the
case after commit f4840ccfca25 ("zswap: memcg accounting"), and using
zswap without compression does not make much sense.

[1]https://lore.kernel.org/lkml/CAJD7tkaySFP2hBQw4pnZHJJwe3bMdjJ1t9VC2VJd=khn1_TXvA@mail.gmail.com/
[2]https://lore.kernel.org/lkml/[email protected]/

Cc: "Maciej S. Szmigiero" <[email protected]>
Signed-off-by: Yosry Ahmed <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Reviewed-by: Nhat Pham <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
---
mm/zswap.c | 19 -------------------
1 file changed, 19 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index f1d3204c604bd..243a7c202c6c7 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -123,19 +123,6 @@ static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
uint, 0644);

-/*
- * Enable/disable handling same-value filled pages (enabled by default).
- * If disabled every page is considered non-same-value filled.
- */
-static bool zswap_same_filled_pages_enabled = true;
-module_param_named(same_filled_pages_enabled, zswap_same_filled_pages_enabled,
- bool, 0644);
-
-/* Enable/disable handling non-same-value filled pages (enabled by default) */
-static bool zswap_non_same_filled_pages_enabled = true;
-module_param_named(non_same_filled_pages_enabled, zswap_non_same_filled_pages_enabled,
- bool, 0644);
-
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32

@@ -1386,9 +1373,6 @@ static bool zswap_is_folio_same_filled(struct folio *folio, unsigned long *value
unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1;
bool ret = false;

- if (!zswap_same_filled_pages_enabled)
- return false;
-
page = kmap_local_folio(folio, 0);
val = page[0];

@@ -1466,9 +1450,6 @@ bool zswap_store(struct folio *folio)
goto store_entry;
}

- if (!zswap_non_same_filled_pages_enabled)
- goto freepage;
-
/* if entry is successfully added, it keeps the reference */
entry->pool = zswap_pool_current_get();
if (!entry->pool)
--
2.44.0.683.g7961c838ac-goog


2024-04-13 21:15:13

by Maciej S. Szmigiero

Subject: Re: [PATCH v3 4/4] mm: zswap: remove same_filled module params

On 13.04.2024 04:24, Yosry Ahmed wrote:
> These knobs offer more fine-grained control to userspace than needed and
> directly expose/influence kernel implementation; remove them.
>
> For disabling same_filled handling, there is no logical reason to refuse
> storing same-filled pages more efficiently and opt for compression.
> Scanning pages for patterns may be an argument, but the page contents
> will be read into the CPU cache anyway during compression. Also,
> removing the same_filled handling code does not move the needle
> significantly in terms of performance anyway [1].
>
> For disabling non_same_filled handling, it was added when the compressed
> pages in zswap were not being properly charged to memcgs, as workloads
> could escape the accounting with compression [2]. This is no longer the
> case after commit f4840ccfca25 ("zswap: memcg accounting"), and using
> zswap without compression does not make much sense.
>
> [1]https://lore.kernel.org/lkml/CAJD7tkaySFP2hBQw4pnZHJJwe3bMdjJ1t9VC2VJd=khn1_TXvA@mail.gmail.com/
> [2]https://lore.kernel.org/lkml/[email protected]/
>
> Cc: "Maciej S. Szmigiero" <[email protected]>
> Signed-off-by: Yosry Ahmed <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Reviewed-by: Nhat Pham <[email protected]>
> Reviewed-by: Chengming Zhou <[email protected]>
> ---

I think you need to update zswap kernel docs, too.

Thanks,
Maciej


2024-04-14 21:07:53

by Yosry Ahmed

Subject: Re: [PATCH v3 4/4] mm: zswap: remove same_filled module params

On Sat, Apr 13, 2024 at 10:56:47PM +0200, Maciej S. Szmigiero wrote:
> On 13.04.2024 04:24, Yosry Ahmed wrote:
> > These knobs offer more fine-grained control to userspace than needed and
> > directly expose/influence kernel implementation; remove them.
> >
> > For disabling same_filled handling, there is no logical reason to refuse
> > storing same-filled pages more efficiently and opt for compression.
> > Scanning pages for patterns may be an argument, but the page contents
> > will be read into the CPU cache anyway during compression. Also,
> > removing the same_filled handling code does not move the needle
> > significantly in terms of performance anyway [1].
> >
> > For disabling non_same_filled handling, it was added when the compressed
> > pages in zswap were not being properly charged to memcgs, as workloads
> > could escape the accounting with compression [2]. This is no longer the
> > case after commit f4840ccfca25 ("zswap: memcg accounting"), and using
> > zswap without compression does not make much sense.
> >
> > [1]https://lore.kernel.org/lkml/CAJD7tkaySFP2hBQw4pnZHJJwe3bMdjJ1t9VC2VJd=khn1_TXvA@mail.gmail.com/
> > [2]https://lore.kernel.org/lkml/[email protected]/
> >
> > Cc: "Maciej S. Szmigiero" <[email protected]>
> > Signed-off-by: Yosry Ahmed <[email protected]>
> > Acked-by: Johannes Weiner <[email protected]>
> > Reviewed-by: Nhat Pham <[email protected]>
> > Reviewed-by: Chengming Zhou <[email protected]>
> > ---
>
> I think you need to update zswap kernel docs, too.

Ah yes, I had local changes to update the docs but apparently never
committed them. Thanks for catching this.

Andrew, could you please fold in the following patch or merge it as a
separate one? Whatever you prefer. Thanks.

From 6191c528fee01a4c79b43c53ccbd0273b705965e Mon Sep 17 00:00:00 2001
From: Yosry Ahmed <[email protected]>
Date: Sun, 14 Apr 2024 21:03:43 +0000
Subject: [PATCH] mm: zswap: remove same_filled_pages from docs

The module parameters are now removed, so remove all references to them
from the kernel docs.

Signed-off-by: Yosry Ahmed <[email protected]>
---
Documentation/admin-guide/mm/zswap.rst | 29 -------------------
.../driver-api/crypto/iaa/iaa-crypto.rst | 2 --
2 files changed, 31 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index b42132969e315..59783134afbe9 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -111,35 +111,6 @@ checked if it is a same-value filled page before compressing it. If true, the
compressed length of the page is set to zero and the pattern or same-filled
value is stored.

-Same-value filled pages identification feature is enabled by default and can be
-disabled at boot time by setting the ``same_filled_pages_enabled`` attribute
-to 0, e.g. ``zswap.same_filled_pages_enabled=0``. It can also be enabled and
-disabled at runtime using the sysfs ``same_filled_pages_enabled``
-attribute, e.g.::
-
- echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled
-
-When zswap same-filled page identification is disabled at runtime, it will stop
-checking for the same-value filled pages during store operation.
-In other words, every page will be then considered non-same-value filled.
-However, the existing pages which are marked as same-value filled pages remain
-stored unchanged in zswap until they are either loaded or invalidated.
-
-In some circumstances it might be advantageous to make use of just the zswap
-ability to efficiently store same-filled pages without enabling the whole
-compressed page storage.
-In this case the handling of non-same-value pages by zswap (enabled by default)
-can be disabled by setting the ``non_same_filled_pages_enabled`` attribute
-to 0, e.g. ``zswap.non_same_filled_pages_enabled=0``.
-It can also be enabled and disabled at runtime using the sysfs
-``non_same_filled_pages_enabled`` attribute, e.g.::
-
- echo 1 > /sys/module/zswap/parameters/non_same_filled_pages_enabled
-
-Disabling both ``zswap.same_filled_pages_enabled`` and
-``zswap.non_same_filled_pages_enabled`` effectively disables accepting any new
-pages by zswap.
-
To prevent zswap from shrinking pool when zswap is full and there's a high
pressure on swap (this will result in flipping pages in and out zswap pool
without any real benefit but with a performance drop for the system), a
diff --git a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
index de587cf9cbed4..4cb1d52ea6dc4 100644
--- a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+++ b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
@@ -457,7 +457,6 @@ Use the following commands to enable zswap::
# echo deflate-iaa > /sys/module/zswap/parameters/compressor
# echo zsmalloc > /sys/module/zswap/parameters/zpool
# echo 1 > /sys/module/zswap/parameters/enabled
- # echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled
# echo 100 > /proc/sys/vm/swappiness
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo 1 > /proc/sys/vm/overcommit_memory
@@ -599,7 +598,6 @@ the 'fixed' compression mode::
echo deflate-iaa > /sys/module/zswap/parameters/compressor
echo zsmalloc > /sys/module/zswap/parameters/zpool
echo 1 > /sys/module/zswap/parameters/enabled
- echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled

echo 100 > /proc/sys/vm/swappiness
echo never > /sys/kernel/mm/transparent_hugepage/enabled
--
2.44.0.683.g7961c838ac-goog



2024-04-16 00:29:39

by Nhat Pham

Subject: Re: [PATCH v3 2/4] mm: zswap: refactor limit checking from zswap_store()

On Fri, Apr 12, 2024 at 7:24 PM Yosry Ahmed <[email protected]> wrote:
>
> Refactor limit and acceptance threshold checking outside of
> zswap_store(). This code will be moved around in a following patch, so
> it would be cleaner to move a function call around.
>
> Signed-off-by: Yosry Ahmed <[email protected]>

LGTM!
Reviewed-by: Nhat Pham <[email protected]>