2023-10-17 23:22:24

by Nhat Pham

Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

Changelog:
v3:
* Add a patch to export per-cgroup zswap writeback counters
* Add a patch to update zswap's kselftest
* Separate the new list_lru functions into its own prep patch
* Do not start from the top of the hierarchy when encountering a memcg
that is not online for the global limit zswap writeback (patch 2)
(suggested by Yosry Ahmed)
* Do not remove the swap entry from list_lru in
__read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
* Remove a redundant zswap pool get (patch 2)
(reported by Ryan Roberts)
* Use atomic for the nr_zswap_protected (instead of lruvec's lock)
(patch 5) (suggested by Yosry Ahmed)
* Remove the per-cgroup zswap shrinker knob (patch 5)
(suggested by Yosry Ahmed)
v2:
* Fix loongarch compiler errors
* Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM

There are currently several issues with zswap writeback:

1. There is only a single global LRU for zswap, making it impossible to
perform workload-specific shrinking - a memcg under memory pressure
cannot determine which pages in the pool it owns, and often ends up
writing pages from other memcgs. This issue has been previously
observed in practice and mitigated by simply disabling
memcg-initiated shrinking:

https://lore.kernel.org/all/[email protected]/T/#u

But this solution leaves a lot to be desired, as we still do not
have an avenue for a memcg to free up its own memory locked up in
the zswap pool.

2. We only shrink the zswap pool when the user-defined limit is hit.
This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed
ahead of time, as this depends on the workload (specifically, on
factors such as memory access patterns and compressibility of the
memory pages).

This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.

As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write them out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.

Domenico Cerasuolo (3):
zswap: make shrinking memcg-aware
mm: memcg: add per-memcg zswap writeback stat
selftests: cgroup: update per-memcg zswap writeback selftest

Nhat Pham (2):
mm: list_lru: allow external numa node and cgroup tracking
zswap: shrinks zswap pool based on memory pressure

Documentation/admin-guide/mm/zswap.rst | 7 +
include/linux/list_lru.h | 38 +++
include/linux/memcontrol.h | 7 +
include/linux/mmzone.h | 14 +
mm/list_lru.c | 43 ++-
mm/memcontrol.c | 15 +
mm/mmzone.c | 3 +
mm/swap.h | 3 +-
mm/swap_state.c | 38 ++-
mm/zswap.c | 335 ++++++++++++++++----
tools/testing/selftests/cgroup/test_zswap.c | 74 +++--
11 files changed, 485 insertions(+), 92 deletions(-)

--
2.34.1


2023-10-17 23:22:47

by Nhat Pham

Subject: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure

Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that
is initiated when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.

Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics:

* Estimate the number of warm pages residing in zswap, and attempt to
protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
saving factor. The better zswap compresses the data, the fewer pages
we will evict to swap (as we will otherwise incur IO for relatively
small memory saving); a toy calculation illustrating this follows
the list below.
* During reclaim, if the shrinker encounters a page that is also being
brought into memory, the shrinker will cautiously terminate its
shrinking action, as this is a sign that it is touching the warmer
region of the zswap LRU.
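
To make the scaling heuristic concrete, here is a toy, standalone
calculation (plain userspace C with invented numbers; it only mirrors
the arithmetic of the count callback, it is not kernel code). With 1000
entries on a memcg's zswap LRU, 200 of them estimated as protected, and
roughly 3:1 compression (nr_backing about a third of nr_stored), the
shrinker reports about 266 freeable objects rather than 1000:

#include <stdio.h>

/* Same idea as the kernel's mult_frac() macro: x * num / den without
 * overflowing the intermediate product. */
static unsigned long mult_frac(unsigned long x, unsigned long num,
                               unsigned long den)
{
        return x / den * num + x % den * num / den;
}

int main(void)
{
        unsigned long lru_size = 1000, nr_protected = 200;
        unsigned long nr_stored = 900, nr_backing = 300; /* ~3:1 compression */
        unsigned long nr_freeable;

        /* Do not count the estimated-warm (protected) tail of the LRU. */
        nr_freeable = lru_size > nr_protected ? lru_size - nr_protected : 0;

        /* Scale by the memory saving factor nr_backing / nr_stored. */
        printf("reported freeable: %lu\n",
               mult_frac(nr_freeable, nr_backing, nr_stored)); /* 266 */
        return 0;
}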

As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write them out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.

Signed-off-by: Nhat Pham <[email protected]>
---
Documentation/admin-guide/mm/zswap.rst | 7 ++
include/linux/mmzone.h | 14 +++
mm/mmzone.c | 3 +
mm/swap_state.c | 21 +++-
mm/zswap.c | 161 +++++++++++++++++++++++--
5 files changed, 196 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..522ae22ccb84 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,13 @@ attribute, e. g.::

Setting this parameter to 100 will disable the hysteresis.

+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+User can enable it as follows:
+
+ echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
A debugfs interface is provided for various statistic about pool size, number
of pages stored, same-value filled pages and various counters for the reasons
pages are rejected.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 486587fcd27f..8947a1bfbe9c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
#ifdef CONFIG_MEMCG
struct pglist_data *pgdat;
#endif
+#ifdef CONFIG_ZSWAP
+ /*
+ * Number of pages in zswap that should be protected from the shrinker.
+ * This number is an estimate of the following counts:
+ *
+ * a) Recent page faults.
+ * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+ * as well as recent zswap LRU rotations.
+ *
+ * These pages are likely to be warm, and might incur IO if they are written
+ * to swap.
+ */
+ atomic_long_t nr_zswap_protected;
+#endif
};

/* Isolate for asynchronous migration */
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..4137f3ac42cd 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)

memset(lruvec, 0, sizeof(struct lruvec));
spin_lock_init(&lruvec->lru_lock);
+#ifdef CONFIG_ZSWAP
+ atomic_long_set(&lruvec->nr_zswap_protected, 0);
+#endif

for_each_lru(lru)
INIT_LIST_HEAD(&lruvec->lists[lru]);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0356df52b06a..a60197b55a28 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
lru_add_drain(); /* Push any new pages onto the LRU now */
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+ page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page) {
+ struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+ atomic_long_inc(&lruvec->nr_zswap_protected);
+ }
+#endif
+ return page;
}

int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
lru_add_drain();
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
- NULL);
+ page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page) {
+ struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+ atomic_long_inc(&lruvec->nr_zswap_protected);
+ }
+#endif
+ return page;
}

/**
diff --git a/mm/zswap.c b/mm/zswap.c
index 15485427e3fa..1d1fe75a5237 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32

+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
/*********************************
* data structures
**********************************/
@@ -174,6 +178,8 @@ struct zswap_pool {
char tfm_name[CRYPTO_MAX_ALG_NAME];
struct list_lru list_lru;
struct mem_cgroup *next_shrink;
+ struct shrinker *shrinker;
+ atomic_t nr_stored;
};

/*
@@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
}

+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+ u64 pool_size = 0;
+ int i;
+
+ for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+ pool_size += zpool_get_total_size(pool->zpools[i]);
+
+ return pool_size;
+}
+
static void zswap_update_total_size(void)
{
struct zswap_pool *pool;
u64 total = 0;
- int i;

rcu_read_lock();

list_for_each_entry_rcu(pool, &zswap_pools, list)
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- total += zpool_get_total_size(pool->zpools[i]);
+ total += get_zswap_pool_size(pool);

rcu_read_unlock();

@@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
- bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
-
+ int nid = entry_to_nid(entry);
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+ bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
+ unsigned long lru_size, old, new;
+
+ if (added) {
+ lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
+ old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
+
+ /*
+ * Decay to avoid overflow and adapt to changing workloads.
+ * This is based on LRU reclaim cost decaying heuristics.
+ */
+ do {
+ new = old > lru_size / 4 ? old / 2 : old;
+ } while (
+ !atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
+ }
mem_cgroup_put(memcg);
return added;
}
@@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
else {
zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
+ atomic_dec(&entry->pool->nr_stored);
zswap_pool_put(entry->pool);
}
zswap_entry_cache_free(entry);
@@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
return entry;
}

+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+ unsigned long shrink_ret, nr_protected, lru_size;
+ struct zswap_pool *pool = shrinker->private_data;
+ bool encountered_page_in_swapcache = false;
+
+ nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+ lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+ /*
+ * Abort if the shrinker is disabled or if we are shrinking into the
+ * protected region.
+ */
+ if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
+ shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+ &encountered_page_in_swapcache);
+
+ if (encountered_page_in_swapcache)
+ return SHRINK_STOP;
+
+ return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = shrinker->private_data;
+ struct mem_cgroup *memcg = sc->memcg;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+ unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+ cgroup_rstat_flush(memcg->css.cgroup);
+ nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+ nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+ /* use pool stats instead of memcg stats */
+ nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+ nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+ if (!zswap_shrinker_enabled || !nr_stored)
+ return 0;
+
+ nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+ nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+ /*
+ * Subtract from the lru size an estimate of the number of pages
+ * that should be protected.
+ */
+ nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+ /*
+ * Scale the number of freeable pages by the memory saving factor.
+ * This ensures that the better zswap compresses memory, the fewer
+ * pages we will evict to swap (as it will otherwise incur IO for
+ * relatively small memory saving).
+ */
+ return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+ pool->shrinker =
+ shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+ if (!pool->shrinker)
+ return;
+
+ pool->shrinker->private_data = pool;
+ pool->shrinker->scan_objects = zswap_shrinker_scan;
+ pool->shrinker->count_objects = zswap_shrinker_count;
+ pool->shrinker->batch = 0;
+ pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
/*********************************
* per-cpu code
**********************************/
@@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
spinlock_t *lock, void *arg)
{
struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ bool *encountered_page_in_swapcache = (bool *)arg;
struct mem_cgroup *memcg;
struct zswap_tree *tree;
+ struct lruvec *lruvec;
pgoff_t swpoffset;
enum lru_status ret = LRU_REMOVED_RETRY;
int writeback_result;
@@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
/* we cannot use zswap_lru_add here, because it increments node's lru count */
list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
spin_unlock(lock);
- mem_cgroup_put(memcg);
ret = LRU_RETRY;
+
+ /*
+ * Encountering a page already in swap cache is a sign that we are shrinking
+ * into the warmer region. We should terminate shrinking (if we're in the dynamic
+ * shrinker context).
+ */
+ if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+ ret = LRU_SKIP;
+ *encountered_page_in_swapcache = true;
+ }
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+ /* Increment the protection area to account for the LRU rotation. */
+ atomic_long_inc(&lruvec->nr_zswap_protected);
+
+ mem_cgroup_put(memcg);
goto put_unlock;
}
zswap_written_back_pages++;
@@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
&pool->node);
if (ret)
goto error;
+
+ zswap_alloc_shrinker(pool);
+ if (!pool->shrinker)
+ goto error;
+
pr_debug("using %s compressor\n", pool->tfm_name);

/* being the current pool takes 1 ref; this func expects the
@@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- list_lru_init_memcg(&pool->list_lru, NULL);
+ if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+ goto lru_fail;
+ shrinker_register(pool->shrinker);
INIT_WORK(&pool->shrink_work, shrink_worker);

zswap_pool_debug("created", pool);

return pool;

+lru_fail:
+ list_lru_destroy(&pool->list_lru);
+ shrinker_free(pool->shrinker);
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)

zswap_pool_debug("destroying", pool);

+ shrinker_free(pool->shrinker);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
list_lru_destroy(&pool->list_lru);
@@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
zswap_lru_add(&pool->list_lru, entry);
+ atomic_inc(&pool->nr_stored);
}
spin_unlock(&tree->lock);

--
2.34.1

2023-10-17 23:22:49

by Nhat Pham

Subject: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

From: Domenico Cerasuolo <[email protected]>

Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.
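
For reference, the new counter appears as a "zswap_wb" key in each
cgroup's memory.stat file on cgroup v2. A minimal userspace sketch for
reading it (the cgroup path below is only an example) might look like:

#include <stdio.h>
#include <string.h>

int main(void)
{
        /* Example cgroup path; adjust to the memcg under test. */
        FILE *f = fopen("/sys/fs/cgroup/test/memory.stat", "r");
        char line[256];

        if (!f)
                return 1;
        /* Print only the "zswap_wb <count>" line. */
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "zswap_wb ", 9))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}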

Suggested-by: Nhat Pham <[email protected]>
Signed-off-by: Domenico Cerasuolo <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
include/linux/memcontrol.h | 2 ++
mm/memcontrol.c | 15 +++++++++++++++
mm/zswap.c | 3 +++
3 files changed, 20 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3de10fabea0f..7868b1e00bf5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -38,6 +38,7 @@ enum memcg_stat_item {
MEMCG_KMEM,
MEMCG_ZSWAP_B,
MEMCG_ZSWAPPED,
+ MEMCG_ZSWAP_WB,
MEMCG_NR_STAT,
};

@@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
#else
static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1bde67b29287..a9118871e5a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
{ "zswap", MEMCG_ZSWAP_B },
{ "zswapped", MEMCG_ZSWAPPED },
+ { "zswap_wb", MEMCG_ZSWAP_WB },
#endif
{ "file_mapped", NR_FILE_MAPPED },
{ "file_dirty", NR_FILE_DIRTY },
@@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
switch (item) {
case MEMCG_PERCPU_B:
case MEMCG_ZSWAP_B:
+ case MEMCG_ZSWAP_WB:
case NR_SLAB_RECLAIMABLE_B:
case NR_SLAB_UNRECLAIMABLE_B:
case WORKINGSET_REFAULT_ANON:
@@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
rcu_read_unlock();
}

+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
+{
+ struct mem_cgroup *memcg;
+
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ return;
+
+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(objcg);
+ mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
+ rcu_read_unlock();
+}
+
static u64 zswap_current_read(struct cgroup_subsys_state *css,
struct cftype *cft)
{
diff --git a/mm/zswap.c b/mm/zswap.c
index d2989ad11814..15485427e3fa 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
}
zswap_written_back_pages++;

+ if (entry->objcg)
+ obj_cgroup_report_zswap_wb(entry->objcg);
+
/*
* Writeback started successfully, the page now belongs to the
* swapcache. Drop the entry from zswap - unless invalidate already
--
2.34.1

2023-10-17 23:22:51

by Nhat Pham

Subject: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest

From: Domenico Cerasuolo <[email protected]>

The memcg-zswap selftest is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..11271fabeffc 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
return read_int("/sys/kernel/debug/zswap/stored_pages", value);
}

-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
{
- return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+ return cg_read_key_long(cg, "memory.stat", "zswap_wb");
}

static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
return 0;
}

+static char *setup_test_group_1M(const char *root, const char *name)
+{
+ char *group_name = cg_name(root, name);
+
+ if (!group_name)
+ return NULL;
+ if (cg_create(group_name))
+ goto fail;
+ if (cg_write(group_name, "memory.max", "1M")) {
+ cg_destroy(group_name);
+ goto fail;
+ }
+ return group_name;
+fail:
+ free(group_name);
+ return NULL;
+}
+
/*
* When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
*/
static int test_no_invasive_cgroup_shrink(const char *root)
{
- size_t written_back_before, written_back_after;
int ret = KSFT_FAIL;
- char *test_group;
+ size_t control_allocation_size = MB(10);
+ char *control_allocation, *wb_group = NULL, *control_group = NULL;

/* Set up */
- test_group = cg_name(root, "no_shrink_test");
- if (!test_group)
- goto out;
- if (cg_create(test_group))
+ wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+ if (!wb_group)
+ return KSFT_FAIL;
+ if (cg_write(wb_group, "memory.zswap.max", "10K"))
goto out;
- if (cg_write(test_group, "memory.max", "1M"))
+ control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+ if (!control_group)
goto out;
- if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+ /* Push some control_group memory into zswap */
+ if (cg_enter_current(control_group))
goto out;
- if (get_zswap_written_back_pages(&written_back_before))
+ control_allocation = malloc(control_allocation_size);
+ for (int i = 0; i < control_allocation_size; i += 4095)
+ control_allocation[i] = 'a';
+ if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
goto out;

- /* Allocate 10x memory.max to push memory into zswap */
- if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+ /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+ if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
goto out;

- /* Verify that no writeback happened because of the memcg allocation */
- if (get_zswap_written_back_pages(&written_back_after))
- goto out;
- if (written_back_after == written_back_before)
+ /* Verify that only zswapped memory from wb_group has been written back */
+ if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
ret = KSFT_PASS;
out:
- cg_destroy(test_group);
- free(test_group);
+ cg_enter_current(root);
+ if (control_group) {
+ cg_destroy(control_group);
+ free(control_group);
+ }
+ cg_destroy(wb_group);
+ free(wb_group);
+ if (control_allocation)
+ free(control_allocation);
return ret;
}

--
2.34.1

2023-10-17 23:35:15

by Nhat Pham

Subject: Re: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
>
> From: Domenico Cerasuolo <[email protected]>
>
> The memcg-zswap self test is updated to adjust to the behavior change
> implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
> where zswap performs writeback for specific memcg.
>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>

/s/Signed-off/Acked
This is Domenico's work :) I used the wrong tag here. Should be:
Acked-by: Nhat Pham <[email protected]>

> ---
> tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
> 1 file changed, 50 insertions(+), 24 deletions(-)
>
> diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
> index 49def87a909b..11271fabeffc 100644
> --- a/tools/testing/selftests/cgroup/test_zswap.c
> +++ b/tools/testing/selftests/cgroup/test_zswap.c
> @@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
> return read_int("/sys/kernel/debug/zswap/stored_pages", value);
> }
>
> -static int get_zswap_written_back_pages(size_t *value)
> +static int get_cg_wb_count(const char *cg)
> {
> - return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
> + return cg_read_key_long(cg, "memory.stat", "zswap_wb");
> }
>
> static int allocate_bytes(const char *cgroup, void *arg)
> @@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
> return 0;
> }
>
> +static char *setup_test_group_1M(const char *root, const char *name)
> +{
> + char *group_name = cg_name(root, name);
> +
> + if (!group_name)
> + return NULL;
> + if (cg_create(group_name))
> + goto fail;
> + if (cg_write(group_name, "memory.max", "1M")) {
> + cg_destroy(group_name);
> + goto fail;
> + }
> + return group_name;
> +fail:
> + free(group_name);
> + return NULL;
> +}
> +
> /*
> * When trying to store a memcg page in zswap, if the memcg hits its memory
> - * limit in zswap, writeback should not be triggered.
> - *
> - * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
> - * not zswap"). Needs to be revised when a per memcg writeback mechanism is
> - * implemented.
> + * limit in zswap, writeback should affect only the zswapped pages of that
> + * memcg.
> */
> static int test_no_invasive_cgroup_shrink(const char *root)
> {
> - size_t written_back_before, written_back_after;
> int ret = KSFT_FAIL;
> - char *test_group;
> + size_t control_allocation_size = MB(10);
> + char *control_allocation, *wb_group = NULL, *control_group = NULL;
>
> /* Set up */
> - test_group = cg_name(root, "no_shrink_test");
> - if (!test_group)
> - goto out;
> - if (cg_create(test_group))
> + wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
> + if (!wb_group)
> + return KSFT_FAIL;
> + if (cg_write(wb_group, "memory.zswap.max", "10K"))
> goto out;
> - if (cg_write(test_group, "memory.max", "1M"))
> + control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
> + if (!control_group)
> goto out;
> - if (cg_write(test_group, "memory.zswap.max", "10K"))
> +
> + /* Push some test_group2 memory into zswap */
> + if (cg_enter_current(control_group))
> goto out;
> - if (get_zswap_written_back_pages(&written_back_before))
> + control_allocation = malloc(control_allocation_size);
> + for (int i = 0; i < control_allocation_size; i += 4095)
> + control_allocation[i] = 'a';
> + if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> goto out;
>
> - /* Allocate 10x memory.max to push memory into zswap */
> - if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
> + /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
> + if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
> goto out;
>
> - /* Verify that no writeback happened because of the memcg allocation */
> - if (get_zswap_written_back_pages(&written_back_after))
> - goto out;
> - if (written_back_after == written_back_before)
> + /* Verify that only zswapped memory from gwb_group has been written back */
> + if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
> ret = KSFT_PASS;
> out:
> - cg_destroy(test_group);
> - free(test_group);
> + cg_enter_current(root);
> + if (control_group) {
> + cg_destroy(control_group);
> + free(control_group);
> + }
> + cg_destroy(wb_group);
> + free(wb_group);
> + if (control_allocation)
> + free(control_allocation);
> return ret;
> }
>
> --
> 2.34.1

2023-10-17 23:35:58

by Nhat Pham

Subject: Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
>
> From: Domenico Cerasuolo <[email protected]>
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show writebacks count for each memcg.
>
> Suggested-by: Nhat Pham <[email protected]>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>

/s/Signed-off/Acked
This is Domenico's work :) I used the wrong tag here. Should be:
Acked-by: Nhat Pham <[email protected]>

> ---
> include/linux/memcontrol.h | 2 ++
> mm/memcontrol.c | 15 +++++++++++++++
> mm/zswap.c | 3 +++
> 3 files changed, 20 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3de10fabea0f..7868b1e00bf5 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -38,6 +38,7 @@ enum memcg_stat_item {
> MEMCG_KMEM,
> MEMCG_ZSWAP_B,
> MEMCG_ZSWAPPED,
> + MEMCG_ZSWAP_WB,
> MEMCG_NR_STAT,
> };
>
> @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
> bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
> void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
> void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
> #else
> static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
> {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1bde67b29287..a9118871e5a6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> { "zswap", MEMCG_ZSWAP_B },
> { "zswapped", MEMCG_ZSWAPPED },
> + { "zswap_wb", MEMCG_ZSWAP_WB },
> #endif
> { "file_mapped", NR_FILE_MAPPED },
> { "file_dirty", NR_FILE_DIRTY },
> @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
> switch (item) {
> case MEMCG_PERCPU_B:
> case MEMCG_ZSWAP_B:
> + case MEMCG_ZSWAP_WB:
> case NR_SLAB_RECLAIMABLE_B:
> case NR_SLAB_UNRECLAIMABLE_B:
> case WORKINGSET_REFAULT_ANON:
> @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
> rcu_read_unlock();
> }
>
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> +{
> + struct mem_cgroup *memcg;
> +
> + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> + return;
> +
> + rcu_read_lock();
> + memcg = obj_cgroup_memcg(objcg);
> + mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> + rcu_read_unlock();
> +}
> +
> static u64 zswap_current_read(struct cgroup_subsys_state *css,
> struct cftype *cft)
> {
> diff --git a/mm/zswap.c b/mm/zswap.c
> index d2989ad11814..15485427e3fa 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> }
> zswap_written_back_pages++;
>
> + if (entry->objcg)
> + obj_cgroup_report_zswap_wb(entry->objcg);
> +
> /*
> * Writeback started successfully, the page now belongs to the
> * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1

2023-10-17 23:38:34

by Jeff Johnson

Subject: Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

On 10/17/2023 4:35 PM, Nhat Pham wrote:
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
>>
>> From: Domenico Cerasuolo <[email protected]>
>>
>> Since zswap now writes back pages from memcg-specific LRUs, we now need a
>> new stat to show writebacks count for each memcg.
>>
>> Suggested-by: Nhat Pham <[email protected]>
>> Signed-off-by: Domenico Cerasuolo <[email protected]>
>> Signed-off-by: Nhat Pham <[email protected]>
>
> /s/Signed-off/Acked
> This is Domenico's work :) I used the wrong tag here. Should be:
> Acked-by: Nhat Pham <[email protected]>

no, since you are posting the patch, you have to sign off on it.
Signed-off-by is correct

2023-10-17 23:41:09

by Nhat Pham

Subject: Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

On Tue, Oct 17, 2023 at 4:38 PM Jeff Johnson <[email protected]> wrote:
>
> On 10/17/2023 4:35 PM, Nhat Pham wrote:
> > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
> >>
> >> From: Domenico Cerasuolo <[email protected]>
> >>
> >> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> >> new stat to show writebacks count for each memcg.
> >>
> >> Suggested-by: Nhat Pham <[email protected]>
> >> Signed-off-by: Domenico Cerasuolo <[email protected]>
> >> Signed-off-by: Nhat Pham <[email protected]>
> >
> > /s/Signed-off/Acked
> > This is Domenico's work :) I used the wrong tag here. Should be:
> > Acked-by: Nhat Pham <[email protected]>
>
> no, since you are posting the patch, you have to sign off on it.
> Signed-off-by is correct
>

Ah so past Nhat was right all along. Ignore my comments then.
Thanks for letting me know, Jeff!

2023-10-17 23:44:38

by Nhat Pham

Subject: Re: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest

On Tue, Oct 17, 2023 at 4:34 PM Nhat Pham <[email protected]> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
> >
> > From: Domenico Cerasuolo <[email protected]>
> >
> > The memcg-zswap self test is updated to adjust to the behavior change
> > implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
> > where zswap performs writeback for specific memcg.
> >
> > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
>
> /s/Signed-off/Acked
> This is Domenico's work :) I used the wrong tag here. Should be:
> Acked-by: Nhat Pham <[email protected]>

Please ignore this comment - it was pointed out to me that Signed-off is
the appropriate tag here.

>
> > ---
> > tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
> > 1 file changed, 50 insertions(+), 24 deletions(-)
> >
> > diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
> > index 49def87a909b..11271fabeffc 100644
> > --- a/tools/testing/selftests/cgroup/test_zswap.c
> > +++ b/tools/testing/selftests/cgroup/test_zswap.c
> > @@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
> > return read_int("/sys/kernel/debug/zswap/stored_pages", value);
> > }
> >
> > -static int get_zswap_written_back_pages(size_t *value)
> > +static int get_cg_wb_count(const char *cg)
> > {
> > - return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
> > + return cg_read_key_long(cg, "memory.stat", "zswap_wb");
> > }
> >
> > static int allocate_bytes(const char *cgroup, void *arg)
> > @@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
> > return 0;
> > }
> >
> > +static char *setup_test_group_1M(const char *root, const char *name)
> > +{
> > + char *group_name = cg_name(root, name);
> > +
> > + if (!group_name)
> > + return NULL;
> > + if (cg_create(group_name))
> > + goto fail;
> > + if (cg_write(group_name, "memory.max", "1M")) {
> > + cg_destroy(group_name);
> > + goto fail;
> > + }
> > + return group_name;
> > +fail:
> > + free(group_name);
> > + return NULL;
> > +}
> > +
> > /*
> > * When trying to store a memcg page in zswap, if the memcg hits its memory
> > - * limit in zswap, writeback should not be triggered.
> > - *
> > - * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
> > - * not zswap"). Needs to be revised when a per memcg writeback mechanism is
> > - * implemented.
> > + * limit in zswap, writeback should affect only the zswapped pages of that
> > + * memcg.
> > */
> > static int test_no_invasive_cgroup_shrink(const char *root)
> > {
> > - size_t written_back_before, written_back_after;
> > int ret = KSFT_FAIL;
> > - char *test_group;
> > + size_t control_allocation_size = MB(10);
> > + char *control_allocation, *wb_group = NULL, *control_group = NULL;
> >
> > /* Set up */
> > - test_group = cg_name(root, "no_shrink_test");
> > - if (!test_group)
> > - goto out;
> > - if (cg_create(test_group))
> > + wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
> > + if (!wb_group)
> > + return KSFT_FAIL;
> > + if (cg_write(wb_group, "memory.zswap.max", "10K"))
> > goto out;
> > - if (cg_write(test_group, "memory.max", "1M"))
> > + control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
> > + if (!control_group)
> > goto out;
> > - if (cg_write(test_group, "memory.zswap.max", "10K"))
> > +
> > + /* Push some test_group2 memory into zswap */
> > + if (cg_enter_current(control_group))
> > goto out;
> > - if (get_zswap_written_back_pages(&written_back_before))
> > + control_allocation = malloc(control_allocation_size);
> > + for (int i = 0; i < control_allocation_size; i += 4095)
> > + control_allocation[i] = 'a';
> > + if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> > goto out;
> >
> > - /* Allocate 10x memory.max to push memory into zswap */
> > - if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
> > + /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
> > + if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
> > goto out;
> >
> > - /* Verify that no writeback happened because of the memcg allocation */
> > - if (get_zswap_written_back_pages(&written_back_after))
> > - goto out;
> > - if (written_back_after == written_back_before)
> > + /* Verify that only zswapped memory from gwb_group has been written back */
> > + if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
> > ret = KSFT_PASS;
> > out:
> > - cg_destroy(test_group);
> > - free(test_group);
> > + cg_enter_current(root);
> > + if (control_group) {
> > + cg_destroy(control_group);
> > + free(control_group);
> > + }
> > + cg_destroy(wb_group);
> > + free(wb_group);
> > + if (control_allocation)
> > + free(control_allocation);
> > return ret;
> > }
> >
> > --
> > 2.34.1

2023-10-18 23:25:50

by Yosry Ahmed

Subject: Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
>
> From: Domenico Cerasuolo <[email protected]>
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show writebacks count for each memcg.
>
> Suggested-by: Nhat Pham <[email protected]>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> include/linux/memcontrol.h | 2 ++
> mm/memcontrol.c | 15 +++++++++++++++
> mm/zswap.c | 3 +++
> 3 files changed, 20 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3de10fabea0f..7868b1e00bf5 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -38,6 +38,7 @@ enum memcg_stat_item {
> MEMCG_KMEM,
> MEMCG_ZSWAP_B,
> MEMCG_ZSWAPPED,
> + MEMCG_ZSWAP_WB,
> MEMCG_NR_STAT,
> };
>
> @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
> bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
> void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
> void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
> #else
> static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
> {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1bde67b29287..a9118871e5a6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> { "zswap", MEMCG_ZSWAP_B },
> { "zswapped", MEMCG_ZSWAPPED },
> + { "zswap_wb", MEMCG_ZSWAP_WB },

zswap_writeback would be more consistent with file_writeback below.

Taking a step back, this is not really a "state". We increment it by 1
every time and never decrement it. Sounds awfully similar to events :)

You can also use count_objcg_event() directly and avoid the need for
obj_cgroup_report_zswap_wb() below.
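
i.e. something along these lines (sketch only; ZSWPWB would be a new,
hypothetical vm_event_item added under CONFIG_ZSWAP, next to the
existing ZSWPIN/ZSWPOUT events):

        /* in shrink_memcg_cb(), after a successful writeback */
        if (entry->objcg)
                count_objcg_event(entry->objcg, ZSWPWB);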

> #endif
> { "file_mapped", NR_FILE_MAPPED },
> { "file_dirty", NR_FILE_DIRTY },
> @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
> switch (item) {
> case MEMCG_PERCPU_B:
> case MEMCG_ZSWAP_B:
> + case MEMCG_ZSWAP_WB:
> case NR_SLAB_RECLAIMABLE_B:
> case NR_SLAB_UNRECLAIMABLE_B:
> case WORKINGSET_REFAULT_ANON:
> @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
> rcu_read_unlock();
> }
>
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> +{
> + struct mem_cgroup *memcg;
> +
> + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> + return;
> +
> + rcu_read_lock();
> + memcg = obj_cgroup_memcg(objcg);
> + mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> + rcu_read_unlock();
> +}
> +
> static u64 zswap_current_read(struct cgroup_subsys_state *css,
> struct cftype *cft)
> {
> diff --git a/mm/zswap.c b/mm/zswap.c
> index d2989ad11814..15485427e3fa 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> }
> zswap_written_back_pages++;
>
> + if (entry->objcg)
> + obj_cgroup_report_zswap_wb(entry->objcg);
> +
> /*
> * Writeback started successfully, the page now belongs to the
> * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1

2023-10-18 23:37:38

by Yosry Ahmed

Subject: Re: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
>
> Currently, we only shrink the zswap pool when the user-defined limit is
> hit. This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed ahead
> of time, as this depends on the workload (specifically, on factors such
> as memory access patterns and compressibility of the memory pages).
>
> This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> is initiated when there is memory pressure. The shrinker does not
> have any parameter that must be tuned by the user, and can be opted in
> or out on a per-memcg basis.
>
> Furthermore, to make it more robust for many workloads and prevent
> overshrinking (i.e evicting warm pages that might be refaulted into
> memory), we build in the following heuristics:
>
> * Estimate the number of warm pages residing in zswap, and attempt to
> protect this region of the zswap LRU.
> * Scale the number of freeable objects by an estimate of the memory
> saving factor. The better zswap compresses the data, the fewer pages
> we will evict to swap (as we will otherwise incur IO for relatively
> small memory saving).
> * During reclaim, if the shrinker encounters a page that is also being
> brought into memory, the shrinker will cautiously terminate its
> shrinking action, as this is a sign that it is touching the warmer
> region of the zswap LRU.

I really hope someone with more familiarity with reclaim heuristics
makes sure this makes sense.

>
> As a proof of concept, we ran the following synthetic benchmark:
> build the linux kernel in a memory-limited cgroup, and allocate some
> cold data in tmpfs to see if the shrinker could write them out and
> improved the overall performance. Depending on the amount of cold data
> generated, we observe from 14% to 35% reduction in kernel CPU time used
> in the kernel builds.
>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> Documentation/admin-guide/mm/zswap.rst | 7 ++
> include/linux/mmzone.h | 14 +++
> mm/mmzone.c | 3 +
> mm/swap_state.c | 21 +++-
> mm/zswap.c | 161 +++++++++++++++++++++++--
> 5 files changed, 196 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> index 45b98390e938..522ae22ccb84 100644
> --- a/Documentation/admin-guide/mm/zswap.rst
> +++ b/Documentation/admin-guide/mm/zswap.rst
> @@ -153,6 +153,13 @@ attribute, e. g.::
>
> Setting this parameter to 100 will disable the hysteresis.
>
> +When there is a sizable amount of cold memory residing in the zswap pool, it
> +can be advantageous to proactively write these cold pages to swap and reclaim
> +the memory for other use cases. By default, the zswap shrinker is disabled.
> +User can enable it as follows:
> +
> + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> +
> A debugfs interface is provided for various statistic about pool size, number
> of pages stored, same-value filled pages and various counters for the reasons
> pages are rejected.
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 486587fcd27f..8947a1bfbe9c 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -637,6 +637,20 @@ struct lruvec {
> #ifdef CONFIG_MEMCG
> struct pglist_data *pgdat;
> #endif
> +#ifdef CONFIG_ZSWAP
> + /*
> + * Number of pages in zswap that should be protected from the shrinker.
> + * This number is an estimate of the following counts:
> + *
> + * a) Recent page faults.
> + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> + * as well as recent zswap LRU rotations.
> + *
> + * These pages are likely to be warm, and might incur IO if the are written
> + * to swap.
> + */
> + atomic_long_t nr_zswap_protected;
> +#endif

Instead of the ifdef's all over the code, we can define
nr_zswap_protected inside a struct and helpers that
increment/initialize nr_zswap_protected in zswap.h, and have a single
ifdef there. All the code would be oblivious to the existence of
nr_zswap_protected.

Something like:

#ifdef CONFIG_ZSWAP

struct zswap_lruvec_state {
/* insert large comment */
atomic_long_t nr_zswap_protected;
};

static inline void zswap_lruvec_init(..)
{
atomic_long_set(&lruvec->nr_zswap_protected, 0);
}

static inline void zswap_lruvec_swapin(..)
{
if (page) {
struct lruvec *lruvec = folio_lruvec(page_folio(page));

atomic_long_inc(&lruvec->nr_zswap_protected);
}
}

#else

/* empty struct and functions */

#endif
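
The call sites would then shrink to something like this (sketch only,
assuming zswap_lruvec_swapin() takes the page as in the helper above
and does the NULL check internally; no #ifdef needed at the call site):

        /* e.g. in swap_cluster_readahead() / swap_vma_readahead() */
        page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
        zswap_lruvec_swapin(page);
        return page;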

> };
>
> /* Isolate for asynchronous migration */
> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index 68e1511be12d..4137f3ac42cd 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
>
> memset(lruvec, 0, sizeof(struct lruvec));
> spin_lock_init(&lruvec->lru_lock);
> +#ifdef CONFIG_ZSWAP
> + atomic_long_set(&lruvec->nr_zswap_protected, 0);
> +#endif
>
> for_each_lru(lru)
> INIT_LIST_HEAD(&lruvec->lists[lru]);
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 0356df52b06a..a60197b55a28 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> lru_add_drain(); /* Push any new pages onto the LRU now */
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> +#ifdef CONFIG_ZSWAP
> + if (page) {
> + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> +
> + atomic_long_inc(&lruvec->nr_zswap_protected);
> + }
> +#endif
> + return page;
> }
>
> int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> @@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> lru_add_drain();
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> - NULL);
> + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> +#ifdef CONFIG_ZSWAP
> + if (page) {
> + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> +
> + atomic_long_inc(&lruvec->nr_zswap_protected);
> + }
> +#endif
> + return page;
> }
>
> /**
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 15485427e3fa..1d1fe75a5237 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> /* Number of zpools in zswap_pool (empirically determined for scalability) */
> #define ZSWAP_NR_ZPOOLS 32
>
> +/* Enable/disable memory pressure-based shrinker. */
> +static bool zswap_shrinker_enabled;
> +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> +
> /*********************************
> * data structures
> **********************************/
> @@ -174,6 +178,8 @@ struct zswap_pool {
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> struct list_lru list_lru;
> struct mem_cgroup *next_shrink;
> + struct shrinker *shrinker;
> + atomic_t nr_stored;
> };
>
> /*
> @@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
> DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> }
>
> +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> +{
> + u64 pool_size = 0;
> + int i;
> +
> + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> + pool_size += zpool_get_total_size(pool->zpools[i]);
> +
> + return pool_size;
> +}
> +
> static void zswap_update_total_size(void)
> {
> struct zswap_pool *pool;
> u64 total = 0;
> - int i;
>
> rcu_read_lock();
>
> list_for_each_entry_rcu(pool, &zswap_pools, list)
> - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> - total += zpool_get_total_size(pool->zpools[i]);
> + total += get_zswap_pool_size(pool);
>
> rcu_read_unlock();
>
> @@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> {
> struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> - bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> -
> + int nid = entry_to_nid(entry);
> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> + bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
> + unsigned long lru_size, old, new;
> +
> + if (added) {
> + lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
> + old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
> +
> + /*
> + * Decay to avoid overflow and adapt to changing workloads.
> + * This is based on LRU reclaim cost decaying heuristics.
> + */
> + do {
> + new = old > lru_size / 4 ? old / 2 : old;
> + } while (
> + !atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
> + }
> mem_cgroup_put(memcg);
> return added;
> }
> @@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> else {
> zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> + atomic_dec(&entry->pool->nr_stored);
> zswap_pool_put(entry->pool);
> }
> zswap_entry_cache_free(entry);
> @@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> return entry;
> }
>
> +/*********************************
> +* shrinker functions
> +**********************************/
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg);
> +
> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> + unsigned long shrink_ret, nr_protected, lru_size;
> + struct zswap_pool *pool = shrinker->private_data;
> + bool encountered_page_in_swapcache = false;
> +
> + nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> +
> + /*
> + * Abort if the shrinker is disabled or if we are shrinking into the
> + * protected region.
> + */
> + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> + sc->nr_scanned = 0;
> + return SHRINK_STOP;
> + }
> +
> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> + &encountered_page_in_swapcache);
> +
> + if (encountered_page_in_swapcache)
> + return SHRINK_STOP;
> +
> + return shrink_ret ? shrink_ret : SHRINK_STOP;
> +}
> +
> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct zswap_pool *pool = shrinker->private_data;
> + struct mem_cgroup *memcg = sc->memcg;
> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> +
> +#ifdef CONFIG_MEMCG_KMEM
> + cgroup_rstat_flush(memcg->css.cgroup);
> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> +#else
> + /* use pool stats instead of memcg stats */
> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> + nr_stored = atomic_read(&pool->nr_stored);
> +#endif
> +
> + if (!zswap_shrinker_enabled || !nr_stored)
> + return 0;
> +
> + nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> + /*
> + * Subtract the lru size by an estimate of the number of pages
> + * that should be protected.
> + */
> + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> +
> + /*
> + * Scale the number of freeable pages by the memory saving factor.
> + * This ensures that the better zswap compresses memory, the fewer
> + * pages we will evict to swap (as it will otherwise incur IO for
> + * relatively small memory saving).
> + */
> + return mult_frac(nr_freeable, nr_backing, nr_stored);
> +}
> +
> +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> +{
> + pool->shrinker =
> + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> + if (!pool->shrinker)
> + return;
> +
> + pool->shrinker->private_data = pool;
> + pool->shrinker->scan_objects = zswap_shrinker_scan;
> + pool->shrinker->count_objects = zswap_shrinker_count;
> + pool->shrinker->batch = 0;
> + pool->shrinker->seeks = DEFAULT_SEEKS;
> +}
> +
> /*********************************
> * per-cpu code
> **********************************/
> @@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> spinlock_t *lock, void *arg)
> {
> struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> + bool *encountered_page_in_swapcache = (bool *)arg;
> struct mem_cgroup *memcg;
> struct zswap_tree *tree;
> + struct lruvec *lruvec;
> pgoff_t swpoffset;
> enum lru_status ret = LRU_REMOVED_RETRY;
> int writeback_result;
> @@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> /* we cannot use zswap_lru_add here, because it increments node's lru count */
> list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> spin_unlock(lock);
> - mem_cgroup_put(memcg);
> ret = LRU_RETRY;
> +
> + /*
> + * Encountering a page already in swap cache is a sign that we are shrinking
> + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> + * shrinker context).
> + */
> + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> + ret = LRU_SKIP;
> + *encountered_page_in_swapcache = true;
> + }
> + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
> + /* Increment the protection area to account for the LRU rotation. */
> + atomic_long_inc(&lruvec->nr_zswap_protected);
> +
> + mem_cgroup_put(memcg);
> goto put_unlock;
> }
> zswap_written_back_pages++;
> @@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> &pool->node);
> if (ret)
> goto error;
> +
> + zswap_alloc_shrinker(pool);
> + if (!pool->shrinker)
> + goto error;
> +
> pr_debug("using %s compressor\n", pool->tfm_name);
>
> /* being the current pool takes 1 ref; this func expects the
> @@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - list_lru_init_memcg(&pool->list_lru, NULL);
> + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> + goto lru_fail;
> + shrinker_register(pool->shrinker);
> INIT_WORK(&pool->shrink_work, shrink_worker);
>
> zswap_pool_debug("created", pool);
>
> return pool;
>
> +lru_fail:
> + list_lru_destroy(&pool->list_lru);
> + shrinker_free(pool->shrinker);
> error:
> if (pool->acomp_ctx)
> free_percpu(pool->acomp_ctx);
> @@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> zswap_pool_debug("destroying", pool);
>
> + shrinker_free(pool->shrinker);
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> list_lru_destroy(&pool->list_lru);
> @@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> zswap_lru_add(&pool->list_lru, entry);
> + atomic_inc(&pool->nr_stored);
> }
> spin_unlock(&tree->lock);
>
> --
> 2.34.1

2023-10-18 23:51:42

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat

On Wed, Oct 18, 2023 at 4:25 PM Yosry Ahmed <[email protected]> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
> >
> > From: Domenico Cerasuolo <[email protected]>
> >
> > Since zswap now writes back pages from memcg-specific LRUs, we need a new
> > stat to show the writeback count for each memcg.
> >
> > Suggested-by: Nhat Pham <[email protected]>
> > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> > include/linux/memcontrol.h | 2 ++
> > mm/memcontrol.c | 15 +++++++++++++++
> > mm/zswap.c | 3 +++
> > 3 files changed, 20 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 3de10fabea0f..7868b1e00bf5 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -38,6 +38,7 @@ enum memcg_stat_item {
> > MEMCG_KMEM,
> > MEMCG_ZSWAP_B,
> > MEMCG_ZSWAPPED,
> > + MEMCG_ZSWAP_WB,
> > MEMCG_NR_STAT,
> > };
> >
> > @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
> > bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
> > void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
> > void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> > +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
> > #else
> > static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
> > {
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 1bde67b29287..a9118871e5a6 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
> > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > { "zswap", MEMCG_ZSWAP_B },
> > { "zswapped", MEMCG_ZSWAPPED },
> > + { "zswap_wb", MEMCG_ZSWAP_WB },
>
> zswap_writeback would be more consistent with file_writeback below.
>
> Taking a step back, this is not really a "state". We increment it by 1
> every time and never decrement it. Sounds awfully similar to events :)

Ah yeah, this is probably closer to zswpin/zswpout counters :)
We can probably re-use that piece of logic.
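Something like this, perhaps, using count_objcg_event() directly as you
suggest below - rough sketch only, with ZSWPWB as a hypothetical new
vm_event_item next to ZSWPIN/ZSWPOUT (exact name TBD):

	/* in shrink_memcg_cb(), after writeback has been kicked off */
	zswap_written_back_pages++;

	/*
	 * Count the writeback as a per-memcg vm event, the same way
	 * zswpin/zswpout are counted, instead of adding a new memcg stat.
	 */
	if (entry->objcg)
		count_objcg_event(entry->objcg, ZSWPWB);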

>
> You can also use count_objcg_event() directly and avoid the need for
> obj_cgroup_report_zswap_wb() below.
>
> > #endif
> > { "file_mapped", NR_FILE_MAPPED },
> > { "file_dirty", NR_FILE_DIRTY },
> > @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
> > switch (item) {
> > case MEMCG_PERCPU_B:
> > case MEMCG_ZSWAP_B:
> > + case MEMCG_ZSWAP_WB:
> > case NR_SLAB_RECLAIMABLE_B:
> > case NR_SLAB_UNRECLAIMABLE_B:
> > case WORKINGSET_REFAULT_ANON:
> > @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
> > rcu_read_unlock();
> > }
> >
> > +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> > +{
> > + struct mem_cgroup *memcg;
> > +
> > + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > + return;
> > +
> > + rcu_read_lock();
> > + memcg = obj_cgroup_memcg(objcg);
> > + mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> > + rcu_read_unlock();
> > +}
> > +
> > static u64 zswap_current_read(struct cgroup_subsys_state *css,
> > struct cftype *cft)
> > {
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index d2989ad11814..15485427e3fa 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > }
> > zswap_written_back_pages++;
> >
> > + if (entry->objcg)
> > + obj_cgroup_report_zswap_wb(entry->objcg);
> > +
> > /*
> > * Writeback started successfully, the page now belongs to the
> > * swapcache. Drop the entry from zswap - unless invalidate already
> > --
> > 2.34.1

2023-10-19 17:12:13

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <[email protected]> wrote:

> Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

We're at -rc6 and I'd prefer to drop this series from mm.git, have
another go during the next cycle.

However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
without vma" has syntactic dependencies on this series and will need
rework, so I'd like to make that decision soon.

Do we feel that this series can be made into a mergeable state within
the next few days?

Thanks.

2023-10-19 17:34:35

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

On Thu, Oct 19, 2023 at 10:12 AM Andrew Morton
<[email protected]> wrote:
>
> On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <[email protected]> wrote:
>
> > Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
>
> We're at -rc6 and I'd prefer to drop this series from mm.git, have
> another go during the next cycle.
>
> However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
> without vma" has syntactic dependencies on this series and will need
> rework, so I'd like to make that decision soon.
>
> Do we feel that this series can be made into a mergeable state within
> the next few days?

There are parts of the code that I would feel more comfortable if
someone took a look at (which I mentioned in individual patches). So
unless this happens in the next few days I wouldn't say so.

>
> Thanks.

2023-10-19 18:31:40

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

On Thu, Oct 19, 2023 at 10:34 AM Yosry Ahmed <[email protected]> wrote:
>
> On Thu, Oct 19, 2023 at 10:12 AM Andrew Morton
> <[email protected]> wrote:
> >
> > On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <[email protected]> wrote:
> >
> > > Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
> >
> > We're at -rc6 and I'd prefer to drop this series from mm.git, have
> > another go during the next cycle.
> >
> > However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
> > without vma" has syntactic dependencies on this series and will need
> > rework, so I'd like to make that decision soon.
> >
> > Do we feel that this series can be made into a mergeable state within
> > the next few days?
>
> There are parts of the code that I would feel more comfortable if
> someone took a look at (which I mentioned in individual patches). So
> unless this happens in the next few days I wouldn't say so.
>

I'm not super familiar with the other series. How big is the dependency?
Looks like it's just a small part in the swapcache code right?

If this is the case, I feel like the best course of action is to rebase
the mempolicy patch series on top of mm-unstable, and resolve
this merge conflict. I will then send out v4 of the zswap shrinker,
rebased on top of the mempolicy patch series.

If this is not the case, one thing we can do is:

a) Fix bugs (there's one kernel test robot it seems)
b) Fix user-visible details (writeback counter for e.g)

and just merge the series for now. FWIW, this is an optional
feature and disabled by default. So performance optimization
and aesthetics change (list_lru_add() renaming etc.) can wait.

We can push out v4 by the end of today and early tomorrow
if all goes well. Then everyone can review and comment on it.

How does everyone feel about this strategy?

> >
> > Thanks.

2023-10-19 18:36:47

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

On Thu, 19 Oct 2023 11:31:17 -0700 Nhat Pham <[email protected]> wrote:

> > There are parts of the code that I would feel more comfortable if
> > someone took a look at (which I mentioned in individual patches). So
> > unless this happens in the next few days I wouldn't say so.
> >
>
> I'm not super familiar with the other series. How big is the dependency?
> Looks like it's just a small part in the swapcache code right?
>
> If this is the case, I feel like the best course of action is to rebase
> the mempolicy patch series on top of mm-unstable, and resolve
> this merge conflict.

OK, thanks.

Hugh, do you have time to look at rebasing on the mm-stable which I
pushed out 15 minutes ago?

> I will then send out v4 of the zswap shrinker,
> rebased on top of the mempolicy patch series.
>
> If this is not the case, one thing we can do is:
>
> a) Fix bugs (there's one kernel test robot it seems)
> b) Fix user-visible details (writeback counter for e.g)
>
> and just merge the series for now. FWIW, this is an optional
> feature and disabled by default. So performance optimization
> and aesthetics change (list_lru_add() renaming etc.) can wait.
>
> We can push out v4 by the end of today and early tomorrow
> if all goes well. Then everyone can review and comment on it.
>

2023-10-19 19:24:12

by Hugh Dickins

[permalink] [raw]
Subject: Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

On Thu, 19 Oct 2023, Andrew Morton wrote:
> On Thu, 19 Oct 2023 11:31:17 -0700 Nhat Pham <[email protected]> wrote:
>
> > > There are parts of the code that I would feel more comfortable if
> > > someone took a look at (which I mentioned in individual patches). So
> > > unless this happens in the next few days I wouldn't say so.
> > >
> >
> > I'm not super familiar with the other series. How big is the dependency?
> > Looks like it's just a small part in the swapcache code right?
> >
> > If this is the case, I feel like the best course of action is to rebase
> > the mempolicy patch series on top of mm-unstable, and resolve
> > this merge conflict.
>
> OK, thanks.
>
> Hugh, do you have time to look at rebasing on the mm-stable which I
> pushed out 15 minutes ago?

Okay, I'm on it - but (unless you insist otherwise) it's only a v3 of
the 10/12 "mempolicy: alloc_pages_mpol() for NUMA policy without vma"
that I'm expecting to send you - the rest should just cherry-pick in
cleanly. I'll check that of course, but I'm afraid of losing details
(e.g. any Acks you've meanwhile added) if I resend the lot.

Hugh

>
> > I will then send out v4 of the zswap shrinker,
> > rebased on top of the mempolicy patch series.
> >
> > If this is not the case, one thing we can do is:
> >
> > a) Fix bugs (there's one kernel test robot it seems)
> > b) Fix user-visible details (writeback counter for e.g)
> >
> > and just merge the series for now. FWIW, this is an optional
> > feature and disabled by default. So performance optimization
> > and aesthetics change (list_lru_add() renaming etc.) can wait.
> >
> > We can push out v4 by the end of today and early tomorrow
> > if all goes well. Then everyone can review and comment on it.

2023-10-20 19:16:13

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure

On Wed, Oct 18, 2023 at 4:36 PM Yosry Ahmed <[email protected]> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <[email protected]> wrote:
> >
> > Currently, we only shrink the zswap pool when the user-defined limit is
> > hit. This means that if we set the limit too high, cold data that are
> > unlikely to be used again will reside in the pool, wasting precious
> > memory. It is hard to predict how much zswap space will be needed ahead
> > of time, as this depends on the workload (specifically, on factors such
> > as memory access patterns and compressibility of the memory pages).
> >
> > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > is initiated when there is memory pressure. The shrinker does not
> > have any parameter that must be tuned by the user, and can be opted in
> > or out on a per-memcg basis.
> >
> > Furthermore, to make it more robust for many workloads and prevent
> > overshrinking (i.e evicting warm pages that might be refaulted into
> > memory), we build in the following heuristics:
> >
> > * Estimate the number of warm pages residing in zswap, and attempt to
> > protect this region of the zswap LRU.
> > * Scale the number of freeable objects by an estimate of the memory
> > saving factor. The better zswap compresses the data, the fewer pages
> > we will evict to swap (as we will otherwise incur IO for relatively
> > small memory saving).
> > * During reclaim, if the shrinker encounters a page that is also being
> > brought into memory, the shrinker will cautiously terminate its
> > shrinking action, as this is a sign that it is touching the warmer
> > region of the zswap LRU.
>
> I really hope someone with more familiarity with reclaim heuristics
> makes sure this makes sense.
>
> >
> > As a proof of concept, we ran the following synthetic benchmark:
> > build the linux kernel in a memory-limited cgroup, and allocate some
> > cold data in tmpfs to see if the shrinker could write them out and
> > > improve the overall performance. Depending on the amount of cold data
> > generated, we observe from 14% to 35% reduction in kernel CPU time used
> > in the kernel builds.
> >
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> > Documentation/admin-guide/mm/zswap.rst | 7 ++
> > include/linux/mmzone.h | 14 +++
> > mm/mmzone.c | 3 +
> > mm/swap_state.c | 21 +++-
> > mm/zswap.c | 161 +++++++++++++++++++++++--
> > 5 files changed, 196 insertions(+), 10 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > index 45b98390e938..522ae22ccb84 100644
> > --- a/Documentation/admin-guide/mm/zswap.rst
> > +++ b/Documentation/admin-guide/mm/zswap.rst
> > @@ -153,6 +153,13 @@ attribute, e. g.::
> >
> > Setting this parameter to 100 will disable the hysteresis.
> >
> > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > +can be advantageous to proactively write these cold pages to swap and reclaim
> > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > > +Users can enable it as follows:
> > +
> > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > +
> > A debugfs interface is provided for various statistic about pool size, number
> > of pages stored, same-value filled pages and various counters for the reasons
> > pages are rejected.
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 486587fcd27f..8947a1bfbe9c 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -637,6 +637,20 @@ struct lruvec {
> > #ifdef CONFIG_MEMCG
> > struct pglist_data *pgdat;
> > #endif
> > +#ifdef CONFIG_ZSWAP
> > + /*
> > + * Number of pages in zswap that should be protected from the shrinker.
> > + * This number is an estimate of the following counts:
> > + *
> > + * a) Recent page faults.
> > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > + * as well as recent zswap LRU rotations.
> > + *
> > > + * These pages are likely to be warm, and might incur IO if they are written
> > + * to swap.
> > + */
> > + atomic_long_t nr_zswap_protected;
> > +#endif
>
> Instead of the ifdef's all over the code, we can define
> nr_zswap_protected inside a struct and helpers that
> increment/initialize nr_zswap_protected in zswap.h, and have a single
> ifdef there. All the code would be oblivious to the existence of
> nr_zswap_protected.
>
> Something like:
>
> #ifdef CONFIG_ZSWAP
>
> struct zswap_lruvec_state {
> /* insert large comment */
> atomic_long_t nr_zswap_protected;
> };
>
> static inline void zswap_lruvec_init(..)
> {
> atomic_long_set(&lruvec->nr_zswap_protected, 0);
> }
>
> static inline void zswap_lruvec_swapin(..)
> {
> if (page) {
> struct lruvec *lruvec = folio_lruvec(page_folio(page));
>
> atomic_long_inc(&lruvec->nr_zswap_protected);
> }
> }
>
> #else
>
> /* empty struct and functions
>
> #endif
>
> > };
> >
> > /* Isolate for asynchronous migration */
> > diff --git a/mm/mmzone.c b/mm/mmzone.c
> > index 68e1511be12d..4137f3ac42cd 100644
> > --- a/mm/mmzone.c
> > +++ b/mm/mmzone.c
> > @@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
> >
> > memset(lruvec, 0, sizeof(struct lruvec));
> > spin_lock_init(&lruvec->lru_lock);
> > +#ifdef CONFIG_ZSWAP
> > + atomic_long_set(&lruvec->nr_zswap_protected, 0);
> > +#endif
> >
> > for_each_lru(lru)
> > INIT_LIST_HEAD(&lruvec->lists[lru]);
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 0356df52b06a..a60197b55a28 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > lru_add_drain(); /* Push any new pages onto the LRU now */
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > +#ifdef CONFIG_ZSWAP
> > + if (page) {
> > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > +
> > + atomic_long_inc(&lruvec->nr_zswap_protected);
> > + }
> > +#endif
> > + return page;
> > }
> >
> > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > @@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > lru_add_drain();
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > - NULL);
> > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > +#ifdef CONFIG_ZSWAP
> > + if (page) {
> > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > +
> > + atomic_long_inc(&lruvec->nr_zswap_protected);
> > + }
> > +#endif
> > + return page;
> > }
> >
> > /**
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 15485427e3fa..1d1fe75a5237 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > #define ZSWAP_NR_ZPOOLS 32
> >
> > +/* Enable/disable memory pressure-based shrinker. */
> > +static bool zswap_shrinker_enabled;
> > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > +
> > /*********************************
> > * data structures
> > **********************************/
> > @@ -174,6 +178,8 @@ struct zswap_pool {
> > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > struct list_lru list_lru;
> > struct mem_cgroup *next_shrink;
> > + struct shrinker *shrinker;
> > + atomic_t nr_stored;
> > };
> >
> > /*
> > @@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
> > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > }
> >
> > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > +{
> > + u64 pool_size = 0;
> > + int i;
> > +
> > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > +
> > + return pool_size;
> > +}
> > +
> > static void zswap_update_total_size(void)
> > {
> > struct zswap_pool *pool;
> > u64 total = 0;
> > - int i;
> >
> > rcu_read_lock();
> >
> > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > - total += zpool_get_total_size(pool->zpools[i]);
> > + total += get_zswap_pool_size(pool);
> >
> > rcu_read_unlock();
> >
> > @@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > {
> > struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > - bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > -
> > + int nid = entry_to_nid(entry);
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> > + bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
> > + unsigned long lru_size, old, new;
> > +
> > + if (added) {
> > + lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
> > + old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
> > +
> > + /*
> > + * Decay to avoid overflow and adapt to changing workloads.
> > + * This is based on LRU reclaim cost decaying heuristics.
> > + */
> > + do {
> > + new = old > lru_size / 4 ? old / 2 : old;
> > + } while (
> > + !atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
> > + }
> > mem_cgroup_put(memcg);
> > return added;
> > }
> > @@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > else {
> > zswap_lru_del(&entry->pool->list_lru, entry);
> > zpool_free(zswap_find_zpool(entry), entry->handle);
> > + atomic_dec(&entry->pool->nr_stored);
> > zswap_pool_put(entry->pool);
> > }
> > zswap_entry_cache_free(entry);
> > @@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > return entry;
> > }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > + unsigned long shrink_ret, nr_protected, lru_size;
> > + struct zswap_pool *pool = shrinker->private_data;
> > + bool encountered_page_in_swapcache = false;
> > +
> > + nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > +
> > + /*
> > + * Abort if the shrinker is disabled or if we are shrinking into the
> > + * protected region.
> > + */
> > + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > + sc->nr_scanned = 0;
> > + return SHRINK_STOP;
> > + }
> > +
> > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > + &encountered_page_in_swapcache);
> > +
> > + if (encountered_page_in_swapcache)
> > + return SHRINK_STOP;
> > +
> > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct zswap_pool *pool = shrinker->private_data;
> > + struct mem_cgroup *memcg = sc->memcg;
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > + cgroup_rstat_flush(memcg->css.cgroup);
> > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > + /* use pool stats instead of memcg stats */
> > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > + nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > + if (!zswap_shrinker_enabled || !nr_stored)
> > + return 0;
> > +
> > + nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > + /*
> > + * Subtract the lru size by an estimate of the number of pages
> > + * that should be protected.
> > + */
> > + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > +
> > + /*
> > + * Scale the number of freeable pages by the memory saving factor.
> > + * This ensures that the better zswap compresses memory, the fewer
> > + * pages we will evict to swap (as it will otherwise incur IO for
> > + * relatively small memory saving).
> > + */
> > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > + pool->shrinker =
> > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > + if (!pool->shrinker)
> > + return;
> > +
> > + pool->shrinker->private_data = pool;
> > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > + pool->shrinker->count_objects = zswap_shrinker_count;
> > + pool->shrinker->batch = 0;
> > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> > /*********************************
> > * per-cpu code
> > **********************************/
> > @@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > spinlock_t *lock, void *arg)
> > {
> > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > + bool *encountered_page_in_swapcache = (bool *)arg;
> > struct mem_cgroup *memcg;
> > struct zswap_tree *tree;
> > + struct lruvec *lruvec;
> > pgoff_t swpoffset;
> > enum lru_status ret = LRU_REMOVED_RETRY;
> > int writeback_result;
> > @@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > spin_unlock(lock);
> > - mem_cgroup_put(memcg);
> > ret = LRU_RETRY;
> > +
> > + /*
> > + * Encountering a page already in swap cache is a sign that we are shrinking
> > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > + * shrinker context).
> > + */
> > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > + ret = LRU_SKIP;
> > + *encountered_page_in_swapcache = true;
> > + }
> > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
> > + /* Increment the protection area to account for the LRU rotation. */
> > + atomic_long_inc(&lruvec->nr_zswap_protected);
> > +
> > + mem_cgroup_put(memcg);
> > goto put_unlock;
> > }
> > zswap_written_back_pages++;
> > @@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > &pool->node);
> > if (ret)
> > goto error;
> > +
> > + zswap_alloc_shrinker(pool);
> > + if (!pool->shrinker)
> > + goto error;
> > +
> > pr_debug("using %s compressor\n", pool->tfm_name);
> >
> > /* being the current pool takes 1 ref; this func expects the
> > @@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > */
> > kref_init(&pool->kref);
> > INIT_LIST_HEAD(&pool->list);
> > - list_lru_init_memcg(&pool->list_lru, NULL);
> > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > + goto lru_fail;
> > + shrinker_register(pool->shrinker);
> > INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> > zswap_pool_debug("created", pool);
> >
> > return pool;
> >
> > +lru_fail:
> > + list_lru_destroy(&pool->list_lru);
> > + shrinker_free(pool->shrinker);
> > error:
> > if (pool->acomp_ctx)
> > free_percpu(pool->acomp_ctx);
> > @@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> > zswap_pool_debug("destroying", pool);
> >
> > + shrinker_free(pool->shrinker);
> > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > free_percpu(pool->acomp_ctx);
> > list_lru_destroy(&pool->list_lru);
> > @@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
> > if (entry->length) {
> > INIT_LIST_HEAD(&entry->lru);
> > zswap_lru_add(&pool->list_lru, entry);
> > + atomic_inc(&pool->nr_stored);
> > }
> > spin_unlock(&tree->lock);
> >
> > --
> > 2.34.1

I like this. And FWIW, if we have more states to store
(i.e. if the shrinker heuristics need to change), we just need
to update things in zswap.h.
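
Roughly along these lines, perhaps - a sketch only, with placeholder
names, and assuming struct lruvec grows a struct zswap_lruvec_state
member:

/* zswap.h (sketch) */
#ifdef CONFIG_ZSWAP

struct zswap_lruvec_state {
	/*
	 * Estimate of the number of warm pages in zswap that should be
	 * protected from the shrinker: recent page faults plus recent
	 * zswap LRU insertions/rotations.
	 */
	atomic_long_t nr_zswap_protected;
};

/* called from lruvec_init() */
static inline void zswap_lruvec_state_init(struct lruvec *lruvec)
{
	atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
}

/* called from the swapin readahead paths in swap_state.c */
static inline void zswap_page_swapin(struct page *page)
{
	struct lruvec *lruvec;

	if (page) {
		lruvec = folio_lruvec(page_folio(page));
		atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
	}
}

#else /* !CONFIG_ZSWAP */

struct zswap_lruvec_state {};

static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
static inline void zswap_page_swapin(struct page *page) {}

#endif /* CONFIG_ZSWAP */

That way mmzone.c and swap_state.c stay free of CONFIG_ZSWAP ifdefs.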

Will be present in v4 of the patch series. Thanks for the suggestion,
Yosry!