Changelog:
v4:
* Rename list_lru_add to list_lru_add_obj and __list_lru_add to
list_lru_add (patch 1) (suggested by Johannes Weiner and
Yosry Ahmed)
* Some cleanups on the memcg aware LRU patch (patch 2)
(suggested by Yosry Ahmed)
* Use event interface for the new per-cgroup writeback counters.
(patch 3) (suggested by Yosry Ahmed)
* Abstract zswap's lruvec states and handling into
zswap_lruvec_state (patch 5) (suggested by Yosry Ahmed)
v3:
* Add a patch to export per-cgroup zswap writeback counters
* Add a patch to update zswap's kselftest
* Separate the new list_lru functions into its own prep patch
* Do not start from the top of the hierarchy when encountering a memcg
that is not online for the global limit zswap writeback (patch 2)
(suggested by Yosry Ahmed)
* Do not remove the swap entry from list_lru in
__read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
* Remove a redundant zswap pool get (patch 2)
(reported by Ryan Roberts)
* Use atomic for the nr_zswap_protected (instead of lruvec's lock)
(patch 5) (suggested by Yosry Ahmed)
* Remove the per-cgroup zswap shrinker knob (patch 5)
(suggested by Yosry Ahmed)
v2:
* Fix loongarch compiler errors
* Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM
There are currently several issues with zswap writeback:
1. There is only a single global LRU for zswap, making it impossible to
perform workload-specific shrinking - a memcg under memory pressure
cannot determine which pages in the pool it owns, and often ends up
writing pages from other memcgs. This issue has been previously
observed in practice and mitigated by simply disabling
memcg-initiated shrinking:
https://lore.kernel.org/all/[email protected]/T/#u
But this solution leaves a lot to be desired, as we still do not
have an avenue for a memcg to free up its own memory locked up in
the zswap pool.
2. We only shrink the zswap pool when the user-defined limit is hit.
This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed
ahead of time, as this depends on the workload (specifically, on
factors such as memory access patterns and compressibility of the
memory pages).
This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.
As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.
Domenico Cerasuolo (3):
zswap: make shrinking memcg-aware
mm: memcg: add per-memcg zswap writeback stat
selftests: cgroup: update per-memcg zswap writeback selftest
Nhat Pham (2):
list_lru: allows explicit memcg and NUMA node selection
zswap: shrinks zswap pool based on memory pressure
Documentation/admin-guide/mm/zswap.rst | 7 +
drivers/android/binder_alloc.c | 5 +-
fs/dcache.c | 8 +-
fs/gfs2/quota.c | 6 +-
fs/inode.c | 4 +-
fs/nfs/nfs42xattr.c | 8 +-
fs/nfsd/filecache.c | 4 +-
fs/xfs/xfs_buf.c | 6 +-
fs/xfs/xfs_dquot.c | 2 +-
fs/xfs/xfs_qm.c | 2 +-
include/linux/list_lru.h | 46 ++-
include/linux/memcontrol.h | 5 +
include/linux/mmzone.h | 2 +
include/linux/vm_event_item.h | 1 +
include/linux/zswap.h | 25 +-
mm/list_lru.c | 48 ++-
mm/memcontrol.c | 1 +
mm/mmzone.c | 1 +
mm/swap.h | 3 +-
mm/swap_state.c | 25 +-
mm/vmstat.c | 1 +
mm/workingset.c | 4 +-
mm/zswap.c | 365 ++++++++++++++++----
tools/testing/selftests/cgroup/test_zswap.c | 74 ++--
24 files changed, 526 insertions(+), 127 deletions(-)
--
2.34.1
From: Domenico Cerasuolo <[email protected]>
The memcg-zswap self test is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.
Signed-off-by: Domenico Cerasuolo <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
1 file changed, 50 insertions(+), 24 deletions(-)
diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..753a3b9de1ad 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
return read_int("/sys/kernel/debug/zswap/stored_pages", value);
}
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
{
- return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+ return cg_read_key_long(cg, "memory.stat", "zswp_wb");
}
static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
return 0;
}
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+ char *group_name = cg_name(root, name);
+
+ if (!group_name)
+ return NULL;
+ if (cg_create(group_name))
+ goto fail;
+ if (cg_write(group_name, "memory.max", "1M")) {
+ cg_destroy(group_name);
+ goto fail;
+ }
+ return group_name;
+fail:
+ free(group_name);
+ return NULL;
+}
+
/*
* When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
*/
static int test_no_invasive_cgroup_shrink(const char *root)
{
- size_t written_back_before, written_back_after;
int ret = KSFT_FAIL;
- char *test_group;
+ size_t control_allocation_size = MB(10);
+ char *control_allocation = NULL, *wb_group = NULL, *control_group = NULL;
/* Set up */
- test_group = cg_name(root, "no_shrink_test");
- if (!test_group)
- goto out;
- if (cg_create(test_group))
+ wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+ if (!wb_group)
+ return KSFT_FAIL;
+ if (cg_write(wb_group, "memory.zswap.max", "10K"))
goto out;
- if (cg_write(test_group, "memory.max", "1M"))
+ control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+ if (!control_group)
goto out;
- if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+ /* Push some test_group2 memory into zswap */
+ if (cg_enter_current(control_group))
goto out;
- if (get_zswap_written_back_pages(&written_back_before))
+ control_allocation = malloc(control_allocation_size);
+ for (int i = 0; i < control_allocation_size; i += 4095)
+ control_allocation[i] = 'a';
+ if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
goto out;
- /* Allocate 10x memory.max to push memory into zswap */
- if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+ /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+ if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
goto out;
- /* Verify that no writeback happened because of the memcg allocation */
- if (get_zswap_written_back_pages(&written_back_after))
- goto out;
- if (written_back_after == written_back_before)
+ /* Verify that only zswapped memory from wb_group has been written back */
+ if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
ret = KSFT_PASS;
out:
- cg_destroy(test_group);
- free(test_group);
+ cg_enter_current(root);
+ if (control_group) {
+ cg_destroy(control_group);
+ free(control_group);
+ }
+ cg_destroy(wb_group);
+ free(wb_group);
+ if (control_allocation)
+ free(control_allocation);
return ret;
}
--
2.34.1
Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).
This patch implements a memcg- and NUMA-aware shrinker for zswap, which
is initiated when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.
Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics (the first two are
sketched numerically right after this list):
* Estimate the number of warm pages residing in zswap, and attempt to
protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
saving factor. The better zswap compresses the data, the fewer pages
we will evict to swap (as we will otherwise incur IO for relatively
small memory saving).
* During reclaim, if the shrinker encounters a page that is also being
brought into memory, the shrinker will cautiously terminate its
shrinking action, as this is a sign that it is touching the warmer
region of the zswap LRU.
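To make the first two heuristics concrete, here is a rough,
userspace-style sketch of the count-side arithmetic (illustrative only -
the helper name and the sample numbers are made up; the real logic lives
in zswap_shrinker_count() in the diff below):

    #include <stdio.h>

    /*
     * Illustrative estimate of how many zswap objects a shrinker pass
     * should report as freeable, given per-lruvec counters.
     */
    static unsigned long estimate_freeable(unsigned long lru_size,
                unsigned long nr_protected,
                unsigned long nr_backing, /* pages backing the zswap pool */
                unsigned long nr_stored)  /* pages stored in the zswap pool */
    {
        unsigned long nr_freeable;

        if (!nr_stored)
            return 0;

        /* Leave the estimated-warm (protected) region of the LRU alone. */
        nr_freeable = lru_size > nr_protected ? lru_size - nr_protected : 0;

        /*
         * Scale by the memory saving factor: the better the compression
         * (nr_backing much smaller than nr_stored), the fewer objects we
         * propose for writeback, since each eviction then saves little
         * memory relative to its IO cost.
         */
        return nr_freeable * nr_backing / nr_stored;
    }

    int main(void)
    {
        /* 1000 objects on the LRU, 200 protected, ~3:1 compression. */
        printf("%lu\n", estimate_freeable(1000, 200, 100, 300));
        return 0;
    }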
As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.
Signed-off-by: Nhat Pham <[email protected]>
---
Documentation/admin-guide/mm/zswap.rst | 7 +
include/linux/mmzone.h | 2 +
include/linux/zswap.h | 25 +++-
mm/mmzone.c | 1 +
mm/swap_state.c | 2 +
mm/zswap.c | 178 ++++++++++++++++++++++++-
6 files changed, 208 insertions(+), 7 deletions(-)
diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..522ae22ccb84 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,13 @@ attribute, e. g.::
Setting this parameter to 100 will disable the hysteresis.
+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+Users can enable it as follows:
+
+ echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
A debugfs interface is provided for various statistic about pool size, number
of pages stored, same-value filled pages and various counters for the reasons
pages are rejected.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 12f31633be05..633afdb96c40 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -22,6 +22,7 @@
#include <linux/mm_types.h>
#include <linux/page-flags.h>
#include <linux/local_lock.h>
+#include <linux/zswap.h>
#include <asm/page.h>
/* Free memory management - zoned buddy allocator. */
@@ -637,6 +638,7 @@ struct lruvec {
#ifdef CONFIG_MEMCG
struct pglist_data *pgdat;
#endif
+ struct zswap_lruvec_state zswap_lruvec_state;
};
/* Isolate for asynchronous migration */
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 2a60ce39cfde..64d0c3644185 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -5,19 +5,39 @@
#include <linux/types.h>
#include <linux/mm_types.h>
+struct lruvec;
+
extern u64 zswap_pool_total_size;
extern atomic_t zswap_stored_pages;
#ifdef CONFIG_ZSWAP
+struct zswap_lruvec_state {
+ /*
+ * Number of pages in zswap that should be protected from the shrinker.
+ * This number is an estimate of the following counts:
+ *
+ * a) Recent page faults.
+ * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+ * as well as recent zswap LRU rotations.
+ *
+ * These pages are likely to be warm, and might incur IO if they are written
+ * to swap.
+ */
+ atomic_long_t nr_zswap_protected;
+};
+
bool zswap_store(struct folio *folio);
bool zswap_load(struct folio *folio);
void zswap_invalidate(int type, pgoff_t offset);
void zswap_swapon(int type);
void zswap_swapoff(int type);
-
+void zswap_lruvec_state_init(struct lruvec *lruvec);
+void zswap_lruvec_swapin(struct page *page);
#else
+struct zswap_lruvec_state {};
+
static inline bool zswap_store(struct folio *folio)
{
return false;
@@ -31,7 +51,8 @@ static inline bool zswap_load(struct folio *folio)
static inline void zswap_invalidate(int type, pgoff_t offset) {}
static inline void zswap_swapon(int type) {}
static inline void zswap_swapoff(int type) {}
-
+static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
+static inline void zswap_lruvec_swapin(struct page *page) {}
#endif
#endif /* _LINUX_ZSWAP_H */
diff --git a/mm/mmzone.c b/mm/mmzone.c
index b594d3f268fe..c01896eca736 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,7 @@ void lruvec_init(struct lruvec *lruvec)
memset(lruvec, 0, sizeof(struct lruvec));
spin_lock_init(&lruvec->lru_lock);
+ zswap_lruvec_state_init(lruvec);
for_each_lru(lru)
INIT_LIST_HEAD(&lruvec->lists[lru]);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 040639e1c77e..38ce9981b0be 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -686,6 +686,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
&page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
+ zswap_lruvec_swapin(page);
return page;
}
@@ -861,6 +862,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
&page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
+ zswap_lruvec_swapin(page);
return page;
}
diff --git a/mm/zswap.c b/mm/zswap.c
index b87311e48de9..c40697f07ba3 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -146,6 +146,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32
+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
/*********************************
* data structures
**********************************/
@@ -175,6 +179,8 @@ struct zswap_pool {
char tfm_name[CRYPTO_MAX_ALG_NAME];
struct list_lru list_lru;
struct mem_cgroup *next_shrink;
+ struct shrinker *shrinker;
+ atomic_t nr_stored;
};
/*
@@ -273,17 +279,26 @@ static bool zswap_can_accept(void)
DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
}
+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+ u64 pool_size = 0;
+ int i;
+
+ for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+ pool_size += zpool_get_total_size(pool->zpools[i]);
+
+ return pool_size;
+}
+
static void zswap_update_total_size(void)
{
struct zswap_pool *pool;
u64 total = 0;
- int i;
rcu_read_lock();
list_for_each_entry_rcu(pool, &zswap_pools, list)
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- total += zpool_get_total_size(pool->zpools[i]);
+ total += get_zswap_pool_size(pool);
rcu_read_unlock();
@@ -321,6 +336,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
+/*********************************
+* zswap lruvec functions
+**********************************/
+void zswap_lruvec_state_init(struct lruvec *lruvec)
+{
+ atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
+}
+
+void zswap_lruvec_swapin(struct page *page)
+{
+ struct lruvec *lruvec;
+
+ if (page) {
+ lruvec = folio_lruvec(page_folio(page));
+ atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ }
+}
+
/*********************************
* lru functions
**********************************/
@@ -328,8 +361,24 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
int nid = entry_to_nid(entry);
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
+ atomic_long_t *nr_zswap_protected =
+ &lruvec->zswap_lruvec_state.nr_zswap_protected;
+ unsigned long lru_size, old, new;
+ if (added) {
+ lru_size = list_lru_count_one(list_lru, nid, memcg);
+ old = atomic_long_inc_return(nr_zswap_protected);
+
+ /*
+ * Decay to avoid overflow and adapt to changing workloads.
+ * This is based on LRU reclaim cost decaying heuristics.
+ */
+ do {
+ new = old > lru_size / 4 ? old / 2 : old;
+ } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new));
+ }
mem_cgroup_put(memcg);
return added;
}
@@ -430,6 +479,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
else {
zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
+ atomic_dec(&entry->pool->nr_stored);
zswap_pool_put(entry->pool);
}
zswap_entry_cache_free(entry);
@@ -471,6 +521,95 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
return entry;
}
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+ unsigned long shrink_ret, nr_protected, lru_size;
+ struct zswap_pool *pool = shrinker->private_data;
+ bool encountered_page_in_swapcache = false;
+
+ nr_protected =
+ atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+ /*
+ * Abort if the shrinker is disabled or if we are shrinking into the
+ * protected region.
+ */
+ if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
+ shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+ &encountered_page_in_swapcache);
+
+ if (encountered_page_in_swapcache)
+ return SHRINK_STOP;
+
+ return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = shrinker->private_data;
+ struct mem_cgroup *memcg = sc->memcg;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+ unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+ cgroup_rstat_flush(memcg->css.cgroup);
+ nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+ nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+ /* use pool stats instead of memcg stats */
+ nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+ nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+ if (!zswap_shrinker_enabled || !nr_stored)
+ return 0;
+
+ nr_protected =
+ atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+ /*
+ * Subtract the lru size by an estimate of the number of pages
+ * that should be protected.
+ */
+ nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+ /*
+ * Scale the number of freeable pages by the memory saving factor.
+ * This ensures that the better zswap compresses memory, the fewer
+ * pages we will evict to swap (as it will otherwise incur IO for
+ * relatively small memory saving).
+ */
+ return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+ pool->shrinker =
+ shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+ if (!pool->shrinker)
+ return;
+
+ pool->shrinker->private_data = pool;
+ pool->shrinker->scan_objects = zswap_shrinker_scan;
+ pool->shrinker->count_objects = zswap_shrinker_count;
+ pool->shrinker->batch = 0;
+ pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
/*********************************
* per-cpu code
**********************************/
@@ -666,8 +805,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
spinlock_t *lock, void *arg)
{
struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ bool *encountered_page_in_swapcache = (bool *)arg;
struct mem_cgroup *memcg;
struct zswap_tree *tree;
+ struct lruvec *lruvec;
pgoff_t swpoffset;
enum lru_status ret = LRU_REMOVED_RETRY;
int writeback_result;
@@ -705,8 +846,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
/* we cannot use zswap_lru_add here, because it increments node's lru count */
list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
spin_unlock(lock);
- mem_cgroup_put(memcg);
ret = LRU_RETRY;
+
+ /*
+ * Encountering a page already in swap cache is a sign that we are shrinking
+ * into the warmer region. We should terminate shrinking (if we're in the dynamic
+ * shrinker context).
+ */
+ if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+ ret = LRU_SKIP;
+ *encountered_page_in_swapcache = true;
+ }
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+ /* Increment the protection area to account for the LRU rotation. */
+ atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+
+ mem_cgroup_put(memcg);
goto put_unlock;
}
zswap_written_back_pages++;
@@ -826,6 +981,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
&pool->node);
if (ret)
goto error;
+
+ zswap_alloc_shrinker(pool);
+ if (!pool->shrinker)
+ goto error;
+
pr_debug("using %s compressor\n", pool->tfm_name);
/* being the current pool takes 1 ref; this func expects the
@@ -833,13 +993,19 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- list_lru_init_memcg(&pool->list_lru, NULL);
+ if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+ goto lru_fail;
+ shrinker_register(pool->shrinker);
INIT_WORK(&pool->shrink_work, shrink_worker);
+ atomic_set(&pool->nr_stored, 0);
zswap_pool_debug("created", pool);
return pool;
+lru_fail:
+ list_lru_destroy(&pool->list_lru);
+ shrinker_free(pool->shrinker);
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -897,6 +1063,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
zswap_pool_debug("destroying", pool);
+ shrinker_free(pool->shrinker);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
list_lru_destroy(&pool->list_lru);
@@ -1443,6 +1610,7 @@ bool zswap_store(struct folio *folio)
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
zswap_lru_add(&entry->pool->list_lru, entry);
+ atomic_inc(&entry->pool->nr_stored);
}
spin_unlock(&tree->lock);
--
2.34.1
The interface of list_lru is based on the assumption that the list node
and the data it represents are part of the same allocation, placed on
the correct node and charged to the correct memcg. While this assumption
is valid for existing slab object LRUs such as dentries and inodes, it
is undocumented, and rather inflexible for certain potential list_lru
users (such as the upcoming zswap shrinker and the THP shrinker). It has
caused us a lot of issues during our development.
This patch changes the list_lru interface so that the caller must
explicitly specify the NUMA node and memcg when adding and removing
objects. The old list_lru_add() and list_lru_del() are renamed to
list_lru_add_obj() and list_lru_del_obj(), respectively.
It also extends the list_lru API with a new function, list_lru_putback,
which undoes a previous list_lru_isolate call. Unlike list_lru_add, it
does not increment the LRU node count (as list_lru_isolate does not
decrement the node count). list_lru_putback also allows for explicit
memcg and NUMA node selection.
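For illustration, below is a minimal kernel-style sketch of how a caller
whose list nodes do not live on the same node/memcg as the data they
track (e.g. the zswap entries later in this series) would use the
extended API. The struct and its fields are hypothetical; only the
list_lru calls reflect this patch, and the snippet is not meant to
compile standalone:

    #include <linux/list_lru.h>
    #include <linux/memcontrol.h>

    /* Hypothetical object tracking data that lives elsewhere. */
    struct foo {
        struct list_head lru;
        int nid;                  /* node the tracked data belongs to */
        struct mem_cgroup *memcg; /* memcg the tracked data is charged to */
    };

    static void foo_lru_add(struct list_lru *lru, struct foo *f)
    {
        /*
         * Slab-object users keep the old behavior via the _obj wrapper,
         * which derives nid/memcg from the list_head's own allocation:
         *
         *     list_lru_add_obj(lru, &f->lru);
         *
         * Callers like this one pass the node and memcg explicitly:
         */
        list_lru_add(lru, &f->lru, f->nid, f->memcg);
    }

    /*
     * After list_lru_isolate() (e.g. a failed reclaim attempt), return
     * the item to its sublist without bumping the node count again:
     */
    static void foo_lru_putback(struct list_lru *lru, struct foo *f)
    {
        list_lru_putback(lru, &f->lru, f->nid, f->memcg);
    }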
Suggested-by: Johannes Weiner <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
drivers/android/binder_alloc.c | 5 ++--
fs/dcache.c | 8 +++---
fs/gfs2/quota.c | 6 ++---
fs/inode.c | 4 +--
fs/nfs/nfs42xattr.c | 8 +++---
fs/nfsd/filecache.c | 4 +--
fs/xfs/xfs_buf.c | 6 ++---
fs/xfs/xfs_dquot.c | 2 +-
fs/xfs/xfs_qm.c | 2 +-
include/linux/list_lru.h | 46 +++++++++++++++++++++++++++++---
mm/list_lru.c | 48 ++++++++++++++++++++++++++++------
mm/workingset.c | 4 +--
12 files changed, 108 insertions(+), 35 deletions(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 138f6d43d13b..e80669d4e037 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -285,7 +285,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
trace_binder_free_lru_start(alloc, index);
- ret = list_lru_add(&binder_alloc_lru, &page->lru);
+ ret = list_lru_add_obj(&binder_alloc_lru, &page->lru);
WARN_ON(!ret);
trace_binder_free_lru_end(alloc, index);
@@ -848,7 +848,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
if (!alloc->pages[i].page_ptr)
continue;
- on_lru = list_lru_del(&binder_alloc_lru,
+ on_lru = list_lru_del_obj(&binder_alloc_lru,
&alloc->pages[i].lru);
page_addr = alloc->buffer + i * PAGE_SIZE;
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
@@ -1287,4 +1287,3 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
dest, bytes);
}
-
diff --git a/fs/dcache.c b/fs/dcache.c
index 25ac74d30bff..482d1b34d88d 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -428,7 +428,8 @@ static void d_lru_add(struct dentry *dentry)
this_cpu_inc(nr_dentry_unused);
if (d_is_negative(dentry))
this_cpu_inc(nr_dentry_negative);
- WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
+ WARN_ON_ONCE(!list_lru_add_obj(
+ &dentry->d_sb->s_dentry_lru, &dentry->d_lru));
}
static void d_lru_del(struct dentry *dentry)
@@ -438,7 +439,8 @@ static void d_lru_del(struct dentry *dentry)
this_cpu_dec(nr_dentry_unused);
if (d_is_negative(dentry))
this_cpu_dec(nr_dentry_negative);
- WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
+ WARN_ON_ONCE(!list_lru_del_obj(
+ &dentry->d_sb->s_dentry_lru, &dentry->d_lru));
}
static void d_shrink_del(struct dentry *dentry)
@@ -1240,7 +1242,7 @@ static enum lru_status dentry_lru_isolate(struct list_head *item,
*
* This is guaranteed by the fact that all LRU management
* functions are intermediated by the LRU API calls like
- * list_lru_add and list_lru_del. List movement in this file
+ * list_lru_add_obj and list_lru_del_obj. List movement in this file
* only ever occur through this functions or through callbacks
* like this one, that are called from the LRU API.
*
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 2f1328af34f4..72015594bc83 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -271,7 +271,7 @@ static struct gfs2_quota_data *gfs2_qd_search_bucket(unsigned int hash,
if (qd->qd_sbd != sdp)
continue;
if (lockref_get_not_dead(&qd->qd_lockref)) {
- list_lru_del(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru);
return qd;
}
}
@@ -344,7 +344,7 @@ static void qd_put(struct gfs2_quota_data *qd)
}
qd->qd_lockref.count = 0;
- list_lru_add(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_add_obj(&gfs2_qd_lru, &qd->qd_lru);
spin_unlock(&qd->qd_lockref.lock);
}
@@ -1508,7 +1508,7 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp)
lockref_mark_dead(&qd->qd_lockref);
spin_unlock(&qd->qd_lockref.lock);
- list_lru_del(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru);
list_add(&qd->qd_lru, &dispose);
}
spin_unlock(&qd_lock);
diff --git a/fs/inode.c b/fs/inode.c
index 84bc3c76e5cc..f889ba8dccd9 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -462,7 +462,7 @@ static void __inode_add_lru(struct inode *inode, bool rotate)
if (!mapping_shrinkable(&inode->i_data))
return;
- if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru))
+ if (list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru))
this_cpu_inc(nr_unused);
else if (rotate)
inode->i_state |= I_REFERENCED;
@@ -480,7 +480,7 @@ void inode_add_lru(struct inode *inode)
static void inode_lru_list_del(struct inode *inode)
{
- if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru))
+ if (list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru))
this_cpu_dec(nr_unused);
}
diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
index 2ad66a8922f4..49aaf28a6950 100644
--- a/fs/nfs/nfs42xattr.c
+++ b/fs/nfs/nfs42xattr.c
@@ -132,7 +132,7 @@ nfs4_xattr_entry_lru_add(struct nfs4_xattr_entry *entry)
lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ?
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru;
- return list_lru_add(lru, &entry->lru);
+ return list_lru_add_obj(lru, &entry->lru);
}
static bool
@@ -143,7 +143,7 @@ nfs4_xattr_entry_lru_del(struct nfs4_xattr_entry *entry)
lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ?
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru;
- return list_lru_del(lru, &entry->lru);
+ return list_lru_del_obj(lru, &entry->lru);
}
/*
@@ -349,7 +349,7 @@ nfs4_xattr_cache_unlink(struct inode *inode)
oldcache = nfsi->xattr_cache;
if (oldcache != NULL) {
- list_lru_del(&nfs4_xattr_cache_lru, &oldcache->lru);
+ list_lru_del_obj(&nfs4_xattr_cache_lru, &oldcache->lru);
oldcache->inode = NULL;
}
nfsi->xattr_cache = NULL;
@@ -474,7 +474,7 @@ nfs4_xattr_get_cache(struct inode *inode, int add)
kref_get(&cache->ref);
nfsi->xattr_cache = cache;
cache->inode = inode;
- list_lru_add(&nfs4_xattr_cache_lru, &cache->lru);
+ list_lru_add_obj(&nfs4_xattr_cache_lru, &cache->lru);
}
spin_unlock(&inode->i_lock);
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9c62b4502539..82352c100b49 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -322,7 +322,7 @@ nfsd_file_check_writeback(struct nfsd_file *nf)
static bool nfsd_file_lru_add(struct nfsd_file *nf)
{
set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags);
- if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) {
+ if (list_lru_add_obj(&nfsd_file_lru, &nf->nf_lru)) {
trace_nfsd_file_lru_add(nf);
return true;
}
@@ -331,7 +331,7 @@ static bool nfsd_file_lru_add(struct nfsd_file *nf)
static bool nfsd_file_lru_remove(struct nfsd_file *nf)
{
- if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) {
+ if (list_lru_del_obj(&nfsd_file_lru, &nf->nf_lru)) {
trace_nfsd_file_lru_del(nf);
return true;
}
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 9e7ba04572db..9c2654a8d24b 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -169,7 +169,7 @@ xfs_buf_stale(
atomic_set(&bp->b_lru_ref, 0);
if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
- (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru)))
+ (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru)))
atomic_dec(&bp->b_hold);
ASSERT(atomic_read(&bp->b_hold) >= 1);
@@ -1047,7 +1047,7 @@ xfs_buf_rele(
* buffer for the LRU and clear the (now stale) dispose list
* state flag
*/
- if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) {
+ if (list_lru_add_obj(&bp->b_target->bt_lru, &bp->b_lru)) {
bp->b_state &= ~XFS_BSTATE_DISPOSE;
atomic_inc(&bp->b_hold);
}
@@ -1060,7 +1060,7 @@ xfs_buf_rele(
* was on was the disposal list
*/
if (!(bp->b_state & XFS_BSTATE_DISPOSE)) {
- list_lru_del(&bp->b_target->bt_lru, &bp->b_lru);
+ list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru);
} else {
ASSERT(list_empty(&bp->b_lru));
}
diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
index ac6ba646624d..49f619f5aa96 100644
--- a/fs/xfs/xfs_dquot.c
+++ b/fs/xfs/xfs_dquot.c
@@ -1064,7 +1064,7 @@ xfs_qm_dqput(
struct xfs_quotainfo *qi = dqp->q_mount->m_quotainfo;
trace_xfs_dqput_free(dqp);
- if (list_lru_add(&qi->qi_lru, &dqp->q_lru))
+ if (list_lru_add_obj(&qi->qi_lru, &dqp->q_lru))
XFS_STATS_INC(dqp->q_mount, xs_qm_dquot_unused);
}
xfs_dqunlock(dqp);
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 94a7932ac570..67d0a8564ff3 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -171,7 +171,7 @@ xfs_qm_dqpurge(
* hits zero, so it really should be on the freelist here.
*/
ASSERT(!list_empty(&dqp->q_lru));
- list_lru_del(&qi->qi_lru, &dqp->q_lru);
+ list_lru_del_obj(&qi->qi_lru, &dqp->q_lru);
XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot_unused);
xfs_qm_dqdestroy(dqp);
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..5ef217443299 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -75,6 +75,8 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
* list_lru_add: add an element to the lru list's tail
* @list_lru: the lru pointer
* @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
*
* If the element is already part of a list, this function returns doing
* nothing. Therefore the caller does not need to keep state about whether or
@@ -87,12 +89,28 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
*
* Return value: true if the list was updated, false otherwise
*/
-bool list_lru_add(struct list_lru *lru, struct list_head *item);
+bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
/**
- * list_lru_del: delete an element to the lru list
+ * list_lru_add_obj: add an element to the lru list's tail
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ *
+ * This function is similar to list_lru_add(), but the NUMA node and the
+ * memcg of the sublist are determined by the @item list_head. This
+ * assumption is valid for slab object LRUs such as dentries, inodes, etc.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool list_lru_add_obj(struct list_lru *lru, struct list_head *item);
+
+/**
+ * list_lru_del: delete an element from the lru list
* @list_lru: the lru pointer
* @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
*
* This function works analogously as list_lru_add in terms of list
* manipulation. The comments about an element already pertaining to
@@ -100,7 +118,21 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
*
* Return value: true if the list was updated, false otherwise
*/
-bool list_lru_del(struct list_lru *lru, struct list_head *item);
+bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
+/**
+ * list_lru_del_obj: delete an element from the lru list
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ *
+ * This function is similar to list_lru_del(), but the NUMA node and the
+ * memcg of the sublist are determined by the @item list_head. This
+ * assumption is valid for slab object LRUs such as dentries, inodes, etc.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool list_lru_del_obj(struct list_lru *lru, struct list_head *item);
/**
* list_lru_count_one: return the number of objects currently held by @lru
@@ -136,6 +168,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head);
+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..fcca67ac26ec 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -116,21 +116,19 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
}
#endif /* CONFIG_MEMCG_KMEM */
-bool list_lru_add(struct list_lru *lru, struct list_head *item)
+bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
{
- int nid = page_to_nid(virt_to_page(item));
struct list_lru_node *nlru = &lru->node[nid];
- struct mem_cgroup *memcg;
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, &memcg);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_add_tail(item, &l->list);
/* Set shrinker bit if the first element was added */
if (!l->nr_items++)
- set_shrinker_bit(memcg, nid,
- lru_shrinker_id(lru));
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
nlru->nr_items++;
spin_unlock(&nlru->lock);
return true;
@@ -140,15 +138,25 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
}
EXPORT_SYMBOL_GPL(list_lru_add);
-bool list_lru_del(struct list_lru *lru, struct list_head *item)
+bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add_obj);
+
+bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (!list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, NULL);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_del_init(item);
l->nr_items--;
nlru->nr_items--;
@@ -160,6 +168,16 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
}
EXPORT_SYMBOL_GPL(list_lru_del);
+bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
+{
+ int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del_obj);
+
void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
{
list_del_init(item);
@@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
}
EXPORT_SYMBOL_GPL(list_lru_isolate_move);
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
+ struct list_lru_one *list =
+ list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+ if (list_empty(item)) {
+ list_add_tail(item, &list->list);
+ if (!list->nr_items++)
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+ }
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
unsigned long list_lru_count_one(struct list_lru *lru,
int nid, struct mem_cgroup *memcg)
{
diff --git a/mm/workingset.c b/mm/workingset.c
index 11045febc383..7d3dacab8451 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -631,12 +631,12 @@ void workingset_update_node(struct xa_node *node)
if (node->count && node->count == node->nr_values) {
if (list_empty(&node->private_list)) {
- list_lru_add(&shadow_nodes, &node->private_list);
+ list_lru_add_obj(&shadow_nodes, &node->private_list);
__inc_lruvec_kmem_state(node, WORKINGSET_NODES);
}
} else {
if (!list_empty(&node->private_list)) {
- list_lru_del(&shadow_nodes, &node->private_list);
+ list_lru_del_obj(&shadow_nodes, &node->private_list);
__dec_lruvec_kmem_state(node, WORKINGSET_NODES);
}
}
--
2.34.1
From: Domenico Cerasuolo <[email protected]>
Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.
Suggested-by: Nhat Pham <[email protected]>
Signed-off-by: Domenico Cerasuolo <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
include/linux/vm_event_item.h | 1 +
mm/memcontrol.c | 1 +
mm/vmstat.c | 1 +
mm/zswap.c | 3 +++
4 files changed, 6 insertions(+)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 8abfa1240040..3153359c3841 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -145,6 +145,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
#ifdef CONFIG_ZSWAP
ZSWPIN,
ZSWPOUT,
+ ZSWP_WB,
#endif
#ifdef CONFIG_X86
DIRECT_MAP_LEVEL2_SPLIT,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 61c0c46c2d62..568d9d037a59 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -593,6 +593,7 @@ static const unsigned int memcg_vm_event_stat[] = {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
ZSWPIN,
ZSWPOUT,
+ ZSWP_WB,
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
THP_FAULT_ALLOC,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 359460deb377..5e5572f3b456 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = {
#ifdef CONFIG_ZSWAP
"zswpin",
"zswpout",
+ "zswp_wb",
#endif
#ifdef CONFIG_X86
"direct_map_level2_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index ee8e227e7b0b..b87311e48de9 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -711,6 +711,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
}
zswap_written_back_pages++;
+ if (entry->objcg)
+ count_objcg_event(entry->objcg, ZSWP_WB);
+
/*
* Writeback started successfully, the page now belongs to the
* swapcache. Drop the entry from zswap - unless invalidate already
--
2.34.1
From: Domenico Cerasuolo <[email protected]>
Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
pages from other memcgs. This issue has been previously observed in
practice and mitigated by simply disabling memcg-initiated shrinking:
https://lore.kernel.org/all/[email protected]/T/#u
This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic
(condensed pseudocode of the new store-time flow follows the list):
a) When a store attempt hits a memcg limit, it now triggers a
synchronous reclaim attempt that, if successful, allows the new
hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
trigger an asynchronous reclaim attempt, in which a memcg is
selected for reclaim in a round-robin-like fashion.
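Roughly, the store path then behaves as in the following condensed
pseudocode (based on the zswap_store() and shrink_worker() hunks in the
diff below; locking, refcounting and error handling are omitted, and
shrink_wq refers to zswap's existing writeback workqueue):

    /* (a) memcg limit hit: reclaim synchronously from that memcg's LRU. */
    if (objcg && !obj_cgroup_may_zswap(objcg)) {
        memcg = get_mem_cgroup_from_objcg(objcg);
        ret = shrink_memcg(memcg);  /* walk this memcg's list_lru once */
        mem_cgroup_put(memcg);
        if (ret)
            goto reject;            /* could not make room */
    }

    /* (b) global limit hit: defer to the async worker, which selects
     * victim memcgs round-robin via mem_cgroup_iter() and keeps
     * reclaiming until zswap_can_accept() is satisfied. */
    if (zswap_is_full())
        queue_work(shrink_wq, &pool->shrink_work);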
Signed-off-by: Domenico Cerasuolo <[email protected]>
Co-developed-by: Nhat Pham <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
include/linux/memcontrol.h | 5 +
mm/swap.h | 3 +-
mm/swap_state.c | 23 +++--
mm/zswap.c | 188 ++++++++++++++++++++++++++-----------
4 files changed, 156 insertions(+), 63 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6edd3ec4d8d5..c1846e57011b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
return NULL;
}
+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+ return NULL;
+}
+
static inline bool folio_memcg_kmem(struct folio *folio)
{
return false;
diff --git a/mm/swap.h b/mm/swap.h
index 73c332ee4d91..c0dc73e10e91 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct swap_iocb **plug);
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct mempolicy *mpol, pgoff_t ilx,
- bool *new_page_allocated);
+ bool *new_page_allocated,
+ bool skip_if_exists);
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
struct mempolicy *mpol, pgoff_t ilx);
struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 85d9e5806a6a..040639e1c77e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct mempolicy *mpol, pgoff_t ilx,
- bool *new_page_allocated)
+ bool *new_page_allocated,
+ bool skip_if_exists)
{
struct swap_info_struct *si;
struct folio *folio;
@@ -470,6 +471,16 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
if (err != -EEXIST)
goto fail_put_swap;
+ /* Protect against a recursive call to __read_swap_cache_async()
+ * on the same entry waiting forever here because SWAP_HAS_CACHE
+ * is set but the folio is not the swap cache yet. This can
+ * happen today if mem_cgroup_swapin_charge_folio() below
+ * triggers reclaim through zswap, which may call
+ * __read_swap_cache_async() in the writeback path.
+ */
+ if (skip_if_exists)
+ goto fail_put_swap;
+
/*
* We might race against __delete_from_swap_cache(), and
* stumble across a swap_map entry whose SWAP_HAS_CACHE
@@ -537,7 +548,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
mpol = get_vma_policy(vma, addr, 0, &ilx);
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
mpol_cond_put(mpol);
if (page_allocated)
@@ -654,7 +665,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
/* Ok, do the async read-ahead now */
page = __read_swap_cache_async(
swp_entry(swp_type(entry), offset),
- gfp_mask, mpol, ilx, &page_allocated);
+ gfp_mask, mpol, ilx, &page_allocated, false);
if (!page)
continue;
if (page_allocated) {
@@ -672,7 +683,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
skip:
/* The page was likely read above, so no need for plugging here */
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
return page;
@@ -827,7 +838,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
pte_unmap(pte);
pte = NULL;
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
if (!page)
continue;
if (page_allocated) {
@@ -847,7 +858,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
skip:
/* The page was likely read above, so no need for plugging here */
page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
- &page_allocated);
+ &page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
return page;
diff --git a/mm/zswap.c b/mm/zswap.c
index 2e691cd1a466..ee8e227e7b0b 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -35,6 +35,7 @@
#include <linux/writeback.h>
#include <linux/pagemap.h>
#include <linux/workqueue.h>
+#include <linux/list_lru.h>
#include "swap.h"
#include "internal.h"
@@ -172,8 +173,8 @@ struct zswap_pool {
struct work_struct shrink_work;
struct hlist_node node;
char tfm_name[CRYPTO_MAX_ALG_NAME];
- struct list_head lru;
- spinlock_t lru_lock;
+ struct list_lru list_lru;
+ struct mem_cgroup *next_shrink;
};
/*
@@ -289,15 +290,25 @@ static void zswap_update_total_size(void)
zswap_pool_total_size = total;
}
+static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+ return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+}
+
+static inline int entry_to_nid(struct zswap_entry *entry)
+{
+ return page_to_nid(virt_to_page(entry));
+}
+
/*********************************
* zswap entry functions
**********************************/
static struct kmem_cache *zswap_entry_cache;
-static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
{
struct zswap_entry *entry;
- entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+ entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
if (!entry)
return NULL;
entry->refcount = 1;
@@ -310,6 +321,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+ int nid = entry_to_nid(entry);
+ bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+ int nid = entry_to_nid(entry);
+ bool removed = list_lru_del(list_lru, &entry->lru, nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return removed;
+}
+
/*********************************
* rbtree functions
**********************************/
@@ -394,9 +428,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
if (!entry->length)
atomic_dec(&zswap_same_filled_pages);
else {
- spin_lock(&entry->pool->lru_lock);
- list_del(&entry->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
zswap_pool_put(entry->pool);
}
@@ -630,21 +662,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
zswap_entry_put(tree, entry);
}
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg)
{
- struct zswap_entry *entry;
+ struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ struct mem_cgroup *memcg;
struct zswap_tree *tree;
pgoff_t swpoffset;
- int ret;
+ enum lru_status ret = LRU_REMOVED_RETRY;
+ int writeback_result;
- /* Get an entry off the LRU */
- spin_lock(&pool->lru_lock);
- if (list_empty(&pool->lru)) {
- spin_unlock(&pool->lru_lock);
- return -EINVAL;
- }
- entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
- list_del_init(&entry->lru);
/*
* Once the lru lock is dropped, the entry might get freed. The
* swpoffset is copied to the stack, and entry isn't deref'd again
@@ -652,28 +679,37 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
*/
swpoffset = swp_offset(entry->swpentry);
tree = zswap_trees[swp_type(entry->swpentry)];
- spin_unlock(&pool->lru_lock);
+ list_lru_isolate(l, item);
+ /*
+ * It's safe to drop the lock here because we return either
+ * LRU_REMOVED_RETRY or LRU_RETRY.
+ */
+ spin_unlock(lock);
/* Check for invalidate() race */
spin_lock(&tree->lock);
- if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
- ret = -EAGAIN;
+ if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
goto unlock;
- }
+
/* Hold a reference to prevent a free during writeback */
zswap_entry_get(entry);
spin_unlock(&tree->lock);
- ret = zswap_writeback_entry(entry, tree);
+ writeback_result = zswap_writeback_entry(entry, tree);
spin_lock(&tree->lock);
- if (ret) {
- /* Writeback failed, put entry back on LRU */
- spin_lock(&pool->lru_lock);
- list_move(&entry->lru, &pool->lru);
- spin_unlock(&pool->lru_lock);
+ if (writeback_result) {
+ zswap_reject_reclaim_fail++;
+ memcg = get_mem_cgroup_from_entry(entry);
+ spin_lock(lock);
+ /* we cannot use zswap_lru_add here, because it increments node's lru count */
+ list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
+ spin_unlock(lock);
+ mem_cgroup_put(memcg);
+ ret = LRU_RETRY;
goto put_unlock;
}
+ zswap_written_back_pages++;
/*
* Writeback started successfully, the page now belongs to the
@@ -687,7 +723,34 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
zswap_entry_put(tree, entry);
unlock:
spin_unlock(&tree->lock);
- return ret ? -EAGAIN : 0;
+ spin_lock(lock);
+ return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+ struct zswap_pool *pool;
+ int nid, shrunk = 0;
+
+ /*
+ * Skip zombies because their LRUs are reparented and we would be
+ * reclaiming from the parent instead of the dead memcg.
+ */
+ if (memcg && !mem_cgroup_online(memcg))
+ return -ENOENT;
+
+ pool = zswap_pool_current_get();
+ if (!pool)
+ return -EINVAL;
+
+ for_each_node_state(nid, N_NORMAL_MEMORY) {
+ unsigned long nr_to_walk = 1;
+
+ shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
+ &shrink_memcg_cb, NULL, &nr_to_walk);
+ }
+ zswap_pool_put(pool);
+ return shrunk ? 0 : -EAGAIN;
}
static void shrink_worker(struct work_struct *w)
@@ -696,15 +759,17 @@ static void shrink_worker(struct work_struct *w)
shrink_work);
int ret, failures = 0;
+ /* global reclaim will select cgroup in a round-robin fashion. */
do {
- ret = zswap_reclaim_entry(pool);
- if (ret) {
- zswap_reject_reclaim_fail++;
- if (ret != -EAGAIN)
- break;
- if (++failures == MAX_RECLAIM_RETRIES)
- break;
- }
+ pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+ ret = shrink_memcg(pool->next_shrink);
+
+ if (ret == -EINVAL)
+ break;
+ if (ret && ++failures == MAX_RECLAIM_RETRIES)
+ break;
+
cond_resched();
} while (!zswap_can_accept());
zswap_pool_put(pool);
@@ -765,8 +830,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- INIT_LIST_HEAD(&pool->lru);
- spin_lock_init(&pool->lru_lock);
+ list_lru_init_memcg(&pool->list_lru, NULL);
INIT_WORK(&pool->shrink_work, shrink_worker);
zswap_pool_debug("created", pool);
@@ -832,6 +896,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
+ list_lru_destroy(&pool->list_lru);
+ if (pool->next_shrink)
+ mem_cgroup_put(pool->next_shrink);
for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
zpool_destroy_pool(pool->zpools[i]);
kfree(pool);
@@ -1079,7 +1146,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
/* try to allocate swap cache page */
mpol = get_task_policy(current);
page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
- NO_INTERLEAVE_INDEX, &page_was_allocated);
+ NO_INTERLEAVE_INDEX, &page_was_allocated, true);
if (!page) {
ret = -ENOMEM;
goto fail;
@@ -1145,7 +1212,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
/* start writeback */
__swap_writepage(page, &wbc);
put_page(page);
- zswap_written_back_pages++;
return ret;
@@ -1202,8 +1268,10 @@ bool zswap_store(struct folio *folio)
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct obj_cgroup *objcg = NULL;
+ struct mem_cgroup *memcg = NULL;
struct zswap_pool *pool;
struct zpool *zpool;
+ int lru_alloc_ret;
unsigned int dlen = PAGE_SIZE;
unsigned long handle, value;
char *buf;
@@ -1233,15 +1301,15 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
spin_unlock(&tree->lock);
-
- /*
- * XXX: zswap reclaim does not work with cgroups yet. Without a
- * cgroup-aware entry LRU, we will push out entries system-wide based on
- * local cgroup limits.
- */
objcg = get_obj_cgroup_from_folio(folio);
- if (objcg && !obj_cgroup_may_zswap(objcg))
- goto reject;
+ if (objcg && !obj_cgroup_may_zswap(objcg)) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ if (shrink_memcg(memcg)) {
+ mem_cgroup_put(memcg);
+ goto reject;
+ }
+ mem_cgroup_put(memcg);
+ }
/* reclaim space if needed */
if (zswap_is_full()) {
@@ -1258,7 +1326,7 @@ bool zswap_store(struct folio *folio)
}
/* allocate entry */
- entry = zswap_entry_cache_alloc(GFP_KERNEL);
+ entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
if (!entry) {
zswap_reject_kmemcache_fail++;
goto reject;
@@ -1285,6 +1353,15 @@ bool zswap_store(struct folio *folio)
if (!entry->pool)
goto freepage;
+ if (objcg) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ lru_alloc_ret = memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL);
+ mem_cgroup_put(memcg);
+
+ if (lru_alloc_ret)
+ goto put_pool;
+ }
+
/* compress */
acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
@@ -1361,9 +1438,8 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_add(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ INIT_LIST_HEAD(&entry->lru);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
spin_unlock(&tree->lock);
@@ -1376,6 +1452,7 @@ bool zswap_store(struct folio *folio)
put_dstmem:
mutex_unlock(acomp_ctx->mutex);
+put_pool:
zswap_pool_put(entry->pool);
freepage:
zswap_entry_cache_free(entry);
@@ -1470,9 +1547,8 @@ bool zswap_load(struct folio *folio)
zswap_invalidate_entry(tree, entry);
folio_mark_dirty(folio);
} else if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_move(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
zswap_entry_put(tree, entry);
spin_unlock(&tree->lock);
--
2.34.1
On Tue, Oct 24, 2023 at 1:33 PM Nhat Pham <[email protected]> wrote:
>
> From: Domenico Cerasuolo <[email protected]>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform worload-specific shrinking - an memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/[email protected]/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
>
> a) When a store attempt hits an memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which an memcg is
> selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Co-developed-by: Nhat Pham <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> include/linux/memcontrol.h | 5 +
> mm/swap.h | 3 +-
> mm/swap_state.c | 23 +++--
> mm/zswap.c | 188 ++++++++++++++++++++++++++-----------
> 4 files changed, 156 insertions(+), 63 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 6edd3ec4d8d5..c1846e57011b 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> + return NULL;
> +}
> +
> static inline bool folio_memcg_kmem(struct folio *folio)
> {
> return false;
> diff --git a/mm/swap.h b/mm/swap.h
> index 73c332ee4d91..c0dc73e10e91 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct swap_iocb **plug);
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated);
> + bool *new_page_allocated,
> + bool skip_if_exists);
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 85d9e5806a6a..040639e1c77e 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated)
> + bool *new_page_allocated,
> + bool skip_if_exists)
> {
> struct swap_info_struct *si;
> struct folio *folio;
> @@ -470,6 +471,16 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> if (err != -EEXIST)
> goto fail_put_swap;
>
> + /* Protect against a recursive call to __read_swap_cache_async()
nit: insert new line before "Protect", see surrounding comments.
> + * on the same entry waiting forever here because SWAP_HAS_CACHE
> + * is set but the folio is not the swap cache yet. This can
> + * happen today if mem_cgroup_swapin_charge_folio() below
> + * triggers reclaim through zswap, which may call
> + * __read_swap_cache_async() in the writeback path.
> + */
> + if (skip_if_exists)
> + goto fail_put_swap;
> +
> /*
> * We might race against __delete_from_swap_cache(), and
> * stumble across a swap_map entry whose SWAP_HAS_CACHE
[..]
> +/*********************************
> +* lru functions
> +**********************************/
> +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> + int nid = entry_to_nid(entry);
> + bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
> +
> + mem_cgroup_put(memcg);
Still not fond of the get/put pattern but okay..
> + return added;
> +}
> +
> +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> + int nid = entry_to_nid(entry);
> + bool removed = list_lru_del(list_lru, &entry->lru, nid, memcg);
> +
> + mem_cgroup_put(memcg);
> + return removed;
> +}
> +
> /*********************************
> * rbtree functions
> **********************************/
[..]
> @@ -652,28 +679,37 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> */
> swpoffset = swp_offset(entry->swpentry);
> tree = zswap_trees[swp_type(entry->swpentry)];
> - spin_unlock(&pool->lru_lock);
> + list_lru_isolate(l, item);
> + /*
> + * It's safe to drop the lock here because we return either
> + * LRU_REMOVED_RETRY or LRU_RETRY.
> + */
> + spin_unlock(lock);
>
> /* Check for invalidate() race */
> spin_lock(&tree->lock);
> - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> - ret = -EAGAIN;
> + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> goto unlock;
> - }
> +
> /* Hold a reference to prevent a free during writeback */
> zswap_entry_get(entry);
> spin_unlock(&tree->lock);
>
> - ret = zswap_writeback_entry(entry, tree);
> + writeback_result = zswap_writeback_entry(entry, tree);
>
> spin_lock(&tree->lock);
> - if (ret) {
> - /* Writeback failed, put entry back on LRU */
> - spin_lock(&pool->lru_lock);
> - list_move(&entry->lru, &pool->lru);
> - spin_unlock(&pool->lru_lock);
> + if (writeback_result) {
> + zswap_reject_reclaim_fail++;
> + memcg = get_mem_cgroup_from_entry(entry);
Can this return NULL? Seems like we don't check the return in most/all places.
> + spin_lock(lock);
> + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> + list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
Perhaps we can move this call with the memcg get/put to a helper like
add/del? (e.g. zswap_lru_putback)
We would need to move get_mem_cgroup_from_entry() into the lock but I
think that's okay.
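Something like this (completely untested, just to illustrate what I mean,
reusing the helpers already introduced in this patch) is what I had in mind:

static void zswap_lru_putback(struct list_lru *list_lru,
			      struct zswap_entry *entry)
{
	int nid = entry_to_nid(entry);
	spinlock_t *lock = &list_lru->node[nid].lock;
	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);

	spin_lock(lock);
	/* cannot use zswap_lru_add() here, as it bumps the node's lru count */
	list_lru_putback(list_lru, &entry->lru, nid, memcg);
	spin_unlock(lock);
	mem_cgroup_put(memcg);
}

Then the failure path in the callback only needs to call zswap_lru_putback()
and re-take the lru lock before returning LRU_RETRY, as it already does.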
> + spin_unlock(lock);
> + mem_cgroup_put(memcg);
> + ret = LRU_RETRY;
> goto put_unlock;
> }
> + zswap_written_back_pages++;
>
> /*
> * Writeback started successfully, the page now belongs to the
> @@ -687,7 +723,34 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> zswap_entry_put(tree, entry);
> unlock:
> spin_unlock(&tree->lock);
> - return ret ? -EAGAIN : 0;
> + spin_lock(lock);
> + return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> + int nid, shrunk = 0;
> +
> + /*
> + * Skip zombies because their LRUs are reparented and we would be
> + * reclaiming from the parent instead of the dead memcg.
> + */
> + if (memcg && !mem_cgroup_online(memcg))
> + return -ENOENT;
> +
> + pool = zswap_pool_current_get();
> + if (!pool)
> + return -EINVAL;
> +
> + for_each_node_state(nid, N_NORMAL_MEMORY) {
> + unsigned long nr_to_walk = 1;
> +
> + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> + &shrink_memcg_cb, NULL, &nr_to_walk);
> + }
> + zswap_pool_put(pool);
> + return shrunk ? 0 : -EAGAIN;
> }
>
> static void shrink_worker(struct work_struct *w)
> @@ -696,15 +759,17 @@ static void shrink_worker(struct work_struct *w)
> shrink_work);
> int ret, failures = 0;
>
> + /* global reclaim will select cgroup in a round-robin fashion. */
> do {
> - ret = zswap_reclaim_entry(pool);
> - if (ret) {
> - zswap_reject_reclaim_fail++;
> - if (ret != -EAGAIN)
> - break;
> - if (++failures == MAX_RECLAIM_RETRIES)
> - break;
> - }
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
I think this can be a problem. We hold a ref to a memcg here until the
next time we shrink, which can be a long time IIUC. This can cause the
memcg to linger as a zombie. I understand it is one memcg per-zswap
pool, but I am still unsure about it.
MGLRU maintains a memcg LRU for global reclaim that gets properly
cleaned up when a memcg is going away, so that's one option, although
complicated.
A second option would be to hold a pointer to the objcg instead, which
should be less problematic (although we are still holding that objcg
hostage indefinitely). The problem here is that if the objcg gets
reparented, next time we will start at the parent of the memcg we
stopped at last time, which tbh doesn't sound bad at all to me.
A third option would be to flag the memcg such that when it is getting
offlined we can call into zswap to reset pool->next_shrink (or move it
to the parent) and drop the ref. Although synchronization can get
hairy when racing with offlining.
Not sure what's the right solution, but I prefer we don't hold any
memcgs hostages indefinitely. I also think if we end up using
mem_cgroup_iter() then there should be a mem_cgroup_iter_break()
somewhere if/when breaking the iteration.
> +
> + ret = shrink_memcg(pool->next_shrink);
> +
> + if (ret == -EINVAL)
> + break;
> + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> + break;
> +
> cond_resched();
> } while (!zswap_can_accept());
> zswap_pool_put(pool);
[..]
> @@ -1233,15 +1301,15 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> -
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
Here we choose to replicate mem_cgroup_put().
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1258,7 +1326,7 @@ bool zswap_store(struct folio *folio)
> }
>
> /* allocate entry */
> - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> if (!entry) {
> zswap_reject_kmemcache_fail++;
> goto reject;
> @@ -1285,6 +1353,15 @@ bool zswap_store(struct folio *folio)
> if (!entry->pool)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + lru_alloc_ret = memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL);
> + mem_cgroup_put(memcg);
> +
> + if (lru_alloc_ret)
> + goto put_pool;
> + }
Yet here we choose to have a single mem_cgroup_put() and stash the
output in a variable.
Consistency would be nice.
> +
> /* compress */
> acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
> @@ -1361,9 +1438,8 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_add(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + INIT_LIST_HEAD(&entry->lru);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> spin_unlock(&tree->lock);
>
> @@ -1376,6 +1452,7 @@ bool zswap_store(struct folio *folio)
>
> put_dstmem:
> mutex_unlock(acomp_ctx->mutex);
> +put_pool:
> zswap_pool_put(entry->pool);
> freepage:
> zswap_entry_cache_free(entry);
> @@ -1470,9 +1547,8 @@ bool zswap_load(struct folio *folio)
> zswap_invalidate_entry(tree, entry);
> folio_mark_dirty(folio);
> } else if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_move(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> + zswap_lru_add(&entry->pool->list_lru, entry);
Can we use list_move_tail() here? (perhaps wrapped in a helper if needed).
> }
> zswap_entry_put(tree, entry);
> spin_unlock(&tree->lock);
> --
> 2.34.1
On Tue, Oct 24, 2023 at 8:17 PM Yosry Ahmed <[email protected]> wrote:
>
> On Tue, Oct 24, 2023 at 1:33 PM Nhat Pham <[email protected]> wrote:
> >
> > From: Domenico Cerasuolo <[email protected]>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform worload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/[email protected]/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which an memcg is
> > selected for reclaim in a round-robin-like fashion.
> >
> > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > Co-developed-by: Nhat Pham <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> > include/linux/memcontrol.h | 5 +
> > mm/swap.h | 3 +-
> > mm/swap_state.c | 23 +++--
> > mm/zswap.c | 188 ++++++++++++++++++++++++++-----------
> > 4 files changed, 156 insertions(+), 63 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 6edd3ec4d8d5..c1846e57011b 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > return NULL;
> > }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > + return NULL;
> > +}
> > +
> > static inline bool folio_memcg_kmem(struct folio *folio)
> > {
> > return false;
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 73c332ee4d91..c0dc73e10e91 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct swap_iocb **plug);
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated);
> > + bool *new_page_allocated,
> > + bool skip_if_exists);
> > struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > struct mempolicy *mpol, pgoff_t ilx);
> > struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 85d9e5806a6a..040639e1c77e 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> >
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated)
> > + bool *new_page_allocated,
> > + bool skip_if_exists)
> > {
> > struct swap_info_struct *si;
> > struct folio *folio;
> > @@ -470,6 +471,16 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > if (err != -EEXIST)
> > goto fail_put_swap;
> >
> > + /* Protect against a recursive call to __read_swap_cache_async()
>
> nit: insert new line before "Protect", see surrounding comments.
>
> > + * on the same entry waiting forever here because SWAP_HAS_CACHE
> > + * is set but the folio is not the swap cache yet. This can
> > + * happen today if mem_cgroup_swapin_charge_folio() below
> > + * triggers reclaim through zswap, which may call
> > + * __read_swap_cache_async() in the writeback path.
> > + */
> > + if (skip_if_exists)
> > + goto fail_put_swap;
> > +
> > /*
> > * We might race against __delete_from_swap_cache(), and
> > * stumble across a swap_map entry whose SWAP_HAS_CACHE
> [..]
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > + int nid = entry_to_nid(entry);
> > + bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
> > +
> > + mem_cgroup_put(memcg);
>
> Still not fond of the get/put pattern but okay..
Actually, Johannes and I took another look to see if we can replace
the memcg reference getting with just rcu_read_lock().
It seems there might be a race between zswap LRU manipulation
and memcg offlining - not just with the rcu_read_lock() idea, but also
with our current implementation!
I'll shoot another email with more details later when I'm sure of it
one way or another...
>
> > + return added;
> > +}
> > +
> > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > + int nid = entry_to_nid(entry);
> > + bool removed = list_lru_del(list_lru, &entry->lru, nid, memcg);
> > +
> > + mem_cgroup_put(memcg);
> > + return removed;
> > +}
> > +
> > /*********************************
> > * rbtree functions
> > **********************************/
> [..]
> > @@ -652,28 +679,37 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > */
> > swpoffset = swp_offset(entry->swpentry);
> > tree = zswap_trees[swp_type(entry->swpentry)];
> > - spin_unlock(&pool->lru_lock);
> > + list_lru_isolate(l, item);
> > + /*
> > + * It's safe to drop the lock here because we return either
> > + * LRU_REMOVED_RETRY or LRU_RETRY.
> > + */
> > + spin_unlock(lock);
> >
> > /* Check for invalidate() race */
> > spin_lock(&tree->lock);
> > - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > - ret = -EAGAIN;
> > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > goto unlock;
> > - }
> > +
> > /* Hold a reference to prevent a free during writeback */
> > zswap_entry_get(entry);
> > spin_unlock(&tree->lock);
> >
> > - ret = zswap_writeback_entry(entry, tree);
> > + writeback_result = zswap_writeback_entry(entry, tree);
> >
> > spin_lock(&tree->lock);
> > - if (ret) {
> > - /* Writeback failed, put entry back on LRU */
> > - spin_lock(&pool->lru_lock);
> > - list_move(&entry->lru, &pool->lru);
> > - spin_unlock(&pool->lru_lock);
> > + if (writeback_result) {
> > + zswap_reject_reclaim_fail++;
> > + memcg = get_mem_cgroup_from_entry(entry);
>
> Can this return NULL? Seems like we don't check the return in most/all places.
I believe so, but memcg experts should fact check me on this.
It's roughly the same pattern as zswap charging/uncharging:
obj_cgroup_uncharge_zswap(entry->objcg, entry->length)
-> getting memcg (under rcu_read_lock())
>
> > + spin_lock(lock);
> > + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > + list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
>
> Perhaps we can move this call with the memcg get/put to a helper like
> add/del? (e.g. zswap_lru_putback)
>
> We would need to move get_mem_cgroup_from_entry() into the lock but I
> think that's okay.
We probably could, but that sounds like extra code for not a lot of gains, no?
>
> > + spin_unlock(lock);
> > + mem_cgroup_put(memcg);
> > + ret = LRU_RETRY;
> > goto put_unlock;
> > }
> > + zswap_written_back_pages++;
> >
> > /*
> > * Writeback started successfully, the page now belongs to the
> > @@ -687,7 +723,34 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > zswap_entry_put(tree, entry);
> > unlock:
> > spin_unlock(&tree->lock);
> > - return ret ? -EAGAIN : 0;
> > + spin_lock(lock);
> > + return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > + int nid, shrunk = 0;
> > +
> > + /*
> > + * Skip zombies because their LRUs are reparented and we would be
> > + * reclaiming from the parent instead of the dead memcg.
> > + */
> > + if (memcg && !mem_cgroup_online(memcg))
> > + return -ENOENT;
> > +
> > + pool = zswap_pool_current_get();
> > + if (!pool)
> > + return -EINVAL;
> > +
> > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > + unsigned long nr_to_walk = 1;
> > +
> > + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> > + &shrink_memcg_cb, NULL, &nr_to_walk);
> > + }
> > + zswap_pool_put(pool);
> > + return shrunk ? 0 : -EAGAIN;
> > }
> >
> > static void shrink_worker(struct work_struct *w)
> > @@ -696,15 +759,17 @@ static void shrink_worker(struct work_struct *w)
> > shrink_work);
> > int ret, failures = 0;
> >
> > + /* global reclaim will select cgroup in a round-robin fashion. */
> > do {
> > - ret = zswap_reclaim_entry(pool);
> > - if (ret) {
> > - zswap_reject_reclaim_fail++;
> > - if (ret != -EAGAIN)
> > - break;
> > - if (++failures == MAX_RECLAIM_RETRIES)
> > - break;
> > - }
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
>
> I think this can be a problem. We hold a ref to a memcg here until the
> next time we shrink, which can be a long time IIUC. This can cause the
> memcg to linger as a zombie. I understand it is one memcg per-zswap
> pool, but I am still unsure about it.
>
> MGLRU maintains a memcg LRU for global reclaim that gets properly
> cleaned up when a memcg is going away, so that's one option, although
> complicated.
>
> A second option would be to hold a pointer to the objcg instead, which
> should be less problematic (although we are still holding that objcg
> hostage indefinitely). The problem here is that if the objcg gets
> reparented, next time we will start at the parent of the memcg we
> stopped at last time, which tbh doesn't sound bad at all to me.
>
> A third option would be to flag the memcg such that when it is getting
> offlined we can call into zswap to reset pool->next_shrink (or move it
> to the parent) and drop the ref. Although synchronization can get
> hairy when racing with offlining.
>
> Not sure what's the right solution, but I prefer we don't hold any
> memcgs hostages indefinitely. I also think if we end up using
> mem_cgroup_iter() then there should be a mem_cgroup_iter_break()
> somewhere if/when breaking the iteration.
>
I'm not sure if this is that big of a problem in the first place, but
if it is, doing something similar to MGLRU is probably the cleanest:
when the memcg is freed, trigger the zswap_exit_memcg() callback,
which will loop through all the zswap pools and update pool->next_shrink
where appropriate.
Note that we only have one pool per (compression algorithm x allocator)
combination, so there cannot be that many pools, correct?
Johannes suggested this idea to me (my apologies if I butcher it)
during one of our conversations. That sounds relatively easy IIUC.
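A very rough sketch of what I have in mind, reusing the zswap_exit_memcg()
name from above (untested; the locking via zswap_pools_lock is just a first
guess, and it ignores the race with a concurrent shrink_worker() for now):

void zswap_exit_memcg(struct mem_cgroup *memcg)
{
	struct zswap_pool *pool;

	spin_lock(&zswap_pools_lock);
	list_for_each_entry(pool, &zswap_pools, list) {
		/*
		 * Advance the round-robin cursor past the dying memcg;
		 * mem_cgroup_iter() drops the reference on the previous
		 * position for us.
		 */
		if (pool->next_shrink == memcg)
			pool->next_shrink =
				mem_cgroup_iter(NULL, pool->next_shrink, NULL);
	}
	spin_unlock(&zswap_pools_lock);
}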
> > +
> > + ret = shrink_memcg(pool->next_shrink);
> > +
> > + if (ret == -EINVAL)
> > + break;
> > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > + break;
> > +
> > cond_resched();
> > } while (!zswap_can_accept());
> > zswap_pool_put(pool);
> [..]
> > @@ -1233,15 +1301,15 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > spin_unlock(&tree->lock);
> > -
> > - /*
> > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > - * local cgroup limits.
> > - */
> > objcg = get_obj_cgroup_from_folio(folio);
> > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > - goto reject;
> > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (shrink_memcg(memcg)) {
> > + mem_cgroup_put(memcg);
> > + goto reject;
> > + }
> > + mem_cgroup_put(memcg);
>
> Here we choose to replicate mem_cgroup_put().
>
> > + }
> >
> > /* reclaim space if needed */
> > if (zswap_is_full()) {
> > @@ -1258,7 +1326,7 @@ bool zswap_store(struct folio *folio)
> > }
> >
> > /* allocate entry */
> > - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > if (!entry) {
> > zswap_reject_kmemcache_fail++;
> > goto reject;
> > @@ -1285,6 +1353,15 @@ bool zswap_store(struct folio *folio)
> > if (!entry->pool)
> > goto freepage;
> >
> > + if (objcg) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + lru_alloc_ret = memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL);
> > + mem_cgroup_put(memcg);
> > +
> > + if (lru_alloc_ret)
> > + goto put_pool;
> > + }
>
> Yet here we choose to have a single mem_cgroup_put() and stash the
> output in a variable.
>
> Consistency would be nice.
No strong opinions here, but yeah we should fix it to make it
consistent.
>
> > +
> > /* compress */
> > acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> >
> > @@ -1361,9 +1438,8 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_add(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + INIT_LIST_HEAD(&entry->lru);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > spin_unlock(&tree->lock);
> >
> > @@ -1376,6 +1452,7 @@ bool zswap_store(struct folio *folio)
> >
> > put_dstmem:
> > mutex_unlock(acomp_ctx->mutex);
> > +put_pool:
> > zswap_pool_put(entry->pool);
> > freepage:
> > zswap_entry_cache_free(entry);
> > @@ -1470,9 +1547,8 @@ bool zswap_load(struct folio *folio)
> > zswap_invalidate_entry(tree, entry);
> > folio_mark_dirty(folio);
> > } else if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_move(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
>
> Can we use list_move_tail() here? (perhaps wrapped in a helper if needed).
Maybe zswap_lru_move_tail()? :)
FWIW, the list_lru interface does not have a move_tail function (weird, I know).
So this seems to be the common pattern for LRU rotation with list_lru.
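If we do want the helper, it would just wrap that pattern, something like
(untested):

static void zswap_lru_move_tail(struct list_lru *list_lru,
				struct zswap_entry *entry)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
	int nid = entry_to_nid(entry);

	/* delete + re-add puts the entry back at the MRU end of its list */
	if (list_lru_del(list_lru, &entry->lru, nid, memcg))
		list_lru_add(list_lru, &entry->lru, nid, memcg);
	mem_cgroup_put(memcg);
}

and zswap_load() would then call zswap_lru_move_tail() instead of the
del + add pair.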
>
> > }
> > zswap_entry_put(tree, entry);
> > spin_unlock(&tree->lock);
> > --
> > 2.34.1
On Fri, Oct 27, 2023 at 2:10 PM Nhat Pham <[email protected]> wrote:
>
> On Tue, Oct 24, 2023 at 8:17 PM Yosry Ahmed <[email protected]> wrote:
> >
> > On Tue, Oct 24, 2023 at 1:33 PM Nhat Pham <[email protected]> wrote:
> > >
> > > From: Domenico Cerasuolo <[email protected]>
> > >
> > > Currently, we only have a single global LRU for zswap. This makes it
> > > impossible to perform worload-specific shrinking - an memcg cannot
> > > determine which pages in the pool it owns, and often ends up writing
> > > pages from other memcgs. This issue has been previously observed in
> > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > >
> > > https://lore.kernel.org/all/[email protected]/T/#u
> > >
> > > This patch fully resolves the issue by replacing the global zswap LRU
> > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > >
> > > a) When a store attempt hits an memcg limit, it now triggers a
> > > synchronous reclaim attempt that, if successful, allows the new
> > > hotter page to be accepted by zswap.
> > > b) If the store attempt instead hits the global zswap limit, it will
> > > trigger an asynchronous reclaim attempt, in which an memcg is
> > > selected for reclaim in a round-robin-like fashion.
> > >
> > > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > > Co-developed-by: Nhat Pham <[email protected]>
> > > Signed-off-by: Nhat Pham <[email protected]>
> > > ---
> > > include/linux/memcontrol.h | 5 +
> > > mm/swap.h | 3 +-
> > > mm/swap_state.c | 23 +++--
> > > mm/zswap.c | 188 ++++++++++++++++++++++++++-----------
> > > 4 files changed, 156 insertions(+), 63 deletions(-)
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 6edd3ec4d8d5..c1846e57011b 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -1187,6 +1187,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > > return NULL;
> > > }
> > >
> > > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > > +{
> > > + return NULL;
> > > +}
> > > +
> > > static inline bool folio_memcg_kmem(struct folio *folio)
> > > {
> > > return false;
> > > diff --git a/mm/swap.h b/mm/swap.h
> > > index 73c332ee4d91..c0dc73e10e91 100644
> > > --- a/mm/swap.h
> > > +++ b/mm/swap.h
> > > @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > struct swap_iocb **plug);
> > > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > struct mempolicy *mpol, pgoff_t ilx,
> > > - bool *new_page_allocated);
> > > + bool *new_page_allocated,
> > > + bool skip_if_exists);
> > > struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > > struct mempolicy *mpol, pgoff_t ilx);
> > > struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index 85d9e5806a6a..040639e1c77e 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> > >
> > > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > struct mempolicy *mpol, pgoff_t ilx,
> > > - bool *new_page_allocated)
> > > + bool *new_page_allocated,
> > > + bool skip_if_exists)
> > > {
> > > struct swap_info_struct *si;
> > > struct folio *folio;
> > > @@ -470,6 +471,16 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > if (err != -EEXIST)
> > > goto fail_put_swap;
> > >
> > > + /* Protect against a recursive call to __read_swap_cache_async()
> >
> > nit: insert new line before "Protect", see surrounding comments.
> >
> > > + * on the same entry waiting forever here because SWAP_HAS_CACHE
> > > + * is set but the folio is not the swap cache yet. This can
> > > + * happen today if mem_cgroup_swapin_charge_folio() below
> > > + * triggers reclaim through zswap, which may call
> > > + * __read_swap_cache_async() in the writeback path.
> > > + */
> > > + if (skip_if_exists)
> > > + goto fail_put_swap;
> > > +
> > > /*
> > > * We might race against __delete_from_swap_cache(), and
> > > * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > [..]
> > > +/*********************************
> > > +* lru functions
> > > +**********************************/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > > + int nid = entry_to_nid(entry);
> > > + bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
> > > +
> > > + mem_cgroup_put(memcg);
> >
> > Still not fond of the get/put pattern but okay..
>
> Actually, Johannes and I took another look to see if we can replace
> the memcg reference getting with just rcu_read_lock().
>
> It seems there might be a race between zswap LRU manipulation
> and memcg offlining - not just with the rcu_read_lock() idea, but also
> with our current implementation!
>
> I'll shoot another email with more details later when I'm sure of it
> one way or another...
>
> >
> > > + return added;
> > > +}
> > > +
> > > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > > + int nid = entry_to_nid(entry);
> > > + bool removed = list_lru_del(list_lru, &entry->lru, nid, memcg);
> > > +
> > > + mem_cgroup_put(memcg);
> > > + return removed;
> > > +}
> > > +
> > > /*********************************
> > > * rbtree functions
> > > **********************************/
> > [..]
> > > @@ -652,28 +679,37 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > */
> > > swpoffset = swp_offset(entry->swpentry);
> > > tree = zswap_trees[swp_type(entry->swpentry)];
> > > - spin_unlock(&pool->lru_lock);
> > > + list_lru_isolate(l, item);
> > > + /*
> > > + * It's safe to drop the lock here because we return either
> > > + * LRU_REMOVED_RETRY or LRU_RETRY.
> > > + */
> > > + spin_unlock(lock);
> > >
> > > /* Check for invalidate() race */
> > > spin_lock(&tree->lock);
> > > - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > > - ret = -EAGAIN;
> > > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > > goto unlock;
> > > - }
> > > +
> > > /* Hold a reference to prevent a free during writeback */
> > > zswap_entry_get(entry);
> > > spin_unlock(&tree->lock);
> > >
> > > - ret = zswap_writeback_entry(entry, tree);
> > > + writeback_result = zswap_writeback_entry(entry, tree);
> > >
> > > spin_lock(&tree->lock);
> > > - if (ret) {
> > > - /* Writeback failed, put entry back on LRU */
> > > - spin_lock(&pool->lru_lock);
> > > - list_move(&entry->lru, &pool->lru);
> > > - spin_unlock(&pool->lru_lock);
> > > + if (writeback_result) {
> > > + zswap_reject_reclaim_fail++;
> > > + memcg = get_mem_cgroup_from_entry(entry);
> >
> > Can this return NULL? Seems like we don't check the return in most/all places.
>
> I believe so, but memcg experts should fact check me on this.
> It's roughly the same pattern as zswap charging/uncharging:
>
> obj_cgroup_uncharge_zswap(entry->objcg, entry->length)
> -> getting memcg (under rcu_read_lock())
>
> >
> > > + spin_lock(lock);
> > > + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > + list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> >
> > Perhaps we can move this call with the memcg get/put to a helper like
> > add/del? (e.g. zswap_lru_putback)
> >
> > We would need to move get_mem_cgroup_from_entry() into the lock but I
> > think that's okay.
>
> We probably could, but that sounds like extra code for not a lot of gains, no?
>
> >
> > > + spin_unlock(lock);
> > > + mem_cgroup_put(memcg);
> > > + ret = LRU_RETRY;
> > > goto put_unlock;
> > > }
> > > + zswap_written_back_pages++;
> > >
> > > /*
> > > * Writeback started successfully, the page now belongs to the
> > > @@ -687,7 +723,34 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > zswap_entry_put(tree, entry);
> > > unlock:
> > > spin_unlock(&tree->lock);
> > > - return ret ? -EAGAIN : 0;
> > > + spin_lock(lock);
> > > + return ret;
> > > +}
> > > +
> > > +static int shrink_memcg(struct mem_cgroup *memcg)
> > > +{
> > > + struct zswap_pool *pool;
> > > + int nid, shrunk = 0;
> > > +
> > > + /*
> > > + * Skip zombies because their LRUs are reparented and we would be
> > > + * reclaiming from the parent instead of the dead memcg.
> > > + */
> > > + if (memcg && !mem_cgroup_online(memcg))
> > > + return -ENOENT;
> > > +
> > > + pool = zswap_pool_current_get();
> > > + if (!pool)
> > > + return -EINVAL;
> > > +
> > > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > > + unsigned long nr_to_walk = 1;
> > > +
> > > + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> > > + &shrink_memcg_cb, NULL, &nr_to_walk);
> > > + }
> > > + zswap_pool_put(pool);
> > > + return shrunk ? 0 : -EAGAIN;
> > > }
> > >
> > > static void shrink_worker(struct work_struct *w)
> > > @@ -696,15 +759,17 @@ static void shrink_worker(struct work_struct *w)
> > > shrink_work);
> > > int ret, failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > - if (ret) {
> > > - zswap_reject_reclaim_fail++;
> > > - if (ret != -EAGAIN)
> > > - break;
> > > - if (++failures == MAX_RECLAIM_RETRIES)
> > > - break;
> > > - }
> > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> >
> > I think this can be a problem. We hold a ref to a memcg here until the
> > next time we shrink, which can be a long time IIUC. This can cause the
> > memcg to linger as a zombie. I understand it is one memcg per-zswap
> > pool, but I am still unsure about it.
> >
> > MGLRU maintains a memcg LRU for global reclaim that gets properly
> > cleaned up when a memcg is going away, so that's one option, although
> > complicated.
> >
> > A second option would be to hold a pointer to the objcg instead, which
> > should be less problematic (although we are still holding that objcg
> > hostage indefinitely). The problem here is that if the objcg gets
> > reparented, next time we will start at the parent of the memcg we
> > stopped at last time, which tbh doesn't sound bad at all to me.
> >
> > A third option would be to flag the memcg such that when it is getting
> > offlined we can call into zswap to reset pool->next_shrink (or move it
> > to the parent) and drop the ref. Although synchronization can get
> > hairy when racing with offlining.
> >
> > Not sure what's the right solution, but I prefer we don't hold any
> > memcgs hostages indefinitely. I also think if we end up using
> > mem_cgroup_iter() then there should be a mem_cgroup_iter_break()
> > somewhere if/when breaking the iteration.
> >
>
> I'm not sure if this is that big of a problem in the first place, but
> if it is, doing something similar to MGLRU is probably the cleanest:
> when the memcg is freed, trigger the zswap_exit_memcg() callback,
> which will loop through all the zswap pools and update pool->next_shrink
> where appropriate.
Correction: I think I should have said "when the memcg is offlined" (which
I assume is when we want to drop the memcg's reference so that
zswap will not hold the cgroup hostage in the zombie stage).
The remainder of the idea still holds of course.
>
> Note that we only have one pool per (compression algorithm x allocator)
> combinations, so there cannot be that many pools, correct?
>
> Johannes suggests this idea to me (my apologies if I butcher it)
> during one of our conversations. That sounds relatively easy IIUC.
>
> > > +
> > > + ret = shrink_memcg(pool->next_shrink);
> > > +
> > > + if (ret == -EINVAL)
> > > + break;
> > > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > > + break;
> > > +
> > > cond_resched();
> > > } while (!zswap_can_accept());
> > > zswap_pool_put(pool);
> > [..]
> > > @@ -1233,15 +1301,15 @@ bool zswap_store(struct folio *folio)
> > > zswap_invalidate_entry(tree, dupentry);
> > > }
> > > spin_unlock(&tree->lock);
> > > -
> > > - /*
> > > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > - * local cgroup limits.
> > > - */
> > > objcg = get_obj_cgroup_from_folio(folio);
> > > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > > - goto reject;
> > > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> > > + if (shrink_memcg(memcg)) {
> > > + mem_cgroup_put(memcg);
> > > + goto reject;
> > > + }
> > > + mem_cgroup_put(memcg);
> >
> > Here we choose to replicate mem_cgroup_put().
> >
> > > + }
> > >
> > > /* reclaim space if needed */
> > > if (zswap_is_full()) {
> > > @@ -1258,7 +1326,7 @@ bool zswap_store(struct folio *folio)
> > > }
> > >
> > > /* allocate entry */
> > > - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > > + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > > if (!entry) {
> > > zswap_reject_kmemcache_fail++;
> > > goto reject;
> > > @@ -1285,6 +1353,15 @@ bool zswap_store(struct folio *folio)
> > > if (!entry->pool)
> > > goto freepage;
> > >
> > > + if (objcg) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> > > + lru_alloc_ret = memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL);
> > > + mem_cgroup_put(memcg);
> > > +
> > > + if (lru_alloc_ret)
> > > + goto put_pool;
> > > + }
> >
> > Yet here we choose to have a single mem_cgroup_put() and stash the
> > output in a variable.
> >
> > Consistency would be nice.
>
> No strong opinions here, but yeah we should fix it to make it
> consistent.
>
> >
> > > +
> > > /* compress */
> > > acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > >
> > > @@ -1361,9 +1438,8 @@ bool zswap_store(struct folio *folio)
> > > zswap_invalidate_entry(tree, dupentry);
> > > }
> > > if (entry->length) {
> > > - spin_lock(&entry->pool->lru_lock);
> > > - list_add(&entry->lru, &entry->pool->lru);
> > > - spin_unlock(&entry->pool->lru_lock);
> > > + INIT_LIST_HEAD(&entry->lru);
> > > + zswap_lru_add(&entry->pool->list_lru, entry);
> > > }
> > > spin_unlock(&tree->lock);
> > >
> > > @@ -1376,6 +1452,7 @@ bool zswap_store(struct folio *folio)
> > >
> > > put_dstmem:
> > > mutex_unlock(acomp_ctx->mutex);
> > > +put_pool:
> > > zswap_pool_put(entry->pool);
> > > freepage:
> > > zswap_entry_cache_free(entry);
> > > @@ -1470,9 +1547,8 @@ bool zswap_load(struct folio *folio)
> > > zswap_invalidate_entry(tree, entry);
> > > folio_mark_dirty(folio);
> > > } else if (entry->length) {
> > > - spin_lock(&entry->pool->lru_lock);
> > > - list_move(&entry->lru, &entry->pool->lru);
> > > - spin_unlock(&entry->pool->lru_lock);
> > > + zswap_lru_del(&entry->pool->list_lru, entry);
> > > + zswap_lru_add(&entry->pool->list_lru, entry);
> >
> > Can we use list_move_tail() here? (perhaps wrapped in a helper if needed).
>
> Maybe zswap_lru_move_tail()? :)
>
> FWIW, list_lru() interface does not have a move_tail function, (weird, I know).
> So this seems to be the common pattern for LRU rotation with list_lru.
>
> >
> > > }
> > > zswap_entry_put(tree, entry);
> > > spin_unlock(&tree->lock);
> > > --
> > > 2.34.1
> > [..]
> > > +/*********************************
> > > +* lru functions
> > > +**********************************/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > + struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > > + int nid = entry_to_nid(entry);
> > > + bool added = list_lru_add(list_lru, &entry->lru, nid, memcg);
> > > +
> > > + mem_cgroup_put(memcg);
> >
> > Still not fond of the get/put pattern but okay..
>
> Actually, Johannes and I took another look to see if we can replace
> the memcg reference getting with just rcu_read_lock().
>
> It seems there might be a race between zswap LRU manipulation
> and memcg offlining - not just with the rcu_read_lock() idea, but also
> with our current implementation!
>
> I'll shoot another email with more details later when I'm sure of it
> one way or another...
>
Interesting, well at least something came out of my complaining :)
> > [..]
> > > @@ -652,28 +679,37 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > */
> > > swpoffset = swp_offset(entry->swpentry);
> > > tree = zswap_trees[swp_type(entry->swpentry)];
> > > - spin_unlock(&pool->lru_lock);
> > > + list_lru_isolate(l, item);
> > > + /*
> > > + * It's safe to drop the lock here because we return either
> > > + * LRU_REMOVED_RETRY or LRU_RETRY.
> > > + */
> > > + spin_unlock(lock);
> > >
> > > /* Check for invalidate() race */
> > > spin_lock(&tree->lock);
> > > - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > > - ret = -EAGAIN;
> > > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > > goto unlock;
> > > - }
> > > +
> > > /* Hold a reference to prevent a free during writeback */
> > > zswap_entry_get(entry);
> > > spin_unlock(&tree->lock);
> > >
> > > - ret = zswap_writeback_entry(entry, tree);
> > > + writeback_result = zswap_writeback_entry(entry, tree);
> > >
> > > spin_lock(&tree->lock);
> > > - if (ret) {
> > > - /* Writeback failed, put entry back on LRU */
> > > - spin_lock(&pool->lru_lock);
> > > - list_move(&entry->lru, &pool->lru);
> > > - spin_unlock(&pool->lru_lock);
> > > + if (writeback_result) {
> > > + zswap_reject_reclaim_fail++;
> > > + memcg = get_mem_cgroup_from_entry(entry);
> >
> > Can this return NULL? Seems like we don't check the return in most/all places.
>
> I believe so, but memcg experts should fact check me on this.
If that's the case, there should be NULL checks, no?
> It's roughly the same pattern as zswap charging/uncharging:
>
> obj_cgroup_uncharge_zswap(entry->objcg, entry->length)
> -> getting memcg (under rcu_read_lock())
>
> >
> > > + spin_lock(lock);
> > > + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > + list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> >
> > Perhaps we can move this call with the memcg get/put to a helper like
> > add/del? (e.g. zswap_lru_putback)
> >
> > We would need to move get_mem_cgroup_from_entry() into the lock but I
> > think that's okay.
>
> We probably could, but that sounds like extra code for not a lot of gains, no?
I don't feel strongly, just a fan of consistency.
>
> >
> > > + spin_unlock(lock);
> > > + mem_cgroup_put(memcg);
> > > + ret = LRU_RETRY;
> > > goto put_unlock;
> > > }
> > > + zswap_written_back_pages++;
> > >
> > > /*
> > > * Writeback started successfully, the page now belongs to the
[..]
> > > @@ -696,15 +759,17 @@ static void shrink_worker(struct work_struct *w)
> > > shrink_work);
> > > int ret, failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > - if (ret) {
> > > - zswap_reject_reclaim_fail++;
> > > - if (ret != -EAGAIN)
> > > - break;
> > > - if (++failures == MAX_RECLAIM_RETRIES)
> > > - break;
> > > - }
> > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> >
> > I think this can be a problem. We hold a ref to a memcg here until the
> > next time we shrink, which can be a long time IIUC. This can cause the
> > memcg to linger as a zombie. I understand it is one memcg per-zswap
> > pool, but I am still unsure about it.
> >
> > MGLRU maintains a memcg LRU for global reclaim that gets properly
> > cleaned up when a memcg is going away, so that's one option, although
> > complicated.
> >
> > A second option would be to hold a pointer to the objcg instead, which
> > should be less problematic (although we are still holding that objcg
> > hostage indefinitely). The problem here is that if the objcg gets
> > reparented, next time we will start at the parent of the memcg we
> > stopped at last time, which tbh doesn't sound bad at all to me.
> >
> > A third option would be to flag the memcg such that when it is getting
> > offlined we can call into zswap to reset pool->next_shrink (or move it
> > to the parent) and drop the ref. Although synchronization can get
> > hairy when racing with offlining.
> >
> > Not sure what's the right solution, but I prefer we don't hold any
> > memcgs hostages indefinitely. I also think if we end up using
> > mem_cgroup_iter() then there should be a mem_cgroup_iter_break()
> > somewhere if/when breaking the iteration.
> >
>
> I'm not sure if this is that big of a problem in the first place, but
> if it is, doing something similar to MGLRU is probably the cleanest:
> when the memcg is freed, trigger the zswap_exit_memcg() callback,
> which will loop through all the zswap pools and update pool->next_shrink
> where appropriate.
>
> Note that we only have one pool per (compression algorithm x allocator)
> combinations, so there cannot be that many pools, correct?
>
> Johannes suggests this idea to me (my apologies if I butcher it)
> during one of our conversations. That sounds relatively easy IIUC.
Be careful that there will be a race between memcg offlining and
zswap's usage of pool->next_shrink. AFAICT there is no lock to prevent
offlining so there will have to be some sort of dance here to make
sure everything works correctly.
cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
about memory controller + list_lru reparenting logic than me.
There seems to be a race between memcg offlining and zswap’s
cgroup-aware LRU implementation:
CPU0                                  CPU1
zswap_lru_add()                       mem_cgroup_css_offline()
  get_mem_cgroup_from_objcg()
                                        memcg_offline_kmem()
                                          memcg_reparent_objcgs()
                                          memcg_reparent_list_lrus()
                                            memcg_reparent_list_lru()
                                              memcg_reparent_list_lru_node()
  list_lru_add()
                                              memcg_list_lru_free()
Essentially: on CPU0, zswap gets the memcg from the entry's objcg
(before the objcgs are reparented). Then it performs list_lru_add()
after the list_lru entries reparenting (memcg_reparent_list_lru_node())
step. If the list_lru of the memcg being offlined has not been freed
(i.e before the memcg_list_lru_free() call), then the list_lru_add()
call would succeed - but the list will be freed soon after. The new
zswap entry as a result will not be subjected to future reclaim
attempt. IOW, this list_lru_add() call is effectively swallowed. And
worse, there might be a crash when we invalidate the zswap_entry in the
future (which will perform a list_lru removal).
Within get_mem_cgroup_from_objcg(), none of the following seem
sufficient to prevent this race:
1. Perform the objcg-to-memcg lookup inside a rcu_read_lock()
section.
2. Checking if the memcg is freed yet (with css_tryget()) (what
we're currently doing in this patch series).
3. Checking if the memcg is still online (with css_tryget_online())
The memcg can still be offlined down the line.
I've discussed this privately with Johannes, and it seems like the
cleanest solution here is to move the reparenting logic down to release
stage. That way, when get_mem_cgroup_from_objcg() returns,
zswap_lru_add() is given an memcg that is reparenting-safe (until we
drop the obtained reference).
On Tue, Oct 31, 2023 at 6:26 PM Nhat Pham <[email protected]> wrote:
>
> cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
> about memory controller + list_lru reparenting logic than me.
>
> There seems to be a race between memcg offlining and zswap’s
> cgroup-aware LRU implementation:
>
> CPU0 CPU1
> zswap_lru_add() mem_cgroup_css_offline()
> get_mem_cgroup_from_objcg()
> memcg_offline_kmem()
> memcg_reparent_objcgs()
> memcg_reparent_list_lrus()
> memcg_reparent_list_lru()
> memcg_reparent_list_lru_node()
> list_lru_add()
> memcg_list_lru_free()
>
>
> Essentially: on CPU0, zswap gets the memcg from the entry's objcg
> (before the objcgs are reparented). Then it performs list_lru_add()
> after the list_lru entries reparenting (memcg_reparent_list_lru_node())
> step. If the list_lru of the memcg being offlined has not been freed
> (i.e before the memcg_list_lru_free() call), then the list_lru_add()
> call would succeed - but the list will be freed soon after. The new
> zswap entry as a result will not be subjected to future reclaim
> attempt. IOW, this list_lru_add() call is effectively swallowed. And
> worse, there might be a crash when we invalidate the zswap_entry in the
> future (which will perform a list_lru removal).
>
> Within get_mem_cgroup_from_objcg(), none of the following seem
> sufficient to prevent this race:
>
> 1. Perform the objcg-to-memcg lookup inside a rcu_read_lock()
> section.
> 2. Checking if the memcg is freed yet (with css_tryget()) (what
> we're currently doing in this patch series).
> 3. Checking if the memcg is still online (with css_tryget_online())
> The memcg can still be offlined down the line.
>
>
> I've discussed this privately with Johannes, and it seems like the
> cleanest solution here is to move the reparenting logic down to release
> stage. That way, when get_mem_cgroup_from_objcg() returns,
> zswap_lru_add() is given an memcg that is reparenting-safe (until we
> drop the obtained reference).
The objcgs hold refs to the memcg, which are dropped during
reparenting. How can we do reparenting in the release stage, which
IIUC happens after all refs are dropped?
On Tue, Oct 31, 2023 at 6:33 PM Yosry Ahmed <[email protected]> wrote:
>
> On Tue, Oct 31, 2023 at 6:26 PM Nhat Pham <[email protected]> wrote:
> >
> > cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
> > about memory controller + list_lru reparenting logic than me.
> >
> > There seems to be a race between memcg offlining and zswap’s
> > cgroup-aware LRU implementation:
> >
> > CPU0 CPU1
> > zswap_lru_add() mem_cgroup_css_offline()
> > get_mem_cgroup_from_objcg()
> > memcg_offline_kmem()
> > memcg_reparent_objcgs()
> > memcg_reparent_list_lrus()
> > memcg_reparent_list_lru()
> > memcg_reparent_list_lru_node()
> > list_lru_add()
> > memcg_list_lru_free()
> >
> >
> > Essentially: on CPU0, zswap gets the memcg from the entry's objcg
> > (before the objcgs are reparented). Then it performs list_lru_add()
> > after the list_lru entries reparenting (memcg_reparent_list_lru_node())
> > step. If the list_lru of the memcg being offlined has not been freed
> > (i.e before the memcg_list_lru_free() call), then the list_lru_add()
> > call would succeed - but the list will be freed soon after. The new
> > zswap entry as a result will not be subjected to future reclaim
> > attempt. IOW, this list_lru_add() call is effectively swallowed. And
> > worse, there might be a crash when we invalidate the zswap_entry in the
> > future (which will perform a list_lru removal).
> >
> > Within get_mem_cgroup_from_objcg(), none of the following seem
> > sufficient to prevent this race:
> >
> > 1. Perform the objcg-to-memcg lookup inside a rcu_read_lock()
> > section.
> > 2. Checking if the memcg is freed yet (with css_tryget()) (what
> > we're currently doing in this patch series).
> > 3. Checking if the memcg is still online (with css_tryget_online())
> > The memcg can still be offlined down the line.
> >
> >
> > I've discussed this privately with Johannes, and it seems like the
> > cleanest solution here is to move the reparenting logic down to release
> > stage. That way, when get_mem_cgroup_from_objcg() returns,
> > zswap_lru_add() is given an memcg that is reparenting-safe (until we
> > drop the obtained reference).
>
> The objcgs hold refs to the memcg, which are dropped during
> reparenting. How can we do reparenting in the release stage, which
> IIUC happens after all refs are dropped?
Oh I meant just the list_lru reparenting. The list_lru themselves
don't hold any ref I believe, right? Then it's safe to perform this at
the release stage.
(also, I think I might have messed up the encoding for the email
above. Let me know if people cannot view it, and I'll resend it :( )
> On Nov 1, 2023, at 09:26, Nhat Pham <[email protected]> wrote:
>
> cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
> about memory controller + list_lru reparenting logic than me.
>
> There seems to be a race between memcg offlining and zswap’s
> cgroup-aware LRU implementation:
>
> CPU0 CPU1
> zswap_lru_add() mem_cgroup_css_offline()
> get_mem_cgroup_from_objcg()
> memcg_offline_kmem()
> memcg_reparent_objcgs()
> memcg_reparent_list_lrus()
> memcg_reparent_list_lru()
> memcg_reparent_list_lru_node()
> list_lru_add()
> memcg_list_lru_free()
>
>
> Essentially: on CPU0, zswap gets the memcg from the entry's objcg
> (before the objcgs are reparented). Then it performs list_lru_add()
> after the list_lru entries reparenting (memcg_reparent_list_lru_node())
> step. If the list_lru of the memcg being offlined has not been freed
> (i.e before the memcg_list_lru_free() call), then the list_lru_add()
> call would succeed - but the list will be freed soon after. The new
No worries. list_lru_add() will add the object to the lru list of
the parent of the memcg being offlined, because the ->kmemcg_id of the
memcg being offlined will be changed to its parent's ->kmemcg_id before memcg_reparent_list_lru().
Muchun,
Thanks
> zswap entry as a result will not be subjected to future reclaim
> attempt. IOW, this list_lru_add() call is effectively swallowed. And
> worse, there might be a crash when we invalidate the zswap_entry in the
> future (which will perform a list_lru removal).
>
> Within get_mem_cgroup_from_objcg(), none of the following seem
> sufficient to prevent this race:
>
> 1. Perform the objcg-to-memcg lookup inside a rcu_read_lock()
> section.
> 2. Checking if the memcg is freed yet (with css_tryget()) (what
> we're currently doing in this patch series).
> 3. Checking if the memcg is still online (with css_tryget_online())
> The memcg can still be offlined down the line.
>
>
> I've discussed this privately with Johannes, and it seems like the
> cleanest solution here is to move the reparenting logic down to release
> stage. That way, when get_mem_cgroup_from_objcg() returns,
> zswap_lru_add() is given an memcg that is reparenting-safe (until we
> drop the obtained reference).
On Tue, Oct 31, 2023 at 8:07 PM Muchun Song <[email protected]> wrote:
>
>
>
> > On Nov 1, 2023, at 09:26, Nhat Pham <[email protected]> wrote:
> >
> > cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
> > about memory controller + list_lru reparenting logic than me.
> >
> > There seems to be a race between memcg offlining and zswap’s
> > cgroup-aware LRU implementation:
> >
> > CPU0 CPU1
> > zswap_lru_add() mem_cgroup_css_offline()
> > get_mem_cgroup_from_objcg()
> > memcg_offline_kmem()
> > memcg_reparent_objcgs()
> > memcg_reparent_list_lrus()
> > memcg_reparent_list_lru()
> > memcg_reparent_list_lru_node()
> > list_lru_add()
> > memcg_list_lru_free()
> >
> >
> > Essentially: on CPU0, zswap gets the memcg from the entry's objcg
> > (before the objcgs are reparented). Then it performs list_lru_add()
> > after the list_lru entry reparenting step (memcg_reparent_list_lru_node()).
> > If the list_lru of the memcg being offlined has not yet been freed
> > (i.e. before the memcg_list_lru_free() call), then the list_lru_add()
> > call would succeed - but the list will be freed soon after. The new
>
> No worries. list_lru_add() will add the object to the lru list of
> the parent of the memcg being offlined, because the ->kmemcg_id of the
> memcg being offlined will be changed to its parent's ->kmemcg_id before memcg_reparent_list_lru().
>
Ohhh that is subtle. Thanks for pointing this out, Muchun!
In that case, I think Yosry is right after all! We don't even need to get
a reference to the memcg:
rcu_read_lock();
memcg = obj_cgroup_memcg(objcg);
list_lru_add();
rcu_read_unlock();
As long as we're inside this rcu section, we're guaranteed to get
an un-freed memcg. Now it could be offlined etc., but as Muchun has
pointed out, the list_lru_add() call will still do the right thing - it will
either add the new entry to the parent's list (if this happens after the
kmemcg_id update) or to the child's list (if it happens before the
list_lru reparenting). Both of these scenarios are fine.
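So on the zswap side something along these lines should be enough
(rough sketch - the NULL-objcg case and the lru size accounting are
left out):

static void zswap_lru_add(struct list_lru *list_lru,
                          struct zswap_entry *entry)
{
        /* node the zswap_entry was allocated on */
        int nid = page_to_nid(virt_to_page(entry));
        struct mem_cgroup *memcg;

        /*
         * No css reference needed: the RCU read lock keeps the memcg
         * from being freed. It may still get offlined concurrently,
         * but per the above, list_lru_add() then simply lands on the
         * parent's list (or the entry is reparented right afterwards).
         */
        rcu_read_lock();
        memcg = obj_cgroup_memcg(entry->objcg);
        list_lru_add(list_lru, &entry->lru, nid, memcg);
        rcu_read_unlock();
}

The removal path can follow the same pattern with list_lru_del().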
> Muchun,
> Thanks
>
> > zswap entry as a result will not be subjected to future reclaim
> > attempts. IOW, this list_lru_add() call is effectively swallowed. And
> > worse, there might be a crash when we invalidate the zswap_entry in the
> > future (which will perform a list_lru removal).
> >
> > Within get_mem_cgroup_from_objcg(), none of the following seem
> > sufficient to prevent this race:
> >
> > 1. Performing the objcg-to-memcg lookup inside an rcu_read_lock()
> > section.
> > 2. Checking if the memcg has already been freed (with css_tryget())
> > (what we're currently doing in this patch series).
> > 3. Checking if the memcg is still online (with css_tryget_online()).
> > The memcg can still be offlined down the line.
> >
> >
> > I've discussed this privately with Johannes, and it seems like the
> > cleanest solution here is to move the reparenting logic down to the
> > release stage. That way, when get_mem_cgroup_from_objcg() returns,
> > zswap_lru_add() is given a memcg that is reparenting-safe (until we
> > drop the obtained reference).
>
> On Nov 2, 2023, at 01:44, Nhat Pham <[email protected]> wrote:
>
> On Tue, Oct 31, 2023 at 8:07 PM Muchun Song <[email protected]> wrote:
>>
>>
>>
>>> On Nov 1, 2023, at 09:26, Nhat Pham <[email protected]> wrote:
>>>
>>> cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
>>> about memory controller + list_lru reparenting logic than me.
>>>
>>> There seems to be a race between memcg offlining and zswap’s
>>> cgroup-aware LRU implementation:
>>>
>>> CPU0                                  CPU1
>>> zswap_lru_add()                       mem_cgroup_css_offline()
>>>  get_mem_cgroup_from_objcg()
>>>                                        memcg_offline_kmem()
>>>                                         memcg_reparent_objcgs()
>>>                                         memcg_reparent_list_lrus()
>>>                                          memcg_reparent_list_lru()
>>>                                           memcg_reparent_list_lru_node()
>>>  list_lru_add()
>>>                                           memcg_list_lru_free()
>>>
>>>
>>> Essentially: on CPU0, zswap gets the memcg from the entry's objcg
>>> (before the objcgs are reparented). Then it performs list_lru_add()
>>> after the list_lru entry reparenting step (memcg_reparent_list_lru_node()).
>>> If the list_lru of the memcg being offlined has not yet been freed
>>> (i.e. before the memcg_list_lru_free() call), then the list_lru_add()
>>> call would succeed - but the list will be freed soon after. The new
>>
>> No worries. list_lru_add() will add the object to the lru list of
>> the parent of the memcg being offlined, because the ->kmemcg_id of the
>> memcg being offlined will be changed to its parent's ->kmemcg_id before memcg_reparent_list_lru().
>>
>
> Ohhh that is subtle. Thanks for pointing this out, Muchun!
>
> In that case, I think Yosry is right after all! We don't even need to get
> a reference to the memcg:
>
> rcu_read_lock();
> memcg = obj_cgroup_memcg(objcg);
> list_lru_add();
> rcu_read_unlock();
>
> As long as we're inside this rcu section, we're guaranteed to get
> an un-freed memcg. Now it could be offlined etc., but as Muchun has
Right.
Thanks.
> pointed out, the list_lru_add() call will still do the right thing - it will
> either add the new entry to the parent's list (if this happens after the
> kmemcg_id update) or to the child's list (if it happens before the
> list_lru reparenting). Both of these scenarios are fine.
>
>> Muchun,
>> Thanks
>>
>>> zswap entry as a result will not be subjected to future reclaim
>>> attempts. IOW, this list_lru_add() call is effectively swallowed. And
>>> worse, there might be a crash when we invalidate the zswap_entry in the
>>> future (which will perform a list_lru removal).
>>>
>>> Within get_mem_cgroup_from_objcg(), none of the following seem
>>> sufficient to prevent this race:
>>>
>>> 1. Performing the objcg-to-memcg lookup inside an rcu_read_lock()
>>> section.
>>> 2. Checking if the memcg has already been freed (with css_tryget())
>>> (what we're currently doing in this patch series).
>>> 3. Checking if the memcg is still online (with css_tryget_online()).
>>> The memcg can still be offlined down the line.
>>>
>>>
>>> I've discussed this privately with Johannes, and it seems like the
>>> cleanest solution here is to move the reparenting logic down to the
>>> release stage. That way, when get_mem_cgroup_from_objcg() returns,
>>> zswap_lru_add() is given a memcg that is reparenting-safe (until we
>>> drop the obtained reference).