2023-09-19 18:16:34

by Nhat Pham

Subject: [PATCH v2 0/2] workload-specific and memory pressure-driven zswap writeback

Changelog:
v2:
* Fix loongarch compiler errors
* Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM

There are currently several issues with zswap writeback:

1. There is only a single global LRU for zswap. This makes it impossible
to perform workload-specific shrinking - a memcg under memory
pressure cannot determine which pages in the pool it owns, and often
ends up writing back pages that belong to other memcgs. This issue
has been previously observed in practice and mitigated by simply
disabling memcg-initiated shrinking:

https://lore.kernel.org/all/[email protected]/T/#u

But this solution leaves a lot to be desired, as we still do not have an
avenue for a memcg to free up its own memory locked up in zswap.

2. We only shrink the zswap pool when the user-defined limit is hit.
This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed
ahead of time, as this depends on the workload (specifically, on
factors such as memory access patterns and compressibility of the
memory pages).

This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.
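
At its core, the structural change (simplified from patch 1 below) replaces
the single pool-wide LRU with a list_lru, which internally maintains one
sublist per (memcg, NUMA node) pair:

    /* before: one global LRU per zswap pool */
    struct zswap_pool {
            /* ... other fields ... */
            struct list_head lru;
            spinlock_t lru_lock;
    };

    /* after: per-memcg, per-NUMA LRUs, plus a round-robin cursor used by
     * the asynchronous shrink worker for global reclaim */
    struct zswap_pool {
            /* ... other fields ... */
            struct list_lru list_lru;
            struct mem_cgroup *next_shrink;
    };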

On a benchmark that we have run:

(without the shrinker)
real -- mean: 153.27s, median: 153.199s
sys -- mean: 541.652s, median: 541.903s
user -- mean: 4384.967s, median: 4385.471s

(with the shrinker)
real -- mean: 151.4956s, median: 151.456s
sys -- mean: 461.146s, median: 465.656s
user -- mean: 4384.7118s, median: 4384.675s

We observed a 14-15% reduction in kernel CPU time, which translated to
over 1% reduction in real time.

On another benchmark, where there was a lot more cold memory residing in
zswap, we observed even more pronounced gains:

(without the shrinker)
real -- mean: 157.525s, median: 157.281s
sys -- mean: 769.3082s, median: 780.545s
user -- mean: 4378.1622s, median: 4378.286s

(with the shrinker)
real -- mean: 152.9608s, median: 152.845s
sys -- mean: 517.4446s, median: 506.749s
user -- mean: 4387.694s, median: 4387.935s

Here, we saw around 32-35% reduction in kernel CPU time, which
translated to 2.8% reduction in real time. These results confirm our
hypothesis that the shrinker is more helpful the more cold memory we
have.

Domenico Cerasuolo (1):
zswap: make shrinking memcg-aware

Nhat Pham (1):
zswap: shrinks zswap pool based on memory pressure

Documentation/admin-guide/mm/zswap.rst | 12 +
include/linux/list_lru.h | 39 +++
include/linux/memcontrol.h | 6 +
include/linux/mmzone.h | 14 +
include/linux/zswap.h | 9 +
mm/list_lru.c | 46 ++-
mm/memcontrol.c | 33 ++
mm/swap_state.c | 50 +++-
mm/zswap.c | 397 ++++++++++++++++++++++---
9 files changed, 548 insertions(+), 58 deletions(-)

--
2.34.1


2023-09-19 23:42:48

by Nhat Pham

Subject: [PATCH v2 1/2] zswap: make shrinking memcg-aware

From: Domenico Cerasuolo <[email protected]>

Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
back pages that belong to other memcgs. This issue has been previously
observed in practice and mitigated by simply disabling memcg-initiated
shrinking:

https://lore.kernel.org/all/[email protected]/T/#u

This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:

a) When a store attempt hits a memcg limit, it now triggers a
synchronous reclaim attempt that, if successful, allows the new
hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
trigger an asynchronous reclaim attempt, in which a memcg is
selected for reclaim in a round-robin-like fashion (see the
simplified store-path sketch below).
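
In store-path terms, this looks roughly as follows (a simplified sketch of
the zswap_store() changes in the diff below):

    objcg = get_obj_cgroup_from_folio(folio);
    if (objcg && !obj_cgroup_may_zswap(objcg)) {
            /* memcg limit hit: attempt synchronous, memcg-local reclaim */
            memcg = get_mem_cgroup_from_objcg(objcg);
            if (shrink_memcg(memcg)) {
                    mem_cgroup_put(memcg);
                    goto reject;    /* nothing could be written back */
            }
            mem_cgroup_put(memcg);
    }

    if (zswap_is_full()) {
            /* global limit hit: queue the asynchronous shrink worker,
             * which walks memcgs round-robin via pool->next_shrink */
            queue_work(shrink_wq, &pool->shrink_work);
    }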

Signed-off-by: Domenico Cerasuolo <[email protected]>
Co-developed-by: Nhat Pham <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
include/linux/list_lru.h | 39 +++++++
include/linux/memcontrol.h | 5 +
include/linux/zswap.h | 9 ++
mm/list_lru.c | 46 ++++++--
mm/swap_state.c | 19 ++++
mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
6 files changed, 287 insertions(+), 52 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..b517b4e2c7c4 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
*/
bool list_lru_add(struct list_lru *lru, struct list_head *item);

+/**
+ * __list_lru_add: add an element to a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
+ *
+ * This function is similar to list_lru_add(), but it allows the caller to
+ * specify the sublist to which the item should be added. This can be useful
+ * when the list_head node is not necessarily in the same cgroup and NUMA node
+ * as the data it represents, such as zswap, where the list_head node could be
+ * from kswapd and the data from a different cgroup altogether.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
/**
* list_lru_del: delete an element to the lru list
* @list_lru: the lru pointer
@@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
*/
bool list_lru_del(struct list_lru *lru, struct list_head *item);

+/**
+ * __list_lru_delete: delete an element from a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
/**
* list_lru_count_one: return the number of objects currently held by @lru
* @lru: the lru pointer.
@@ -137,6 +167,15 @@ void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head);

+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
struct list_lru_one *list, spinlock_t *lock, void *cb_arg);

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 67b823dfa47d..05d34b328d9d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
return NULL;
}

+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+ return NULL;
+}
+
static inline bool folio_memcg_kmem(struct folio *folio)
{
return false;
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 2a60ce39cfde..04f80b64a09b 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -15,6 +15,8 @@ bool zswap_load(struct folio *folio);
void zswap_invalidate(int type, pgoff_t offset);
void zswap_swapon(int type);
void zswap_swapoff(int type);
+bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry);
+void zswap_insert_swpentry_into_lru(swp_entry_t swpentry);

#else

@@ -32,6 +34,13 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
static inline void zswap_swapon(int type) {}
static inline void zswap_swapoff(int type) {}

+static inline bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
+{
+ return false;
+}
+
+static inline void zswap_insert_swpentry_into_lru(swp_entry_t swpentry) {}
+
#endif

#endif /* _LINUX_ZSWAP_H */
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..37c5c2ef6c0e 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -119,18 +119,26 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return __list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add);
+
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
- struct mem_cgroup *memcg;
struct list_lru_one *l;

spin_lock(&nlru->lock);
if (list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, &memcg);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_add_tail(item, &l->list);
/* Set shrinker bit if the first element was added */
if (!l->nr_items++)
- set_shrinker_bit(memcg, nid,
- lru_shrinker_id(lru));
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
nlru->nr_items++;
spin_unlock(&nlru->lock);
return true;
@@ -138,17 +146,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
spin_unlock(&nlru->lock);
return false;
}
-EXPORT_SYMBOL_GPL(list_lru_add);
+EXPORT_SYMBOL_GPL(__list_lru_add);

bool list_lru_del(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return __list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del);
+
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
struct list_lru_one *l;

spin_lock(&nlru->lock);
if (!list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, NULL);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_del_init(item);
l->nr_items--;
nlru->nr_items--;
@@ -158,7 +176,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
spin_unlock(&nlru->lock);
return false;
}
-EXPORT_SYMBOL_GPL(list_lru_del);
+EXPORT_SYMBOL_GPL(__list_lru_del);

void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
{
@@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
}
EXPORT_SYMBOL_GPL(list_lru_isolate_move);

+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
+ struct list_lru_one *list =
+ list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+ if (list_empty(item)) {
+ list_add_tail(item, &list->list);
+ if (!list->nr_items++)
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+ }
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
unsigned long list_lru_count_one(struct list_lru *lru,
int nid, struct mem_cgroup *memcg)
{
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..1c826737aacb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -21,6 +21,7 @@
#include <linux/swap_slots.h>
#include <linux/huge_mm.h>
#include <linux/shmem_fs.h>
+#include <linux/zswap.h>
#include "internal.h"
#include "swap.h"

@@ -417,6 +418,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct folio *folio;
struct page *page;
void *shadow = NULL;
+ bool zswap_lru_removed = false;

*new_page_allocated = false;
si = get_swap_device(entry);
@@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
__folio_set_locked(folio);
__folio_set_swapbacked(folio);

+ /*
+ * Page fault might itself trigger reclaim, on a zswap object that
+ * corresponds to the same swap entry. However, as the swap entry has
+ * previously been pinned, the task will run into an infinite loop trying
+ * to pin the swap entry again.
+ *
+ * To prevent this from happening, we remove it from the zswap
+ * LRU to prevent its reclamation.
+ */
+ zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
+
if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
goto fail_unlock;

@@ -497,6 +510,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
if (shadow)
workingset_refault(folio, shadow);

+ if (zswap_lru_removed)
+ zswap_insert_swpentry_into_lru(entry);
+
/* Caller will initiate read into locked folio */
folio_add_lru(folio);
*new_page_allocated = true;
@@ -506,6 +522,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
return page;

fail_unlock:
+ if (zswap_lru_removed)
+ zswap_insert_swpentry_into_lru(entry);
+
put_swap_folio(folio, entry);
folio_unlock(folio);
folio_put(folio);
diff --git a/mm/zswap.c b/mm/zswap.c
index 412b1409a0d7..1a469e5d5197 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,7 @@
#include <linux/writeback.h>
#include <linux/pagemap.h>
#include <linux/workqueue.h>
+#include <linux/list_lru.h>

#include "swap.h"
#include "internal.h"
@@ -171,8 +172,8 @@ struct zswap_pool {
struct work_struct shrink_work;
struct hlist_node node;
char tfm_name[CRYPTO_MAX_ALG_NAME];
- struct list_head lru;
- spinlock_t lru_lock;
+ struct list_lru list_lru;
+ struct mem_cgroup *next_shrink;
};

/*
@@ -209,6 +210,7 @@ struct zswap_entry {
unsigned long value;
};
struct obj_cgroup *objcg;
+ int nid;
struct list_head lru;
};

@@ -309,6 +311,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}

+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = entry->objcg ?
+ get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = entry->objcg ?
+ get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ bool removed = __list_lru_del(list_lru, &entry->lru, entry->nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return removed;
+}
+
/*********************************
* rbtree functions
**********************************/
@@ -393,9 +418,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
if (!entry->length)
atomic_dec(&zswap_same_filled_pages);
else {
- spin_lock(&entry->pool->lru_lock);
- list_del(&entry->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
zswap_pool_put(entry->pool);
}
@@ -629,21 +652,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
zswap_entry_put(tree, entry);
}

-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg)
{
- struct zswap_entry *entry;
+ struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ struct mem_cgroup *memcg;
struct zswap_tree *tree;
pgoff_t swpoffset;
- int ret;
+ enum lru_status ret = LRU_REMOVED_RETRY;
+ int writeback_result;

- /* Get an entry off the LRU */
- spin_lock(&pool->lru_lock);
- if (list_empty(&pool->lru)) {
- spin_unlock(&pool->lru_lock);
- return -EINVAL;
- }
- entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
- list_del_init(&entry->lru);
/*
* Once the lru lock is dropped, the entry might get freed. The
* swpoffset is copied to the stack, and entry isn't deref'd again
@@ -651,26 +669,35 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
*/
swpoffset = swp_offset(entry->swpentry);
tree = zswap_trees[swp_type(entry->swpentry)];
- spin_unlock(&pool->lru_lock);
+ list_lru_isolate(l, item);
+ spin_unlock(lock);

/* Check for invalidate() race */
spin_lock(&tree->lock);
if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
- ret = -EAGAIN;
goto unlock;
}
/* Hold a reference to prevent a free during writeback */
zswap_entry_get(entry);
spin_unlock(&tree->lock);

- ret = zswap_writeback_entry(entry, tree);
+ writeback_result = zswap_writeback_entry(entry, tree);

spin_lock(&tree->lock);
- if (ret) {
- /* Writeback failed, put entry back on LRU */
- spin_lock(&pool->lru_lock);
- list_move(&entry->lru, &pool->lru);
- spin_unlock(&pool->lru_lock);
+ if (writeback_result) {
+ zswap_reject_reclaim_fail++;
+
+ /* Check for invalidate() race */
+ if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
+ goto put_unlock;
+
+ memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ spin_lock(lock);
+ /* we cannot use zswap_lru_add here, because it increments node's lru count */
+ list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
+ spin_unlock(lock);
+ mem_cgroup_put(memcg);
+ ret = LRU_RETRY;
goto put_unlock;
}

@@ -686,19 +713,63 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
zswap_entry_put(tree, entry);
unlock:
spin_unlock(&tree->lock);
- return ret ? -EAGAIN : 0;
+ spin_lock(lock);
+ return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+ struct zswap_pool *pool;
+ int nid, shrunk = 0;
+ bool is_empty = true;
+
+ pool = zswap_pool_current_get();
+ if (!pool)
+ return -EINVAL;
+
+ for_each_node_state(nid, N_NORMAL_MEMORY) {
+ unsigned long nr_to_walk = 1;
+
+ if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
+ NULL, &nr_to_walk))
+ shrunk++;
+ if (!nr_to_walk)
+ is_empty = false;
+ }
+ zswap_pool_put(pool);
+
+ if (is_empty)
+ return -EINVAL;
+ if (shrunk)
+ return 0;
+ return -EAGAIN;
}

static void shrink_worker(struct work_struct *w)
{
struct zswap_pool *pool = container_of(w, typeof(*pool),
shrink_work);
- int ret, failures = 0;
+ int ret, failures = 0, memcg_selection_failures = 0;

+ /* global reclaim will select cgroup in a round-robin fashion. */
do {
- ret = zswap_reclaim_entry(pool);
+ /* previous next_shrink has become a zombie - restart from the top */
+ if (pool->next_shrink && !mem_cgroup_online(pool->next_shrink)) {
+ mem_cgroup_put(pool->next_shrink);
+ pool->next_shrink = NULL;
+ }
+ pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+ /* fails to find a suitable cgroup - give the worker another chance. */
+ if (!pool->next_shrink) {
+ if (++memcg_selection_failures == 2)
+ break;
+ continue;
+ }
+
+ ret = shrink_memcg(pool->next_shrink);
+
if (ret) {
- zswap_reject_reclaim_fail++;
if (ret != -EAGAIN)
break;
if (++failures == MAX_RECLAIM_RETRIES)
@@ -764,9 +835,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- INIT_LIST_HEAD(&pool->lru);
- spin_lock_init(&pool->lru_lock);
INIT_WORK(&pool->shrink_work, shrink_worker);
+ list_lru_init_memcg(&pool->list_lru, NULL);

zswap_pool_debug("created", pool);

@@ -831,6 +901,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)

cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
+ list_lru_destroy(&pool->list_lru);
+ if (pool->next_shrink)
+ mem_cgroup_put(pool->next_shrink);
for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
zpool_destroy_pool(pool->zpools[i]);
kfree(pool);
@@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct obj_cgroup *objcg = NULL;
+ struct mem_cgroup *memcg = NULL;
struct zswap_pool *pool;
struct zpool *zpool;
+ int lru_alloc_ret;
unsigned int dlen = PAGE_SIZE;
unsigned long handle, value;
char *buf;
@@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
if (!zswap_enabled || !tree)
return false;

- /*
- * XXX: zswap reclaim does not work with cgroups yet. Without a
- * cgroup-aware entry LRU, we will push out entries system-wide based on
- * local cgroup limits.
- */
objcg = get_obj_cgroup_from_folio(folio);
- if (objcg && !obj_cgroup_may_zswap(objcg))
- goto reject;
+ if (objcg && !obj_cgroup_may_zswap(objcg)) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ if (shrink_memcg(memcg)) {
+ mem_cgroup_put(memcg);
+ goto reject;
+ }
+ mem_cgroup_put(memcg);
+ }

/* reclaim space if needed */
if (zswap_is_full()) {
@@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
else
zswap_pool_reached_full = false;
}
-
+ pool = zswap_pool_current_get();
+ if (!pool) {
+ ret = -EINVAL;
+ goto reject;
+ }
/* allocate entry */
entry = zswap_entry_cache_alloc(GFP_KERNEL);
if (!entry) {
@@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
entry->length = 0;
entry->value = value;
atomic_inc(&zswap_same_filled_pages);
+ zswap_pool_put(pool);
goto insert_entry;
}
kunmap_atomic(src);
@@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
if (!zswap_non_same_filled_pages_enabled)
goto freepage;

+ if (objcg) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
+ mem_cgroup_put(memcg);
+
+ if (lru_alloc_ret)
+ goto freepage;
+ }
+
/* if entry is successfully added, it keeps the reference */
entry->pool = zswap_pool_current_get();
if (!entry->pool)
@@ -1325,6 +1415,7 @@ bool zswap_store(struct folio *folio)

insert_entry:
entry->objcg = objcg;
+ entry->nid = page_to_nid(page);
if (objcg) {
obj_cgroup_charge_zswap(objcg, entry->length);
/* Account before objcg ref is moved to tree */
@@ -1338,9 +1429,8 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_add(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ INIT_LIST_HEAD(&entry->lru);
+ zswap_lru_add(&pool->list_lru, entry);
}
spin_unlock(&tree->lock);

@@ -1447,9 +1537,8 @@ bool zswap_load(struct folio *folio)
zswap_invalidate_entry(tree, entry);
folio_mark_dirty(folio);
} else if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_move(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
zswap_entry_put(tree, entry);
spin_unlock(&tree->lock);
@@ -1507,6 +1596,48 @@ void zswap_swapoff(int type)
zswap_trees[type] = NULL;
}

+bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
+{
+ struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
+ struct zswap_entry *entry;
+ struct zswap_pool *pool;
+ bool removed = false;
+
+ /* get the zswap entry and prevent it from being freed */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
+ /* skip if the entry is already written back or is a same filled page */
+ if (!entry || !entry->length)
+ goto tree_unlock;
+
+ pool = entry->pool;
+ removed = zswap_lru_del(&pool->list_lru, entry);
+
+tree_unlock:
+ spin_unlock(&tree->lock);
+ return removed;
+}
+
+void zswap_insert_swpentry_into_lru(swp_entry_t swpentry)
+{
+ struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
+ struct zswap_entry *entry;
+ struct zswap_pool *pool;
+
+ /* get the zswap entry and prevent it from being freed */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
+ /* skip if the entry is already written back or is a same filled page */
+ if (!entry || !entry->length)
+ goto tree_unlock;
+
+ pool = entry->pool;
+ zswap_lru_add(&pool->list_lru, entry);
+
+tree_unlock:
+ spin_unlock(&tree->lock);
+}
+
/*********************************
* debugfs functions
**********************************/
@@ -1560,7 +1691,7 @@ static int zswap_setup(void)
struct zswap_pool *pool;
int ret;

- zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
+ zswap_entry_cache = KMEM_CACHE(zswap_entry, SLAB_ACCOUNT);
if (!zswap_entry_cache) {
pr_err("entry cache creation failed\n");
goto cache_fail;
--
2.34.1

2023-09-20 01:35:20

by Nhat Pham

Subject: Re: [PATCH v2 0/2] workload-specific and memory pressure-driven zswap writeback

On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
>
> Changelog:
> v2:
> * Fix loongarch compiler errors
> * Use pool stats instead of memcg stats when !CONFIG_MEMCG_KEM
* Rebase the patch on top of the new shrinker API.
>
> There are currently several issues with zswap writeback:
>
> 1. There is only a single global LRU for zswap. This makes it impossible
> to perform worload-specific shrinking - an memcg under memory
> pressure cannot determine which pages in the pool it owns, and often
> ends up writing pages from other memcgs. This issue has been
> previously observed in practice and mitigated by simply disabling
> memcg-initiated shrinking:
>
> https://lore.kernel.org/all/[email protected]/T/#u
>
> But this solution leaves a lot to be desired, as we still do not have an
> avenue for an memcg to free up its own memory locked up in zswap.
>
> 2. We only shrink the zswap pool when the user-defined limit is hit.
> This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed
> ahead of time, as this depends on the workload (specifically, on
> factors such as memory access patterns and compressibility of the
> memory pages).
>
> This patch series solves these issues by separating the global zswap
> LRU into per-memcg and per-NUMA LRUs, and performs workload-specific
> (i.e memcg- and NUMA-aware) zswap writeback under memory pressure. The
> new shrinker does not have any parameter that must be tuned by the
> user, and can be opted in or out on a per-memcg basis.
>
> On a benchmark that we have run:
>
> (without the shrinker)
> real -- mean: 153.27s, median: 153.199s
> sys -- mean: 541.652s, median: 541.903s
> user -- mean: 4384.9673999999995s, median: 4385.471s
>
> (with the shrinker)
> real -- mean: 151.4956s, median: 151.456s
> sys -- mean: 461.14639999999997s, median: 465.656s
> user -- mean: 4384.7118s, median: 4384.675s
>
> We observed a 14-15% reduction in kernel CPU time, which translated to
> over 1% reduction in real time.
>
> On another benchmark, where there was a lot more cold memory residing in
> zswap, we observed even more pronounced gains:
>
> (without the shrinker)
> real -- mean: 157.52519999999998s, median: 157.281s
> sys -- mean: 769.3082s, median: 780.545s
> user -- mean: 4378.1622s, median: 4378.286s
>
> (with the shrinker)
> real -- mean: 152.9608s, median: 152.845s
> sys -- mean: 517.4446s, median: 506.749s
> user -- mean: 4387.694s, median: 4387.935s
>
> Here, we saw around 32-35% reduction in kernel CPU time, which
> translated to 2.8% reduction in real time. These results confirm our
> hypothesis that the shrinker is more helpful the more cold memory we
> have.
>
> Domenico Cerasuolo (1):
> zswap: make shrinking memcg-aware
>
> Nhat Pham (1):
> zswap: shrinks zswap pool based on memory pressure
>
> Documentation/admin-guide/mm/zswap.rst | 12 +
> include/linux/list_lru.h | 39 +++
> include/linux/memcontrol.h | 6 +
> include/linux/mmzone.h | 14 +
> include/linux/zswap.h | 9 +
> mm/list_lru.c | 46 ++-
> mm/memcontrol.c | 33 ++
> mm/swap_state.c | 50 +++-
> mm/zswap.c | 397 ++++++++++++++++++++++---
> 9 files changed, 548 insertions(+), 58 deletions(-)
>
> --
> 2.34.1

2023-09-20 03:28:27

by Nhat Pham

Subject: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that
is initiated when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.

Furthermore, to make it more robust across workloads and to prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics (a sketch of the scaling
logic follows the list):

* Estimate the number of warm pages residing in zswap, and attempt to
protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
saving factor. The better zswap compresses the data, the fewer pages
we will evict to swap (as we will otherwise incur IO for relatively
small memory saving).
* During reclaim, if the shrinker encounters a page that is also being
brought into memory, the shrinker will cautiously terminate its
shrinking action, as this is a sign that it is touching the warmer
region of the zswap LRU.
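
Concretely, the count-objects side of the shrinker (simplified below from
the zswap_shrinker_count() hunk in the diff) subtracts the protected
estimate and scales the result by the compression ratio:

    nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
    if (nr_freeable > lruvec->nr_zswap_protected)
            nr_freeable -= lruvec->nr_zswap_protected;
    else
            nr_freeable = 0;

    /*
     * Scale by the memory saving factor. With e.g. 3:1 compression
     * (nr_backing : nr_stored = 1 : 3), only a third of the freeable
     * objects is reported, since each writeback saves relatively little
     * memory.
     */
    return mult_frac(nr_freeable, nr_backing, nr_stored);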

On a benchmark that we have run:

(without the shrinker)
real -- mean: 153.27s, median: 153.199s
sys -- mean: 541.652s, median: 541.903s
user -- mean: 4384.967s, median: 4385.471s

(with the shrinker)
real -- mean: 151.4956s, median: 151.456s
sys -- mean: 461.146s, median: 465.656s
user -- mean: 4384.7118s, median: 4384.675s

We observed a 14-15% reduction in kernel CPU time, which translated to
over 1% reduction in real time.

On another benchmark, where there was a lot more cold memory residing in
zswap, we observed even more pronounced gains:

(without the shrinker)
real -- mean: 157.525s, median: 157.281s
sys -- mean: 769.3082s, median: 780.545s
user -- mean: 4378.1622s, median: 4378.286s

(with the shrinker)
real -- mean: 152.9608s, median: 152.845s
sys -- mean: 517.4446s, median: 506.749s
user -- mean: 4387.694s, median: 4387.935s

Here, we saw around 32-35% reduction in kernel CPU time, which
translated to 2.8% reduction in real time. These results confirm our
hypothesis that the shrinker is more helpful the more cold memory we
have.

Suggested-by: Johannes Weiner <[email protected]>
Signed-off-by: Nhat Pham <[email protected]>
---
Documentation/admin-guide/mm/zswap.rst | 12 ++
include/linux/memcontrol.h | 1 +
include/linux/mmzone.h | 14 ++
mm/memcontrol.c | 33 +++++
mm/swap_state.c | 31 ++++-
mm/zswap.c | 180 ++++++++++++++++++++++++-
6 files changed, 263 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..ae8597a67804 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,18 @@ attribute, e. g.::

Setting this parameter to 100 will disable the hysteresis.

+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+User can enable it by first switching on the global knob:
+
echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
+When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
+it on for each cgroup that the shrinker should target:
+
+ echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
+
A debugfs interface is provided for various statistic about pool size, number
of pages stored, same-value filled pages and various counters for the reasons
pages are rejected.
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 05d34b328d9d..f005ea667863 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -219,6 +219,7 @@ struct mem_cgroup {

#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
unsigned long zswap_max;
+ atomic_t zswap_shrinker_enabled;
#endif

unsigned long soft_limit;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4106fbc5b4b3..81f4c5ea3e16 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
#ifdef CONFIG_MEMCG
struct pglist_data *pgdat;
#endif
+#ifdef CONFIG_ZSWAP
+ /*
+ * Number of pages in zswap that should be protected from the shrinker.
+ * This number is an estimate of the following counts:
+ *
+ * a) Recent page faults.
+ * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+ * as well as recent zswap LRU rotations.
+ *
+ * These pages are likely to be warm, and might incur IO if they are written
+ * to swap.
+ */
+ unsigned long nr_zswap_protected;
+#endif
};

/* Isolate unmapped pages */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9f84b3f7b469..1a2c97cf396f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
memcg->zswap_max = PAGE_COUNTER_MAX;
+ /* Disable the shrinker by default */
+ atomic_set(&memcg->zswap_shrinker_enabled, 0);
#endif
page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
if (parent) {
@@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
return nbytes;
}

+static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+ seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
+ return 0;
+}
+
+static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+ int zswap_shrinker_enabled;
+ ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
+
+ if (parse_ret)
+ return parse_ret;
+
+ if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
+ return -ERANGE;
+
+ atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
+ return nbytes;
+}
+
static struct cftype zswap_files[] = {
{
.name = "zswap.current",
@@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
.seq_show = zswap_max_show,
.write = zswap_max_write,
},
+ {
+ .name = "zswap.shrinker.enabled",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .seq_show = zswap_shrinker_enabled_show,
+ .write = zswap_shrinker_enabled_write,
+ },
{ } /* terminate */
};
#endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1c826737aacb..788e36a06c34 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
return pages;
}

+#ifdef CONFIG_ZSWAP
+/*
+ * Refault is an indication that warmer pages are not resident in memory.
+ * Increase the size of zswap's protected area.
+ */
+static void inc_nr_protected(struct page *page)
+{
+ struct lruvec *lruvec = folio_lruvec(page_folio(page));
+ unsigned long flags;
+
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ lruvec->nr_zswap_protected++;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+}
+#endif
+
/**
* swap_cluster_readahead - swap in pages in hope we need them soon
* @entry: swap entry of this memory
@@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
lru_add_drain(); /* Push any new pages onto the LRU now */
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+ page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page)
+ inc_nr_protected(page);
+#endif
+ return page;
}

int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
lru_add_drain();
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
- NULL);
+ page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page)
+ inc_nr_protected(page);
+#endif
+ return page;
}

/**
diff --git a/mm/zswap.c b/mm/zswap.c
index 1a469e5d5197..79cb18eeb8bf 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32

+/*
+ * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
+ * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
+ * the shrinker for each memcg.
+ */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+#ifdef CONFIG_MEMCG_KMEM
+static bool is_shrinker_enabled(struct mem_cgroup *memcg)
+{
+ return zswap_shrinker_enabled &&
+ atomic_read(&memcg->zswap_shrinker_enabled);
+}
+#else
+static bool is_shrinker_enabled(struct mem_cgroup *memcg)
+{
+ return zswap_shrinker_enabled;
+}
+#endif
+
/*********************************
* data structures
**********************************/
@@ -174,6 +194,8 @@ struct zswap_pool {
char tfm_name[CRYPTO_MAX_ALG_NAME];
struct list_lru list_lru;
struct mem_cgroup *next_shrink;
+ struct shrinker *shrinker;
+ atomic_t nr_stored;
};

/*
@@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
}

+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+ u64 pool_size = 0;
+ int i;
+
+ for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+ pool_size += zpool_get_total_size(pool->zpools[i]);
+
+ return pool_size;
+}
+
static void zswap_update_total_size(void)
{
struct zswap_pool *pool;
u64 total = 0;
- int i;

rcu_read_lock();

list_for_each_entry_rcu(pool, &zswap_pools, list)
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- total += zpool_get_total_size(pool->zpools[i]);
+ total += get_zswap_pool_size(pool);

rcu_read_unlock();

@@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
struct mem_cgroup *memcg = entry->objcg ?
get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
+ unsigned long flags, lru_size;
+
+ if (added) {
+ lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ lruvec->nr_zswap_protected++;

+ /*
+ * Decay to avoid overflow and adapt to changing workloads.
+ * This is based on LRU reclaim cost decaying heuristics.
+ */
+ if (lruvec->nr_zswap_protected > lru_size / 4)
+ lruvec->nr_zswap_protected /= 2;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+ }
mem_cgroup_put(memcg);
return added;
}
@@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
else {
zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
+ atomic_dec(&entry->pool->nr_stored);
zswap_pool_put(entry->pool);
}
zswap_entry_cache_free(entry);
@@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
return entry;
}

+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = shrinker->private_data;
+ unsigned long shrink_ret, nr_zswap_protected, flags,
+ lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+ struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+ bool encountered_page_in_swapcache = false;
+
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ nr_zswap_protected = lruvec->nr_zswap_protected;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ /*
+ * Abort if the shrinker is disabled or if we are shrinking into the
+ * protected region.
+ */
+ if (!is_shrinker_enabled(sc->memcg) ||
+ nr_zswap_protected >= lru_size - sc->nr_to_scan) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
+ shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+ &encountered_page_in_swapcache);
+
+ if (encountered_page_in_swapcache)
+ return SHRINK_STOP;
+
+ return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = shrinker->private_data;
+ struct mem_cgroup *memcg = sc->memcg;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+ unsigned long nr_backing, nr_stored, nr_freeable, flags;
+
+#ifdef CONFIG_MEMCG_KMEM
+ cgroup_rstat_flush(memcg->css.cgroup);
+ nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+ nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+ /* use pool stats instead of memcg stats */
+ nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+ nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+ if (!is_shrinker_enabled(memcg) || !nr_stored)
+ return 0;
+
+ nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+ /*
+ * Subtract the lru size by an estimate of the number of pages
+ * that should be protected.
+ */
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
+ nr_freeable - lruvec->nr_zswap_protected : 0;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ /*
+ * Scale the number of freeable pages by the memory saving factor.
+ * This ensures that the better zswap compresses memory, the fewer
+ * pages we will evict to swap (as it will otherwise incur IO for
+ * relatively small memory saving).
+ */
+ return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+ pool->shrinker =
+ shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+ if (!pool->shrinker)
+ return;
+
+ pool->shrinker->private_data = pool;
+ pool->shrinker->scan_objects = zswap_shrinker_scan;
+ pool->shrinker->count_objects = zswap_shrinker_count;
+ pool->shrinker->batch = 0;
+ pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
/*********************************
* per-cpu code
**********************************/
@@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
spinlock_t *lock, void *arg)
{
struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ bool *encountered_page_in_swapcache = (bool *)arg;
struct mem_cgroup *memcg;
struct zswap_tree *tree;
+ struct lruvec *lruvec;
pgoff_t swpoffset;
enum lru_status ret = LRU_REMOVED_RETRY;
int writeback_result;
+ unsigned long flags;

/*
* Once the lru lock is dropped, the entry might get freed. The
@@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
/* we cannot use zswap_lru_add here, because it increments node's lru count */
list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
spin_unlock(lock);
- mem_cgroup_put(memcg);
ret = LRU_RETRY;
+
+ /*
+ * Encountering a page already in swap cache is a sign that we are shrinking
+ * into the warmer region. We should terminate shrinking (if we're in the dynamic
+ * shrinker context).
+ */
+ if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+ ret = LRU_SKIP;
+ *encountered_page_in_swapcache = true;
+ }
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ /* Increment the protection area to account for the LRU rotation. */
+ lruvec->nr_zswap_protected++;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ mem_cgroup_put(memcg);
goto put_unlock;
}

@@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
&pool->node);
if (ret)
goto error;
+
+ zswap_alloc_shrinker(pool);
+ if (!pool->shrinker)
+ goto error;
+
pr_debug("using %s compressor\n", pool->tfm_name);

/* being the current pool takes 1 ref; this func expects the
@@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
INIT_WORK(&pool->shrink_work, shrink_worker);
- list_lru_init_memcg(&pool->list_lru, NULL);
+ if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+ goto lru_fail;
+ shrinker_register(pool->shrinker);

zswap_pool_debug("created", pool);

return pool;

+lru_fail:
+ list_lru_destroy(&pool->list_lru);
+ shrinker_free(pool->shrinker);
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)

zswap_pool_debug("destroying", pool);

+ shrinker_free(pool->shrinker);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
list_lru_destroy(&pool->list_lru);
@@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
zswap_lru_add(&pool->list_lru, entry);
+ atomic_inc(&pool->nr_stored);
}
spin_unlock(&tree->lock);

--
2.34.1

2023-09-25 22:21:52

by Yosry Ahmed

Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

+Chris Li

On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
>
> From: Domenico Cerasuolo <[email protected]>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform worload-specific shrinking - an memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/[email protected]/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
>
> a) When a store attempt hits an memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which an memcg is
> selected for reclaim in a round-robin-like fashion.

Hey Nhat,

I didn't take a very close look as I am currently swamped, but going
through the patch I have some comments/questions below.

I am not very familiar with list_lru, but it seems like the existing
API derives the node and memcg from the list item itself. Seems like
we can avoid a lot of changes if we allocate struct zswap_entry from
the same node as the page, and account it to the same memcg. Would
this be too much of a change or too strong of a restriction? It's a
slab allocation and we will free memory on that node/memcg right
after.
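
For concreteness, a rough sketch of what I have in mind (hypothetical and
untested, assuming the entry is allocated in zswap_store() where the folio
and objcg are already at hand):

    struct mem_cgroup *memcg = get_mem_cgroup_from_objcg(objcg);
    struct mem_cgroup *old_memcg = set_active_memcg(memcg);

    /*
     * Allocate the zswap_entry on the folio's node and charge it to the
     * folio's memcg, so that list_lru_add()/list_lru_del() can derive
     * both from the entry itself.
     */
    entry = kmem_cache_alloc_node(zswap_entry_cache,
                                  GFP_KERNEL | __GFP_ACCOUNT,
                                  folio_nid(folio));
    set_active_memcg(old_memcg);
    mem_cgroup_put(memcg);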

>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Co-developed-by: Nhat Pham <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> include/linux/list_lru.h | 39 +++++++
> include/linux/memcontrol.h | 5 +
> include/linux/zswap.h | 9 ++
> mm/list_lru.c | 46 ++++++--
> mm/swap_state.c | 19 ++++
> mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
> 6 files changed, 287 insertions(+), 52 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index b35968ee9fb5..b517b4e2c7c4 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
> */
> bool list_lru_add(struct list_lru *lru, struct list_head *item);
>
> +/**
> + * __list_lru_add: add an element to a specific sublist.
> + * @list_lru: the lru pointer
> + * @item: the item to be added.
> + * @memcg: the cgroup of the sublist to add the item to.
> + * @nid: the node id of the sublist to add the item to.
> + *
> + * This function is similar to list_lru_add(), but it allows the caller to
> + * specify the sublist to which the item should be added. This can be useful
> + * when the list_head node is not necessarily in the same cgroup and NUMA node
> + * as the data it represents, such as zswap, where the list_head node could be
> + * from kswapd and the data from a different cgroup altogether.
> + *
> + * Return value: true if the list was updated, false otherwise
> + */
> +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg);
> +
> /**
> * list_lru_del: delete an element to the lru list
> * @list_lru: the lru pointer
> @@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
> */
> bool list_lru_del(struct list_lru *lru, struct list_head *item);
>
> +/**
> + * __list_lru_delete: delete an element from a specific sublist.
> + * @list_lru: the lru pointer
> + * @item: the item to be deleted.
> + * @memcg: the cgroup of the sublist to delete the item from.
> + * @nid: the node id of the sublist to delete the item from.
> + *
> + * Return value: true if the list was updated, false otherwise.
> + */
> +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg);
> +
> /**
> * list_lru_count_one: return the number of objects currently held by @lru
> * @lru: the lru pointer.
> @@ -137,6 +167,15 @@ void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
> void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> struct list_head *head);
>
> +/*
> + * list_lru_putback: undo list_lru_isolate.
> + *
> + * Since we might have dropped the LRU lock in between, recompute list_lru_one
> + * from the node's id and memcg.
> + */
> +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg);
> +
> typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
> struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 67b823dfa47d..05d34b328d9d 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> + return NULL;
> +}
> +
> static inline bool folio_memcg_kmem(struct folio *folio)
> {
> return false;
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 2a60ce39cfde..04f80b64a09b 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -15,6 +15,8 @@ bool zswap_load(struct folio *folio);
> void zswap_invalidate(int type, pgoff_t offset);
> void zswap_swapon(int type);
> void zswap_swapoff(int type);
> +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry);
> +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry);
>
> #else
>
> @@ -32,6 +34,13 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
> static inline void zswap_swapon(int type) {}
> static inline void zswap_swapoff(int type) {}
>
> +static inline bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> +{
> + return false;
> +}
> +
> +static inline void zswap_insert_swpentry_into_lru(swp_entry_t swpentry) {}
> +
> #endif
>
> #endif /* _LINUX_ZSWAP_H */
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index a05e5bef3b40..37c5c2ef6c0e 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -119,18 +119,26 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
> bool list_lru_add(struct list_lru *lru, struct list_head *item)
> {
> int nid = page_to_nid(virt_to_page(item));
> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> + mem_cgroup_from_slab_obj(item) : NULL;
> +
> + return __list_lru_add(lru, item, nid, memcg);
> +}
> +EXPORT_SYMBOL_GPL(list_lru_add);
> +
> +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg)
> +{
> struct list_lru_node *nlru = &lru->node[nid];
> - struct mem_cgroup *memcg;
> struct list_lru_one *l;
>
> spin_lock(&nlru->lock);
> if (list_empty(item)) {
> - l = list_lru_from_kmem(lru, nid, item, &memcg);
> + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> list_add_tail(item, &l->list);
> /* Set shrinker bit if the first element was added */
> if (!l->nr_items++)
> - set_shrinker_bit(memcg, nid,
> - lru_shrinker_id(lru));
> + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));

Unrelated diff.

> nlru->nr_items++;
> spin_unlock(&nlru->lock);
> return true;
> @@ -138,17 +146,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
> spin_unlock(&nlru->lock);
> return false;
> }
> -EXPORT_SYMBOL_GPL(list_lru_add);
> +EXPORT_SYMBOL_GPL(__list_lru_add);
>
> bool list_lru_del(struct list_lru *lru, struct list_head *item)
> {
> int nid = page_to_nid(virt_to_page(item));
> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> + mem_cgroup_from_slab_obj(item) : NULL;
> +
> + return __list_lru_del(lru, item, nid, memcg);
> +}
> +EXPORT_SYMBOL_GPL(list_lru_del);
> +
> +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg)
> +{
> struct list_lru_node *nlru = &lru->node[nid];
> struct list_lru_one *l;
>
> spin_lock(&nlru->lock);
> if (!list_empty(item)) {
> - l = list_lru_from_kmem(lru, nid, item, NULL);

If we decide to keep the list_lru.c changes, do we have any other
callers of list_lru_from_kmem()?

> + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> list_del_init(item);
> l->nr_items--;
> nlru->nr_items--;
> @@ -158,7 +176,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
> spin_unlock(&nlru->lock);
> return false;
> }
> -EXPORT_SYMBOL_GPL(list_lru_del);
> +EXPORT_SYMBOL_GPL(__list_lru_del);
>
> void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
> {
> @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> }
> EXPORT_SYMBOL_GPL(list_lru_isolate_move);
>
> +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> + struct mem_cgroup *memcg)
> +{
> + struct list_lru_one *list =
> + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> +
> + if (list_empty(item)) {
> + list_add_tail(item, &list->list);
> + if (!list->nr_items++)
> + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> + }
> +}
> +EXPORT_SYMBOL_GPL(list_lru_putback);
> +
> unsigned long list_lru_count_one(struct list_lru *lru,
> int nid, struct mem_cgroup *memcg)
> {
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b3b14bd0dd64..1c826737aacb 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -21,6 +21,7 @@
> #include <linux/swap_slots.h>
> #include <linux/huge_mm.h>
> #include <linux/shmem_fs.h>
> +#include <linux/zswap.h>
> #include "internal.h"
> #include "swap.h"
>
> @@ -417,6 +418,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct folio *folio;
> struct page *page;
> void *shadow = NULL;
> + bool zswap_lru_removed = false;
>
> *new_page_allocated = false;
> si = get_swap_device(entry);
> @@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> __folio_set_locked(folio);
> __folio_set_swapbacked(folio);
>
> + /*
> + * Page fault might itself trigger reclaim, on a zswap object that
> + * corresponds to the same swap entry. However, as the swap entry has
> + * previously been pinned, the task will run into an infinite loop trying
> + * to pin the swap entry again.
> + *
> + * To prevent this from happening, we remove it from the zswap
> + * LRU to prevent its reclamation.
> + */
> + zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
> +

This will add a zswap lookup (and potentially an insertion below) in
every single swap fault path, right? Doesn't this introduce latency
regressions? I am also not a fan of having zswap-specific details in
this path.

When you say "pinned", do you mean the call to swapcache_prepare()
above (i.e. setting SWAP_HAS_CACHE)? IIUC, the scenario you are
worried about is that the following call to charge the page may invoke
reclaim, go into zswap, and try to write back the same page we are
swapping in here. The writeback call will recurse into
__read_swap_cache_async(), call swapcache_prepare() and get EEXIST,
and keep looping indefinitely. Is this correct?

If yes, can we handle this by adding a flag to
__read_swap_cache_async() that basically says "don't wait for
SWAP_HAS_CACHE and the swapcache to be consistent; if
swapcache_prepare() returns EEXIST, just fail and return"? The zswap
writeback path can pass in this flag and skip such pages. We might
want to modify the writeback code to put such pages back at the end
of the LRU instead of at the beginning.
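
For concreteness, a rough sketch of the core of that idea (the helper
name, the flag name, and the exact shape of the retry loop are made-up
illustrations, not an actual implementation):

/*
 * Hypothetical sketch only: let zswap writeback opt out of waiting on
 * a swap entry that another task is already swapping in.
 */
static bool swapin_claim_or_skip(swp_entry_t entry, bool skip_if_exists)
{
	int err;

	for (;;) {
		err = swapcache_prepare(entry);
		if (!err)
			return true;	/* we now own SWAP_HAS_CACHE */
		if (err != -EEXIST)
			return false;	/* entry is gone, give up */
		/*
		 * Someone else already holds SWAP_HAS_CACHE for this
		 * entry. Zswap writeback would pass skip_if_exists and
		 * bail out here instead of looping until the swapcache
		 * becomes consistent.
		 */
		if (skip_if_exists)
			return false;
		schedule_timeout_uninterruptible(1);
	}
}

__read_swap_cache_async() could use something like this in place of its
current retry loop, with zswap writeback passing skip_if_exists == true
and the regular swapin paths passing false.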

> if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
> goto fail_unlock;
>
> @@ -497,6 +510,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> if (shadow)
> workingset_refault(folio, shadow);
>
> + if (zswap_lru_removed)
> + zswap_insert_swpentry_into_lru(entry);
> +
> /* Caller will initiate read into locked folio */
> folio_add_lru(folio);
> *new_page_allocated = true;
> @@ -506,6 +522,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> return page;
>
> fail_unlock:
> + if (zswap_lru_removed)
> + zswap_insert_swpentry_into_lru(entry);
> +
> put_swap_folio(folio, entry);
> folio_unlock(folio);
> folio_put(folio);
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 412b1409a0d7..1a469e5d5197 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -34,6 +34,7 @@
> #include <linux/writeback.h>
> #include <linux/pagemap.h>
> #include <linux/workqueue.h>
> +#include <linux/list_lru.h>
>
> #include "swap.h"
> #include "internal.h"
> @@ -171,8 +172,8 @@ struct zswap_pool {
> struct work_struct shrink_work;
> struct hlist_node node;
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> - struct list_head lru;
> - spinlock_t lru_lock;
> + struct list_lru list_lru;
> + struct mem_cgroup *next_shrink;
> };
>
> /*
> @@ -209,6 +210,7 @@ struct zswap_entry {
> unsigned long value;
> };
> struct obj_cgroup *objcg;
> + int nid;
> struct list_head lru;
> };

Ideally this can be avoided if we can allocate struct zswap_entry on
the correct node.
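
A minimal sketch of what that could look like (the extra nid parameter
and the callsite shape are assumptions for illustration):

/* Sketch only: allocate the entry on the node of the page being stored,
 * so entry->nid would not need to be recorded at all. */
static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
{
	return kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
}

	/* in zswap_store(): */
	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));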

>
> @@ -309,6 +311,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +/*********************************
> +* lru functions
> +**********************************/
> +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + struct mem_cgroup *memcg = entry->objcg ?
> + get_mem_cgroup_from_objcg(entry->objcg) : NULL;

This line is repeated at least 3 times, perhaps add a helper for it?
get_mem_cgroup_from_zswap()?
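
Something along these lines, using the name suggested above (a trivial
sketch that just wraps the expression currently being repeated):

static struct mem_cgroup *get_mem_cgroup_from_zswap(struct zswap_entry *entry)
{
	return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
}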

> + bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> +
> + mem_cgroup_put(memcg);
> + return added;
> +}
> +
> +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + struct mem_cgroup *memcg = entry->objcg ?
> + get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> + bool removed = __list_lru_del(list_lru, &entry->lru, entry->nid, memcg);
> +
> + mem_cgroup_put(memcg);
> + return removed;
> +}
> +
> /*********************************
> * rbtree functions
> **********************************/
> @@ -393,9 +418,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> if (!entry->length)
> atomic_dec(&zswap_same_filled_pages);
> else {
> - spin_lock(&entry->pool->lru_lock);
> - list_del(&entry->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> zswap_pool_put(entry->pool);
> }
> @@ -629,21 +652,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> zswap_entry_put(tree, entry);
> }
>
> -static int zswap_reclaim_entry(struct zswap_pool *pool)
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg)
> {
> - struct zswap_entry *entry;
> + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> + struct mem_cgroup *memcg;
> struct zswap_tree *tree;
> pgoff_t swpoffset;
> - int ret;
> + enum lru_status ret = LRU_REMOVED_RETRY;
> + int writeback_result;
>
> - /* Get an entry off the LRU */
> - spin_lock(&pool->lru_lock);
> - if (list_empty(&pool->lru)) {
> - spin_unlock(&pool->lru_lock);
> - return -EINVAL;
> - }
> - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> - list_del_init(&entry->lru);
> /*
> * Once the lru lock is dropped, the entry might get freed. The
> * swpoffset is copied to the stack, and entry isn't deref'd again
> @@ -651,26 +669,35 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> */
> swpoffset = swp_offset(entry->swpentry);
> tree = zswap_trees[swp_type(entry->swpentry)];
> - spin_unlock(&pool->lru_lock);
> + list_lru_isolate(l, item);
> + spin_unlock(lock);
>
> /* Check for invalidate() race */
> spin_lock(&tree->lock);
> if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> - ret = -EAGAIN;
> goto unlock;
> }
> /* Hold a reference to prevent a free during writeback */
> zswap_entry_get(entry);
> spin_unlock(&tree->lock);
>
> - ret = zswap_writeback_entry(entry, tree);
> + writeback_result = zswap_writeback_entry(entry, tree);
>
> spin_lock(&tree->lock);
> - if (ret) {
> - /* Writeback failed, put entry back on LRU */
> - spin_lock(&pool->lru_lock);
> - list_move(&entry->lru, &pool->lru);
> - spin_unlock(&pool->lru_lock);
> + if (writeback_result) {
> + zswap_reject_reclaim_fail++;
> +
> + /* Check for invalidate() race */
> + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> + goto put_unlock;
> +
> + memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> + spin_lock(lock);
> + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> + list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> + spin_unlock(lock);
> + mem_cgroup_put(memcg);
> + ret = LRU_RETRY;
> goto put_unlock;
> }
>
> @@ -686,19 +713,63 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> zswap_entry_put(tree, entry);
> unlock:
> spin_unlock(&tree->lock);
> - return ret ? -EAGAIN : 0;
> + spin_lock(lock);
> + return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> + int nid, shrunk = 0;
> + bool is_empty = true;
> +
> + pool = zswap_pool_current_get();
> + if (!pool)
> + return -EINVAL;
> +
> + for_each_node_state(nid, N_NORMAL_MEMORY) {
> + unsigned long nr_to_walk = 1;
> +
> + if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> + NULL, &nr_to_walk))
> + shrunk++;
> + if (!nr_to_walk)

nr_to_walk will be 0 if we shrunk 1 page, so isn't this the same
condition as the one above?

is_empty seems to be equivalent to shrunk == 0 if I understand
correctly, so it seems like there is no need for both.

> + is_empty = false;
> + }
> + zswap_pool_put(pool);
> +
> + if (is_empty)
> + return -EINVAL;
> + if (shrunk)
> + return 0;
> + return -EAGAIN;
> }
>
> static void shrink_worker(struct work_struct *w)
> {
> struct zswap_pool *pool = container_of(w, typeof(*pool),
> shrink_work);
> - int ret, failures = 0;
> + int ret, failures = 0, memcg_selection_failures = 0;
>
> + /* global reclaim will select cgroup in a round-robin fashion. */
> do {
> - ret = zswap_reclaim_entry(pool);
> + /* previous next_shrink has become a zombie - restart from the top */

Do we skip zombies because all zswap entries are reparented with the objcg?

If yes, why do we restart from the top instead of just skipping them?
memcgs after a zombie will not be reachable now IIUC.

Also, why explicitly check for zombies instead of having
shrink_memcg() just skip memcgs with no zswap entries? The logic is
slightly complicated.
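
For reference, a sketch of the "skip instead of restart" variant I have
in mind (assuming mem_cgroup_iter() keeps handling the reference on the
previous cursor, as it does today; purely illustrative):

	/* Advance past zombie (offline) memcgs rather than restarting. */
	do {
		pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
	} while (pool->next_shrink && !mem_cgroup_online(pool->next_shrink));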

> + if (pool->next_shrink && !mem_cgroup_online(pool->next_shrink)) {
> + mem_cgroup_put(pool->next_shrink);
> + pool->next_shrink = NULL;
> + }
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> +
> + /* fails to find a suitable cgroup - give the worker another chance. */
> + if (!pool->next_shrink) {
> + if (++memcg_selection_failures == 2)
> + break;
> + continue;
> + }
> +
> + ret = shrink_memcg(pool->next_shrink);
> +
> if (ret) {
> - zswap_reject_reclaim_fail++;
> if (ret != -EAGAIN)
> break;
> if (++failures == MAX_RECLAIM_RETRIES)
> @@ -764,9 +835,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - INIT_LIST_HEAD(&pool->lru);
> - spin_lock_init(&pool->lru_lock);
> INIT_WORK(&pool->shrink_work, shrink_worker);
> + list_lru_init_memcg(&pool->list_lru, NULL);
>
> zswap_pool_debug("created", pool);
>
> @@ -831,6 +901,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> + list_lru_destroy(&pool->list_lru);
> + if (pool->next_shrink)
> + mem_cgroup_put(pool->next_shrink);
> for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> zpool_destroy_pool(pool->zpools[i]);
> kfree(pool);
> @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> struct obj_cgroup *objcg = NULL;
> + struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> struct zpool *zpool;
> + int lru_alloc_ret;
> unsigned int dlen = PAGE_SIZE;
> unsigned long handle, value;
> char *buf;
> @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> if (!zswap_enabled || !tree)
> return false;
>
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> else
> zswap_pool_reached_full = false;
> }
> -
> + pool = zswap_pool_current_get();
> + if (!pool) {
> + ret = -EINVAL;
> + goto reject;
> + }
> /* allocate entry */
> entry = zswap_entry_cache_alloc(GFP_KERNEL);
> if (!entry) {
> @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> entry->length = 0;
> entry->value = value;
> atomic_inc(&zswap_same_filled_pages);
> + zswap_pool_put(pool);
> goto insert_entry;
> }
> kunmap_atomic(src);
> @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> if (!zswap_non_same_filled_pages_enabled)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> + mem_cgroup_put(memcg);
> +
> + if (lru_alloc_ret)
> + goto freepage;
> + }
> +
> /* if entry is successfully added, it keeps the reference */
> entry->pool = zswap_pool_current_get();
> if (!entry->pool)
> @@ -1325,6 +1415,7 @@ bool zswap_store(struct folio *folio)
>
> insert_entry:
> entry->objcg = objcg;
> + entry->nid = page_to_nid(page);
> if (objcg) {
> obj_cgroup_charge_zswap(objcg, entry->length);
> /* Account before objcg ref is moved to tree */
> @@ -1338,9 +1429,8 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_add(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + INIT_LIST_HEAD(&entry->lru);
> + zswap_lru_add(&pool->list_lru, entry);
> }
> spin_unlock(&tree->lock);
>
> @@ -1447,9 +1537,8 @@ bool zswap_load(struct folio *folio)
> zswap_invalidate_entry(tree, entry);
> folio_mark_dirty(folio);
> } else if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_move(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> zswap_entry_put(tree, entry);
> spin_unlock(&tree->lock);
> @@ -1507,6 +1596,48 @@ void zswap_swapoff(int type)
> zswap_trees[type] = NULL;
> }
>
> +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> +{
> + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> + struct zswap_entry *entry;
> + struct zswap_pool *pool;
> + bool removed = false;
> +
> + /* get the zswap entry and prevent it from being freed */
> + spin_lock(&tree->lock);
> + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> + /* skip if the entry is already written back or is a same filled page */
> + if (!entry || !entry->length)
> + goto tree_unlock;
> +
> + pool = entry->pool;
> + removed = zswap_lru_del(&pool->list_lru, entry);
> +
> +tree_unlock:
> + spin_unlock(&tree->lock);
> + return removed;
> +}
> +
> +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry)
> +{
> + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> + struct zswap_entry *entry;
> + struct zswap_pool *pool;
> +
> + /* get the zswap entry and prevent it from being freed */
> + spin_lock(&tree->lock);
> + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> + /* skip if the entry is already written back or is a same filled page */
> + if (!entry || !entry->length)
> + goto tree_unlock;
> +
> + pool = entry->pool;
> + zswap_lru_add(&pool->list_lru, entry);
> +
> +tree_unlock:
> + spin_unlock(&tree->lock);
> +}
> +
> /*********************************
> * debugfs functions
> **********************************/
> @@ -1560,7 +1691,7 @@ static int zswap_setup(void)
> struct zswap_pool *pool;
> int ret;
>
> - zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
> + zswap_entry_cache = KMEM_CACHE(zswap_entry, SLAB_ACCOUNT);
> if (!zswap_entry_cache) {
> pr_err("entry cache creation failed\n");
> goto cache_fail;
> --
> 2.34.1

2023-09-26 04:12:13

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
>
> Currently, we only shrink the zswap pool when the user-defined limit is
> hit. This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed ahead
> of time, as this depends on the workload (specifically, on factors such
> as memory access patterns and compressibility of the memory pages).
>
> This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> is initiated when there is memory pressure. The shrinker does not
> have any parameter that must be tuned by the user, and can be opted in
> or out on a per-memcg basis.

What's the use case for having per-memcg opt-in/out?

If there is memory pressure, reclaiming swap-backed pages will push
pages out of zswap anyway, regardless of this patch. With this patch,
any sort of reclaim can push pages out of zswap. Wouldn't that be
preferable to reclaiming memory that is currently resident in memory
(so arguably hotter than the pages in zswap)? Why would this decision
be different per-memcg?

>
> Furthermore, to make it more robust for many workloads and prevent
> overshrinking (i.e evicting warm pages that might be refaulted into
> memory), we build in the following heuristics:
>
> * Estimate the number of warm pages residing in zswap, and attempt to
> protect this region of the zswap LRU.
> * Scale the number of freeable objects by an estimate of the memory
> saving factor. The better zswap compresses the data, the fewer pages
> we will evict to swap (as we will otherwise incur IO for relatively
> small memory saving).
> * During reclaim, if the shrinker encounters a page that is also being
> brought into memory, the shrinker will cautiously terminate its
> shrinking action, as this is a sign that it is touching the warmer
> region of the zswap LRU.

I don't have an opinion about the reclaim heuristics here, I will let
reclaim experts chip in.

>
> On a benchmark that we have run:

Please add more details (as much as possible) about the benchmarks used here.

>
> (without the shrinker)
> real -- mean: 153.27s, median: 153.199s
> sys -- mean: 541.652s, median: 541.903s
> user -- mean: 4384.9673999999995s, median: 4385.471s
>
> (with the shrinker)
> real -- mean: 151.4956s, median: 151.456s
> sys -- mean: 461.14639999999997s, median: 465.656s
> user -- mean: 4384.7118s, median: 4384.675s
>
> We observed a 14-15% reduction in kernel CPU time, which translated to
> over 1% reduction in real time.
>
> On another benchmark, where there was a lot more cold memory residing in
> zswap, we observed even more pronounced gains:
>
> (without the shrinker)
> real -- mean: 157.52519999999998s, median: 157.281s
> sys -- mean: 769.3082s, median: 780.545s
> user -- mean: 4378.1622s, median: 4378.286s
>
> (with the shrinker)
> real -- mean: 152.9608s, median: 152.845s
> sys -- mean: 517.4446s, median: 506.749s
> user -- mean: 4387.694s, median: 4387.935s
>
> Here, we saw around 32-35% reduction in kernel CPU time, which
> translated to 2.8% reduction in real time. These results confirm our
> hypothesis that the shrinker is more helpful the more cold memory we
> have.
>
> Suggested-by: Johannes Weiner <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> Documentation/admin-guide/mm/zswap.rst | 12 ++
> include/linux/memcontrol.h | 1 +
> include/linux/mmzone.h | 14 ++
> mm/memcontrol.c | 33 +++++
> mm/swap_state.c | 31 ++++-
> mm/zswap.c | 180 ++++++++++++++++++++++++-
> 6 files changed, 263 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> index 45b98390e938..ae8597a67804 100644
> --- a/Documentation/admin-guide/mm/zswap.rst
> +++ b/Documentation/admin-guide/mm/zswap.rst
> @@ -153,6 +153,18 @@ attribute, e. g.::
>
> Setting this parameter to 100 will disable the hysteresis.
>
> +When there is a sizable amount of cold memory residing in the zswap pool, it
> +can be advantageous to proactively write these cold pages to swap and reclaim
> +the memory for other use cases. By default, the zswap shrinker is disabled.
> +User can enable it by first switching on the global knob:
> +
> + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> +
> +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> +it on for each cgroup that the shrinker should target:
> +
> + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> +
> A debugfs interface is provided for various statistic about pool size, number
> of pages stored, same-value filled pages and various counters for the reasons
> pages are rejected.
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 05d34b328d9d..f005ea667863 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -219,6 +219,7 @@ struct mem_cgroup {
>
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> unsigned long zswap_max;
> + atomic_t zswap_shrinker_enabled;
> #endif
>
> unsigned long soft_limit;
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 4106fbc5b4b3..81f4c5ea3e16 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -637,6 +637,20 @@ struct lruvec {
> #ifdef CONFIG_MEMCG
> struct pglist_data *pgdat;
> #endif
> +#ifdef CONFIG_ZSWAP
> + /*
> + * Number of pages in zswap that should be protected from the shrinker.
> + * This number is an estimate of the following counts:
> + *
> + * a) Recent page faults.
> + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> + * as well as recent zswap LRU rotations.
> + *
> + * These pages are likely to be warm, and might incur IO if they are written
> + * to swap.
> + */
> + unsigned long nr_zswap_protected;
> +#endif

Would this be better abstracted in a zswap lruvec struct?

> };
>
> /* Isolate unmapped pages */
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9f84b3f7b469..1a2c97cf396f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> memcg->zswap_max = PAGE_COUNTER_MAX;
> + /* Disable the shrinker by default */
> + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> #endif
> page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> if (parent) {
> @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> return nbytes;
> }
>
> +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> +{
> + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> +
> + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> + return 0;
> +}
> +
> +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> + char *buf, size_t nbytes, loff_t off)
> +{
> + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> + int zswap_shrinker_enabled;
> + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> +
> + if (parse_ret)
> + return parse_ret;
> +
> + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> + return -ERANGE;
> +
> + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> + return nbytes;
> +}
> +
> static struct cftype zswap_files[] = {
> {
> .name = "zswap.current",
> @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> .seq_show = zswap_max_show,
> .write = zswap_max_write,
> },
> + {
> + .name = "zswap.shrinker.enabled",
> + .flags = CFTYPE_NOT_ON_ROOT,
> + .seq_show = zswap_shrinker_enabled_show,
> + .write = zswap_shrinker_enabled_write,
> + },
> { } /* terminate */
> };
> #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 1c826737aacb..788e36a06c34 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> return pages;
> }
>
> +#ifdef CONFIG_ZSWAP
> +/*
> + * Refault is an indication that warmer pages are not resident in memory.
> + * Increase the size of zswap's protected area.
> + */
> +static void inc_nr_protected(struct page *page)
> +{
> + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> + unsigned long flags;
> +
> + spin_lock_irqsave(&lruvec->lru_lock, flags);
> + lruvec->nr_zswap_protected++;
> + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> +}
> +#endif
> +

A few questions:
- Why is this function named in such a generic way?
- Why is this function here instead of in mm/zswap.c?
- Why is this protected by the heavily contested lruvec lock instead
of being an atomic?

> /**
> * swap_cluster_readahead - swap in pages in hope we need them soon
> * @entry: swap entry of this memory
> @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> lru_add_drain(); /* Push any new pages onto the LRU now */
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> +#ifdef CONFIG_ZSWAP
> + if (page)
> + inc_nr_protected(page);
> +#endif
> + return page;
> }
>
> int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> lru_add_drain();
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> - NULL);
> + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> +#ifdef CONFIG_ZSWAP
> + if (page)
> + inc_nr_protected(page);
> +#endif
> + return page;
> }
>
> /**
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 1a469e5d5197..79cb18eeb8bf 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> /* Number of zpools in zswap_pool (empirically determined for scalability) */
> #define ZSWAP_NR_ZPOOLS 32
>
> +/*
> + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> + * the shrinker for each memcg.
> + */
> +static bool zswap_shrinker_enabled;
> +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> +#ifdef CONFIG_MEMCG_KMEM
> +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> +{
> + return zswap_shrinker_enabled &&
> + atomic_read(&memcg->zswap_shrinker_enabled);
> +}
> +#else
> +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> +{
> + return zswap_shrinker_enabled;
> +}
> +#endif
> +
> /*********************************
> * data structures
> **********************************/
> @@ -174,6 +194,8 @@ struct zswap_pool {
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> struct list_lru list_lru;
> struct mem_cgroup *next_shrink;
> + struct shrinker *shrinker;
> + atomic_t nr_stored;
> };
>
> /*
> @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> }
>
> +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> +{
> + u64 pool_size = 0;
> + int i;
> +
> + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> + pool_size += zpool_get_total_size(pool->zpools[i]);
> +
> + return pool_size;
> +}
> +
> static void zswap_update_total_size(void)
> {
> struct zswap_pool *pool;
> u64 total = 0;
> - int i;
>
> rcu_read_lock();
>
> list_for_each_entry_rcu(pool, &zswap_pools, list)
> - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> - total += zpool_get_total_size(pool->zpools[i]);
> + total += get_zswap_pool_size(pool);
>
> rcu_read_unlock();
>
> @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> {
> struct mem_cgroup *memcg = entry->objcg ?
> get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> + unsigned long flags, lru_size;
> +
> + if (added) {
> + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> + spin_lock_irqsave(&lruvec->lru_lock, flags);
> + lruvec->nr_zswap_protected++;
>
> + /*
> + * Decay to avoid overflow and adapt to changing workloads.
> + * This is based on LRU reclaim cost decaying heuristics.
> + */
> + if (lruvec->nr_zswap_protected > lru_size / 4)
> + lruvec->nr_zswap_protected /= 2;
> + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> + }
> mem_cgroup_put(memcg);
> return added;
> }
> @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> else {
> zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> + atomic_dec(&entry->pool->nr_stored);
> zswap_pool_put(entry->pool);
> }
> zswap_entry_cache_free(entry);
> @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> return entry;
> }
>
> +/*********************************
> +* shrinker functions
> +**********************************/
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg);
> +
> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct zswap_pool *pool = shrinker->private_data;
> + unsigned long shrink_ret, nr_zswap_protected, flags,
> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> + bool encountered_page_in_swapcache = false;
> +
> + spin_lock_irqsave(&lruvec->lru_lock, flags);
> + nr_zswap_protected = lruvec->nr_zswap_protected;
> + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> +
> + /*
> + * Abort if the shrinker is disabled or if we are shrinking into the
> + * protected region.
> + */
> + if (!is_shrinker_enabled(sc->memcg) ||
> + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> + sc->nr_scanned = 0;
> + return SHRINK_STOP;
> + }
> +
> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> + &encountered_page_in_swapcache);
> +
> + if (encountered_page_in_swapcache)
> + return SHRINK_STOP;
> +
> + return shrink_ret ? shrink_ret : SHRINK_STOP;
> +}
> +
> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct zswap_pool *pool = shrinker->private_data;
> + struct mem_cgroup *memcg = sc->memcg;
> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> +
> +#ifdef CONFIG_MEMCG_KMEM
> + cgroup_rstat_flush(memcg->css.cgroup);
> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> +#else
> + /* use pool stats instead of memcg stats */
> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> + nr_stored = atomic_read(&pool->nr_stored);
> +#endif
> +
> + if (!is_shrinker_enabled(memcg) || !nr_stored)
> + return 0;
> +
> + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> + /*
> + * Subtract the lru size by an estimate of the number of pages
> + * that should be protected.
> + */
> + spin_lock_irqsave(&lruvec->lru_lock, flags);
> + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> + nr_freeable - lruvec->nr_zswap_protected : 0;
> + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> +
> + /*
> + * Scale the number of freeable pages by the memory saving factor.
> + * This ensures that the better zswap compresses memory, the fewer
> + * pages we will evict to swap (as it will otherwise incur IO for
> + * relatively small memory saving).
> + */
> + return mult_frac(nr_freeable, nr_backing, nr_stored);
> +}
> +
> +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> +{
> + pool->shrinker =
> + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> + if (!pool->shrinker)
> + return;
> +
> + pool->shrinker->private_data = pool;
> + pool->shrinker->scan_objects = zswap_shrinker_scan;
> + pool->shrinker->count_objects = zswap_shrinker_count;
> + pool->shrinker->batch = 0;
> + pool->shrinker->seeks = DEFAULT_SEEKS;
> +}
> +
> /*********************************
> * per-cpu code
> **********************************/
> @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> spinlock_t *lock, void *arg)
> {
> struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> + bool *encountered_page_in_swapcache = (bool *)arg;
> struct mem_cgroup *memcg;
> struct zswap_tree *tree;
> + struct lruvec *lruvec;
> pgoff_t swpoffset;
> enum lru_status ret = LRU_REMOVED_RETRY;
> int writeback_result;
> + unsigned long flags;
>
> /*
> * Once the lru lock is dropped, the entry might get freed. The
> @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> /* we cannot use zswap_lru_add here, because it increments node's lru count */
> list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> spin_unlock(lock);
> - mem_cgroup_put(memcg);
> ret = LRU_RETRY;
> +
> + /*
> + * Encountering a page already in swap cache is a sign that we are shrinking
> + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> + * shrinker context).
> + */
> + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> + ret = LRU_SKIP;
> + *encountered_page_in_swapcache = true;
> + }
> + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> + spin_lock_irqsave(&lruvec->lru_lock, flags);
> + /* Increment the protection area to account for the LRU rotation. */
> + lruvec->nr_zswap_protected++;
> + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> +
> + mem_cgroup_put(memcg);
> goto put_unlock;
> }
>
> @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> &pool->node);
> if (ret)
> goto error;
> +
> + zswap_alloc_shrinker(pool);
> + if (!pool->shrinker)
> + goto error;
> +
> pr_debug("using %s compressor\n", pool->tfm_name);
>
> /* being the current pool takes 1 ref; this func expects the
> @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> INIT_WORK(&pool->shrink_work, shrink_worker);
> - list_lru_init_memcg(&pool->list_lru, NULL);
> + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> + goto lru_fail;
> + shrinker_register(pool->shrinker);
>
> zswap_pool_debug("created", pool);
>
> return pool;
>
> +lru_fail:
> + list_lru_destroy(&pool->list_lru);
> + shrinker_free(pool->shrinker);
> error:
> if (pool->acomp_ctx)
> free_percpu(pool->acomp_ctx);
> @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> zswap_pool_debug("destroying", pool);
>
> + shrinker_free(pool->shrinker);
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> list_lru_destroy(&pool->list_lru);
> @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> zswap_lru_add(&pool->list_lru, entry);
> + atomic_inc(&pool->nr_stored);
> }
> spin_unlock(&tree->lock);
>
> --
> 2.34.1

2023-09-26 06:16:41

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Mon, Sep 25, 2023 at 1:38 PM Yosry Ahmed <[email protected]> wrote:
>
> On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> >
> > Currently, we only shrink the zswap pool when the user-defined limit is
> > hit. This means that if we set the limit too high, cold data that are
> > unlikely to be used again will reside in the pool, wasting precious
> > memory. It is hard to predict how much zswap space will be needed ahead
> > of time, as this depends on the workload (specifically, on factors such
> > as memory access patterns and compressibility of the memory pages).
> >
> > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > is initiated when there is memory pressure. The shrinker does not
> > have any parameter that must be tuned by the user, and can be opted in
> > or out on a per-memcg basis.
>
> What's the use case for having per-memcg opt-in/out?
>
> If there is memory pressure, reclaiming swap-backed pages will push
> pages out of zswap anyway, regardless of this patch. With this patch,
> any sort of reclaim can push pages out of zswap. Wouldn't that be
> preferable to reclaiming memory that is currently resident in memory
> (so arguably hotter than the pages in zswap)? Why would this decision
> be different per-memcg?
I'm not quite following your argument here. The point of having this
be done on a per-memcg basis is that we have different workloads
with different memory access patterns (and as a result, different
memory coldness distributions).

In a workload where there is a lot of cold data, we can really benefit
from reclaiming all of those pages and repurposing the reclaimed
memory (e.g. for the file cache).

On the other hand, in a workload where there isn't a lot of cold data,
reclaiming its zswapped pages will at best do nothing (wasting CPU
cycles on compression/decompression), and at worst hurt performance
(due to increased IO when we need those written-back pages again).

Such different workloads could co-exist in the same system, and having
a per-memcg knob allows us to turn on the shrinker only for workloads
where it makes sense.
>
> >
> > Furthermore, to make it more robust for many workloads and prevent
> > overshrinking (i.e evicting warm pages that might be refaulted into
> > memory), we build in the following heuristics:
> >
> > * Estimate the number of warm pages residing in zswap, and attempt to
> > protect this region of the zswap LRU.
> > * Scale the number of freeable objects by an estimate of the memory
> > saving factor. The better zswap compresses the data, the fewer pages
> > we will evict to swap (as we will otherwise incur IO for relatively
> > small memory saving).
> > * During reclaim, if the shrinker encounters a page that is also being
> > brought into memory, the shrinker will cautiously terminate its
> > shrinking action, as this is a sign that it is touching the warmer
> > region of the zswap LRU.
>
> I don't have an opinion about the reclaim heuristics here, I will let
> reclaim experts chip in.
>
> >
> > On a benchmark that we have run:
>
> Please add more details (as much as possible) about the benchmarks used here.
Sure! I built the kernel in a memory-limited cgroup a couple of times,
then measured the build time.

To simulate conditions where there are cold, unused data, I
also generated a bunch of data in tmpfs (and never touched them
again).
>
> >
> > (without the shrinker)
> > real -- mean: 153.27s, median: 153.199s
> > sys -- mean: 541.652s, median: 541.903s
> > user -- mean: 4384.9673999999995s, median: 4385.471s
> >
> > (with the shrinker)
> > real -- mean: 151.4956s, median: 151.456s
> > sys -- mean: 461.14639999999997s, median: 465.656s
> > user -- mean: 4384.7118s, median: 4384.675s
> >
> > We observed a 14-15% reduction in kernel CPU time, which translated to
> > over 1% reduction in real time.
> >
> > On another benchmark, where there was a lot more cold memory residing in
> > zswap, we observed even more pronounced gains:
> >
> > (without the shrinker)
> > real -- mean: 157.52519999999998s, median: 157.281s
> > sys -- mean: 769.3082s, median: 780.545s
> > user -- mean: 4378.1622s, median: 4378.286s
> >
> > (with the shrinker)
> > real -- mean: 152.9608s, median: 152.845s
> > sys -- mean: 517.4446s, median: 506.749s
> > user -- mean: 4387.694s, median: 4387.935s
> >
> > Here, we saw around 32-35% reduction in kernel CPU time, which
> > translated to 2.8% reduction in real time. These results confirm our
> > hypothesis that the shrinker is more helpful the more cold memory we
> > have.
> >
> > Suggested-by: Johannes Weiner <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> > Documentation/admin-guide/mm/zswap.rst | 12 ++
> > include/linux/memcontrol.h | 1 +
> > include/linux/mmzone.h | 14 ++
> > mm/memcontrol.c | 33 +++++
> > mm/swap_state.c | 31 ++++-
> > mm/zswap.c | 180 ++++++++++++++++++++++++-
> > 6 files changed, 263 insertions(+), 8 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > index 45b98390e938..ae8597a67804 100644
> > --- a/Documentation/admin-guide/mm/zswap.rst
> > +++ b/Documentation/admin-guide/mm/zswap.rst
> > @@ -153,6 +153,18 @@ attribute, e. g.::
> >
> > Setting this parameter to 100 will disable the hysteresis.
> >
> > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > +can be advantageous to proactively write these cold pages to swap and reclaim
> > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > +User can enable it by first switching on the global knob:
> > +
> > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > +
> > +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> > +it on for each cgroup that the shrinker should target:
> > +
> > + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> > +
> > A debugfs interface is provided for various statistic about pool size, number
> > of pages stored, same-value filled pages and various counters for the reasons
> > pages are rejected.
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 05d34b328d9d..f005ea667863 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -219,6 +219,7 @@ struct mem_cgroup {
> >
> > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > unsigned long zswap_max;
> > + atomic_t zswap_shrinker_enabled;
> > #endif
> >
> > unsigned long soft_limit;
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 4106fbc5b4b3..81f4c5ea3e16 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -637,6 +637,20 @@ struct lruvec {
> > #ifdef CONFIG_MEMCG
> > struct pglist_data *pgdat;
> > #endif
> > +#ifdef CONFIG_ZSWAP
> > + /*
> > + * Number of pages in zswap that should be protected from the shrinker.
> > + * This number is an estimate of the following counts:
> > + *
> > + * a) Recent page faults.
> > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > + * as well as recent zswap LRU rotations.
> > + *
> > + * These pages are likely to be warm, and might incur IO if they are written
> > + * to swap.
> > + */
> > + unsigned long nr_zswap_protected;
> > +#endif
>
> Would this be better abstracted in a zswap lruvec struct?
There is just one field, so that sounds like overkill to me.
But if we need to store more data (for smarter heuristics),
that'll be a good idea. I'll keep this in mind. Thanks for the
suggestion, Yosry!
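
For reference, the abstraction suggested above might look roughly like
this (a sketch only, in case more per-lruvec zswap state is added
later; the struct name is made up):

/* Sketch: wrap zswap's per-lruvec state in its own struct, which would
 * replace the bare nr_zswap_protected field in struct lruvec. */
struct zswap_lruvec_state {
	/* Number of zswap pages protected from the shrinker. */
	unsigned long nr_zswap_protected;
};
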
>
> > };
> >
> > /* Isolate unmapped pages */
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 9f84b3f7b469..1a2c97cf396f 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > memcg->zswap_max = PAGE_COUNTER_MAX;
> > + /* Disable the shrinker by default */
> > + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> > #endif
> > page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> > if (parent) {
> > @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> > return nbytes;
> > }
> >
> > +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> > +{
> > + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > +
> > + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> > + return 0;
> > +}
> > +
> > +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> > + char *buf, size_t nbytes, loff_t off)
> > +{
> > + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> > + int zswap_shrinker_enabled;
> > + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> > +
> > + if (parse_ret)
> > + return parse_ret;
> > +
> > + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> > + return -ERANGE;
> > +
> > + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> > + return nbytes;
> > +}
> > +
> > static struct cftype zswap_files[] = {
> > {
> > .name = "zswap.current",
> > @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> > .seq_show = zswap_max_show,
> > .write = zswap_max_write,
> > },
> > + {
> > + .name = "zswap.shrinker.enabled",
> > + .flags = CFTYPE_NOT_ON_ROOT,
> > + .seq_show = zswap_shrinker_enabled_show,
> > + .write = zswap_shrinker_enabled_write,
> > + },
> > { } /* terminate */
> > };
> > #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 1c826737aacb..788e36a06c34 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> > return pages;
> > }
> >
> > +#ifdef CONFIG_ZSWAP
> > +/*
> > + * Refault is an indication that warmer pages are not resident in memory.
> > + * Increase the size of zswap's protected area.
> > + */
> > +static void inc_nr_protected(struct page *page)
> > +{
> > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > + lruvec->nr_zswap_protected++;
> > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > +}
> > +#endif
> > +
>
> A few questions:
> - Why is this function named in such a generic way?
Perhaps inc_nr_zswap_protected would be better? :)
> - Why is this function here instead of in mm/zswap.c?
No particular reason :) It's not being used anywhere else,
so I just put it as a static function here.
> - Why is this protected by the heavily contested lruvec lock instead
> of being an atomic?
nr_zswap_protected can be decayed (see zswap_lru_add), which
I don't think can be implemented with atomics :( An atomic would
indeed be much cleaner.

I'm wary of adding new locks, so I just reuse this existing lock.
But if the lruvec lock is heavily contended (I'm not aware of/familiar
with this issue), then perhaps a new, dedicated lock would help?
>
> > /**
> > * swap_cluster_readahead - swap in pages in hope we need them soon
> > * @entry: swap entry of this memory
> > @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > lru_add_drain(); /* Push any new pages onto the LRU now */
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > +#ifdef CONFIG_ZSWAP
> > + if (page)
> > + inc_nr_protected(page);
> > +#endif
> > + return page;
> > }
> >
> > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > lru_add_drain();
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > - NULL);
> > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > +#ifdef CONFIG_ZSWAP
> > + if (page)
> > + inc_nr_protected(page);
> > +#endif
> > + return page;
> > }
> >
> > /**
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 1a469e5d5197..79cb18eeb8bf 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > #define ZSWAP_NR_ZPOOLS 32
> >
> > +/*
> > + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> > + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> > + * the shrinker for each memcg.
> > + */
> > +static bool zswap_shrinker_enabled;
> > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > +#ifdef CONFIG_MEMCG_KMEM
> > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > +{
> > + return zswap_shrinker_enabled &&
> > + atomic_read(&memcg->zswap_shrinker_enabled);
> > +}
> > +#else
> > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > +{
> > + return zswap_shrinker_enabled;
> > +}
> > +#endif
> > +
> > /*********************************
> > * data structures
> > **********************************/
> > @@ -174,6 +194,8 @@ struct zswap_pool {
> > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > struct list_lru list_lru;
> > struct mem_cgroup *next_shrink;
> > + struct shrinker *shrinker;
> > + atomic_t nr_stored;
> > };
> >
> > /*
> > @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > }
> >
> > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > +{
> > + u64 pool_size = 0;
> > + int i;
> > +
> > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > +
> > + return pool_size;
> > +}
> > +
> > static void zswap_update_total_size(void)
> > {
> > struct zswap_pool *pool;
> > u64 total = 0;
> > - int i;
> >
> > rcu_read_lock();
> >
> > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > - total += zpool_get_total_size(pool->zpools[i]);
> > + total += get_zswap_pool_size(pool);
> >
> > rcu_read_unlock();
> >
> > @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > {
> > struct mem_cgroup *memcg = entry->objcg ?
> > get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > + unsigned long flags, lru_size;
> > +
> > + if (added) {
> > + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > + lruvec->nr_zswap_protected++;
> >
> > + /*
> > + * Decay to avoid overflow and adapt to changing workloads.
> > + * This is based on LRU reclaim cost decaying heuristics.
> > + */
> > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > + lruvec->nr_zswap_protected /= 2;
> > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > + }
> > mem_cgroup_put(memcg);
> > return added;
> > }
> > @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > else {
> > zswap_lru_del(&entry->pool->list_lru, entry);
> > zpool_free(zswap_find_zpool(entry), entry->handle);
> > + atomic_dec(&entry->pool->nr_stored);
> > zswap_pool_put(entry->pool);
> > }
> > zswap_entry_cache_free(entry);
> > @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > return entry;
> > }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct zswap_pool *pool = shrinker->private_data;
> > + unsigned long shrink_ret, nr_zswap_protected, flags,
> > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > + bool encountered_page_in_swapcache = false;
> > +
> > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > + nr_zswap_protected = lruvec->nr_zswap_protected;
> > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > +
> > + /*
> > + * Abort if the shrinker is disabled or if we are shrinking into the
> > + * protected region.
> > + */
> > + if (!is_shrinker_enabled(sc->memcg) ||
> > + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> > + sc->nr_scanned = 0;
> > + return SHRINK_STOP;
> > + }
> > +
> > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > + &encountered_page_in_swapcache);
> > +
> > + if (encountered_page_in_swapcache)
> > + return SHRINK_STOP;
> > +
> > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct zswap_pool *pool = shrinker->private_data;
> > + struct mem_cgroup *memcg = sc->memcg;
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > + cgroup_rstat_flush(memcg->css.cgroup);
> > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > + /* use pool stats instead of memcg stats */
> > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > + nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > + if (!is_shrinker_enabled(memcg) || !nr_stored)
> > + return 0;
> > +
> > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > + /*
> > + * Subtract the lru size by an estimate of the number of pages
> > + * that should be protected.
> > + */
> > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> > + nr_freeable - lruvec->nr_zswap_protected : 0;
> > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > +
> > + /*
> > + * Scale the number of freeable pages by the memory saving factor.
> > + * This ensures that the better zswap compresses memory, the fewer
> > + * pages we will evict to swap (as it will otherwise incur IO for
> > + * relatively small memory saving).
> > + */
> > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > + pool->shrinker =
> > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > + if (!pool->shrinker)
> > + return;
> > +
> > + pool->shrinker->private_data = pool;
> > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > + pool->shrinker->count_objects = zswap_shrinker_count;
> > + pool->shrinker->batch = 0;
> > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> > /*********************************
> > * per-cpu code
> > **********************************/
> > @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > spinlock_t *lock, void *arg)
> > {
> > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > + bool *encountered_page_in_swapcache = (bool *)arg;
> > struct mem_cgroup *memcg;
> > struct zswap_tree *tree;
> > + struct lruvec *lruvec;
> > pgoff_t swpoffset;
> > enum lru_status ret = LRU_REMOVED_RETRY;
> > int writeback_result;
> > + unsigned long flags;
> >
> > /*
> > * Once the lru lock is dropped, the entry might get freed. The
> > @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > spin_unlock(lock);
> > - mem_cgroup_put(memcg);
> > ret = LRU_RETRY;
> > +
> > + /*
> > + * Encountering a page already in swap cache is a sign that we are shrinking
> > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > + * shrinker context).
> > + */
> > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > + ret = LRU_SKIP;
> > + *encountered_page_in_swapcache = true;
> > + }
> > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > + /* Increment the protection area to account for the LRU rotation. */
> > + lruvec->nr_zswap_protected++;
> > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > +
> > + mem_cgroup_put(memcg);
> > goto put_unlock;
> > }
> >
> > @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > &pool->node);
> > if (ret)
> > goto error;
> > +
> > + zswap_alloc_shrinker(pool);
> > + if (!pool->shrinker)
> > + goto error;
> > +
> > pr_debug("using %s compressor\n", pool->tfm_name);
> >
> > /* being the current pool takes 1 ref; this func expects the
> > @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > kref_init(&pool->kref);
> > INIT_LIST_HEAD(&pool->list);
> > INIT_WORK(&pool->shrink_work, shrink_worker);
> > - list_lru_init_memcg(&pool->list_lru, NULL);
> > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > + goto lru_fail;
> > + shrinker_register(pool->shrinker);
> >
> > zswap_pool_debug("created", pool);
> >
> > return pool;
> >
> > +lru_fail:
> > + list_lru_destroy(&pool->list_lru);
> > + shrinker_free(pool->shrinker);
> > error:
> > if (pool->acomp_ctx)
> > free_percpu(pool->acomp_ctx);
> > @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> > zswap_pool_debug("destroying", pool);
> >
> > + shrinker_free(pool->shrinker);
> > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > free_percpu(pool->acomp_ctx);
> > list_lru_destroy(&pool->list_lru);
> > @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> > if (entry->length) {
> > INIT_LIST_HEAD(&entry->lru);
> > zswap_lru_add(&pool->list_lru, entry);
> > + atomic_inc(&pool->nr_stored);
> > }
> > spin_unlock(&tree->lock);
> >
> > --
> > 2.34.1
Thanks for the comments/suggestion, Yosry!

2023-09-26 11:28:07

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Mon, Sep 25, 2023 at 4:29 PM Nhat Pham <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 1:38 PM Yosry Ahmed <[email protected]> wrote:
> >
> > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > >
> > > Currently, we only shrink the zswap pool when the user-defined limit is
> > > hit. This means that if we set the limit too high, cold data that are
> > > unlikely to be used again will reside in the pool, wasting precious
> > > memory. It is hard to predict how much zswap space will be needed ahead
> > > of time, as this depends on the workload (specifically, on factors such
> > > as memory access patterns and compressibility of the memory pages).
> > >
> > > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > > is initiated when there is memory pressure. The shrinker does not
> > > have any parameter that must be tuned by the user, and can be opted in
> > > or out on a per-memcg basis.
> >
> > What's the use case for having per-memcg opt-in/out?
> >
> > If there is memory pressure, reclaiming swap-backed pages will push
> > pages out of zswap anyway, regardless of this patch. With this patch,
> > any sort of reclaim can push pages out of zswap. Wouldn't that be
> > preferable to reclaiming memory that is currently resident in memory
> > (so arguably hotter than the pages in zswap)? Why would this decision
> > be different per-memcg?
> I'm not quite following your argument here. The point of having this
> be done on a per-memcg basis is that we have different workloads
> with different memory access patterns (and as a result, different memory
> coldness distributions).
>
> In a workload where there is a lot of cold data, we can really benefit
> from reclaiming all of those pages and repurpose the memory reclaimed
> (for e.g for filecache).
>
> On the other hand, in a workload where there aren't a lot of cold data,
> reclaiming its zswapped pages will at best do nothing (wasting CPU
> cycles on compression/decompression), and at worst hurt performance
> (due to increased IO when we need those writtenback pages again).
>
> Such different workloads could co-exist in the same system, and having
> a per-memcg knob allows us to crank on the shrinker only on workloads
> where it makes sense.

I am not sure we are on the same page here.

What you're describing sounds more like proactive reclaim, which we
wouldn't invoke unless the workload has cold data anyway.

IIUC, outside of that, this shrinker will run when there is memory
pressure. This means that we need to free memory anyway, regardless of
its absolute coldness. We want to evict the colder pages in the memcg.
It seems to me that in ~all cases, evicting pages in zswap will be
better than evicting pages in memory, as the pages in memory are
arguably hotter (since they weren't reclaimed first). This seems to be
something that would be true for all workloads.

What am I missing?

> >
> > >
> > > Furthermore, to make it more robust for many workloads and prevent
> > > overshrinking (i.e evicting warm pages that might be refaulted into
> > > memory), we build in the following heuristics:
> > >
> > > * Estimate the number of warm pages residing in zswap, and attempt to
> > > protect this region of the zswap LRU.
> > > * Scale the number of freeable objects by an estimate of the memory
> > > saving factor. The better zswap compresses the data, the fewer pages
> > > we will evict to swap (as we will otherwise incur IO for relatively
> > > small memory saving).
> > > * During reclaim, if the shrinker encounters a page that is also being
> > > brought into memory, the shrinker will cautiously terminate its
> > > shrinking action, as this is a sign that it is touching the warmer
> > > region of the zswap LRU.
> >
> > I don't have an opinion about the reclaim heuristics here, I will let
> > reclaim experts chip in.
> >
> > >
> > > On a benchmark that we have run:
> >
> > Please add more details (as much as possible) about the benchmarks used here.
> Sure! I built the kernel in a memory-limited cgroup a couple times,
> then measured the build time.
>
> To simulate conditions where there are cold, unused data, I
> also generated a bunch of data in tmpfs (and never touched them
> again).

Please include such details in the commit message, there is also
another reference below to "another" benchmark.


> >
> > >
> > > (without the shrinker)
> > > real -- mean: 153.27s, median: 153.199s
> > > sys -- mean: 541.652s, median: 541.903s
> > > user -- mean: 4384.9673999999995s, median: 4385.471s
> > >
> > > (with the shrinker)
> > > real -- mean: 151.4956s, median: 151.456s
> > > sys -- mean: 461.14639999999997s, median: 465.656s
> > > user -- mean: 4384.7118s, median: 4384.675s
> > >
> > > We observed a 14-15% reduction in kernel CPU time, which translated to
> > > over 1% reduction in real time.
> > >
> > > On another benchmark, where there was a lot more cold memory residing in
> > > zswap, we observed even more pronounced gains:
> > >
> > > (without the shrinker)
> > > real -- mean: 157.52519999999998s, median: 157.281s
> > > sys -- mean: 769.3082s, median: 780.545s
> > > user -- mean: 4378.1622s, median: 4378.286s
> > >
> > > (with the shrinker)
> > > real -- mean: 152.9608s, median: 152.845s
> > > sys -- mean: 517.4446s, median: 506.749s
> > > user -- mean: 4387.694s, median: 4387.935s
> > >
> > > Here, we saw around 32-35% reduction in kernel CPU time, which
> > > translated to 2.8% reduction in real time. These results confirm our
> > > hypothesis that the shrinker is more helpful the more cold memory we
> > > have.
> > >
> > > Suggested-by: Johannes Weiner <[email protected]>
> > > Signed-off-by: Nhat Pham <[email protected]>
> > > ---
> > > Documentation/admin-guide/mm/zswap.rst | 12 ++
> > > include/linux/memcontrol.h | 1 +
> > > include/linux/mmzone.h | 14 ++
> > > mm/memcontrol.c | 33 +++++
> > > mm/swap_state.c | 31 ++++-
> > > mm/zswap.c | 180 ++++++++++++++++++++++++-
> > > 6 files changed, 263 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > > index 45b98390e938..ae8597a67804 100644
> > > --- a/Documentation/admin-guide/mm/zswap.rst
> > > +++ b/Documentation/admin-guide/mm/zswap.rst
> > > @@ -153,6 +153,18 @@ attribute, e. g.::
> > >
> > > Setting this parameter to 100 will disable the hysteresis.
> > >
> > > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > > +can be advantageous to proactively write these cold pages to swap and reclaim
> > > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > > +User can enable it by first switching on the global knob:
> > > +
> > > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > > +
> > > +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> > > +it on for each cgroup that the shrinker should target:
> > > +
> > > + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> > > +
> > > A debugfs interface is provided for various statistic about pool size, number
> > > of pages stored, same-value filled pages and various counters for the reasons
> > > pages are rejected.
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 05d34b328d9d..f005ea667863 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -219,6 +219,7 @@ struct mem_cgroup {
> > >
> > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > unsigned long zswap_max;
> > > + atomic_t zswap_shrinker_enabled;
> > > #endif
> > >
> > > unsigned long soft_limit;
> > > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > > index 4106fbc5b4b3..81f4c5ea3e16 100644
> > > --- a/include/linux/mmzone.h
> > > +++ b/include/linux/mmzone.h
> > > @@ -637,6 +637,20 @@ struct lruvec {
> > > #ifdef CONFIG_MEMCG
> > > struct pglist_data *pgdat;
> > > #endif
> > > +#ifdef CONFIG_ZSWAP
> > > + /*
> > > + * Number of pages in zswap that should be protected from the shrinker.
> > > + * This number is an estimate of the following counts:
> > > + *
> > > + * a) Recent page faults.
> > > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > > + * as well as recent zswap LRU rotations.
> > > + *
> > > + * These pages are likely to be warm, and might incur IO if they are written
> > > + * to swap.
> > > + */
> > > + unsigned long nr_zswap_protected;
> > > +#endif
> >
> > Would this be better abstracted in a zswap lruvec struct?
> There is just one field, so that sounds like overkill to me.
> But if we need to store more data (for smarter heuristics),
> that'll be a good idea. I'll keep this in mind. Thanks for the
> suggestion, Yosry!

(A space between the quoted text and the reply usually helps visually :)

It wasn't really about the number of fields, but rather about placing this
struct in zswap.h (with the long comment explaining what it's doing)
and adding an abstracted struct member here. The comment will live in
an appropriate file, further modifications don't need to touch
mmzone.h, and struct lruvec is less cluttered for readers that don't
care about zswap (and we can avoid the ifdef).
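
Roughly what I have in mind, as a sketch only (untested, and the struct
name is made up):

/* include/linux/zswap.h */
struct zswap_lruvec_state {
#ifdef CONFIG_ZSWAP
        /*
         * Estimate of recently faulted-in or recently LRU-rotated zswap
         * pages that the shrinker should leave alone (likely-warm pages).
         */
        unsigned long nr_zswap_protected;
#endif
};

/* include/linux/mmzone.h, inside struct lruvec, with no #ifdef */
        struct zswap_lruvec_state zswap_lruvec_state;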

Anyway, this is all mostly aesthetic so I don't feel strongly.

> >
> > > };
> > >
> > > /* Isolate unmapped pages */
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 9f84b3f7b469..1a2c97cf396f 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > > WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > memcg->zswap_max = PAGE_COUNTER_MAX;
> > > + /* Disable the shrinker by default */
> > > + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> > > #endif
> > > page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> > > if (parent) {
> > > @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> > > return nbytes;
> > > }
> > >
> > > +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> > > +{
> > > + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > > +
> > > + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> > > + return 0;
> > > +}
> > > +
> > > +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> > > + char *buf, size_t nbytes, loff_t off)
> > > +{
> > > + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> > > + int zswap_shrinker_enabled;
> > > + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> > > +
> > > + if (parse_ret)
> > > + return parse_ret;
> > > +
> > > + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> > > + return -ERANGE;
> > > +
> > > + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> > > + return nbytes;
> > > +}
> > > +
> > > static struct cftype zswap_files[] = {
> > > {
> > > .name = "zswap.current",
> > > @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> > > .seq_show = zswap_max_show,
> > > .write = zswap_max_write,
> > > },
> > > + {
> > > + .name = "zswap.shrinker.enabled",
> > > + .flags = CFTYPE_NOT_ON_ROOT,
> > > + .seq_show = zswap_shrinker_enabled_show,
> > > + .write = zswap_shrinker_enabled_write,
> > > + },
> > > { } /* terminate */
> > > };
> > > #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index 1c826737aacb..788e36a06c34 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> > > return pages;
> > > }
> > >
> > > +#ifdef CONFIG_ZSWAP
> > > +/*
> > > + * Refault is an indication that warmer pages are not resident in memory.
> > > + * Increase the size of zswap's protected area.
> > > + */
> > > +static void inc_nr_protected(struct page *page)
> > > +{
> > > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > > + unsigned long flags;
> > > +
> > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > + lruvec->nr_zswap_protected++;
> > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > +}
> > > +#endif
> > > +
> >
> > A few questions:
> > - Why is this function named in such a generic way?
> Perhaps inc_nr_zswap_protected would be better? :)

If we use an atomic, the function can go away anyway. See below.

> > - Why is this function here instead of in mm/zswap.c?
> No particular reason :) It's not being used anywhere else,
> so I just put it as a static function here.

It is inline in mm/zswap.c in one place. I personally would have
preferred nr_zswap_protected and the helper to be defined in
zswap.h/zswap.c as I mentioned below. Anyway, this function can go
away.

> > - Why is this protected by the heavily contested lruvec lock instead
> > of being an atomic?
> nr_zswap_protected can be decayed (see zswap_lru_add), which
> I don't think can be implemented with atomics :( It'd be much
> cleaner indeed.

I think a cmpxchg (or a try_cmpxchg) loop can be used in this case to
implement it using an atomic?

See https://docs.kernel.org/core-api/wrappers/atomic_t.html.
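
Something like the below is what I mean -- just an untested sketch,
assuming nr_zswap_protected is converted to an atomic_long_t (the
helper name is made up):

static void zswap_protect_inc(atomic_long_t *nr_protected,
                              unsigned long lru_size)
{
        long old = atomic_long_read(nr_protected);
        long new;

        do {
                new = old + 1;
                /* preserve the existing decay once we pass lru_size / 4 */
                if (new > lru_size / 4)
                        new /= 2;
        } while (!atomic_long_try_cmpxchg(nr_protected, &old, new));
}

That keeps the decay semantics without taking the lruvec lock.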

> > > + lruvec->nr_zswap_protected++;
> > >
> > > + /*
> > > + * Decay to avoid overflow and adapt to changing workloads.
> > > + * This is based on LRU reclaim cost decaying heuristics.
> > > + */
> > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > + lruvec->nr_zswap_protected /= 2;

>
> I'm wary of adding new locks, so I just re-use this existing lock.
> But if lruvec lock is heavily congested (I'm not aware/familiar with
> this issue), then perhaps a new, dedicated lock would help?
> >
> > > /**
> > > * swap_cluster_readahead - swap in pages in hope we need them soon
> > > * @entry: swap entry of this memory
> > > @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > > lru_add_drain(); /* Push any new pages onto the LRU now */
> > > skip:
> > > /* The page was likely read above, so no need for plugging here */
> > > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > +#ifdef CONFIG_ZSWAP
> > > + if (page)
> > > + inc_nr_protected(page);
> > > +#endif
> > > + return page;
> > > }
> > >
> > > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > > @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > > lru_add_drain();
> > > skip:
> > > /* The page was likely read above, so no need for plugging here */
> > > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > > - NULL);
> > > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > > +#ifdef CONFIG_ZSWAP
> > > + if (page)
> > > + inc_nr_protected(page);
> > > +#endif
> > > + return page;
> > > }
> > >
> > > /**
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index 1a469e5d5197..79cb18eeb8bf 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > > #define ZSWAP_NR_ZPOOLS 32
> > >
> > > +/*
> > > + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> > > + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> > > + * the shrinker for each memcg.
> > > + */
> > > +static bool zswap_shrinker_enabled;
> > > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > > +#ifdef CONFIG_MEMCG_KMEM
> > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > +{
> > > + return zswap_shrinker_enabled &&
> > > + atomic_read(&memcg->zswap_shrinker_enabled);
> > > +}
> > > +#else
> > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > +{
> > > + return zswap_shrinker_enabled;
> > > +}
> > > +#endif
> > > +
> > > /*********************************
> > > * data structures
> > > **********************************/
> > > @@ -174,6 +194,8 @@ struct zswap_pool {
> > > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > struct list_lru list_lru;
> > > struct mem_cgroup *next_shrink;
> > > + struct shrinker *shrinker;
> > > + atomic_t nr_stored;
> > > };
> > >
> > > /*
> > > @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> > > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > > }
> > >
> > > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > > +{
> > > + u64 pool_size = 0;
> > > + int i;
> > > +
> > > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > > +
> > > + return pool_size;
> > > +}
> > > +
> > > static void zswap_update_total_size(void)
> > > {
> > > struct zswap_pool *pool;
> > > u64 total = 0;
> > > - int i;
> > >
> > > rcu_read_lock();
> > >
> > > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > - total += zpool_get_total_size(pool->zpools[i]);
> > > + total += get_zswap_pool_size(pool);
> > >
> > > rcu_read_unlock();
> > >
> > > @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > {
> > > struct mem_cgroup *memcg = entry->objcg ?
> > > get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > > + unsigned long flags, lru_size;
> > > +
> > > + if (added) {
> > > + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > + lruvec->nr_zswap_protected++;
> > >
> > > + /*
> > > + * Decay to avoid overflow and adapt to changing workloads.
> > > + * This is based on LRU reclaim cost decaying heuristics.
> > > + */
> > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > + lruvec->nr_zswap_protected /= 2;
> > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > + }
> > > mem_cgroup_put(memcg);
> > > return added;
> > > }
> > > @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > > else {
> > > zswap_lru_del(&entry->pool->list_lru, entry);
> > > zpool_free(zswap_find_zpool(entry), entry->handle);
> > > + atomic_dec(&entry->pool->nr_stored);
> > > zswap_pool_put(entry->pool);
> > > }
> > > zswap_entry_cache_free(entry);
> > > @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > return entry;
> > > }
> > >
> > > +/*********************************
> > > +* shrinker functions
> > > +**********************************/
> > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > + spinlock_t *lock, void *arg);
> > > +
> > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + unsigned long shrink_ret, nr_zswap_protected, flags,
> > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > + bool encountered_page_in_swapcache = false;
> > > +
> > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > + nr_zswap_protected = lruvec->nr_zswap_protected;
> > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > +
> > > + /*
> > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > + * protected region.
> > > + */
> > > + if (!is_shrinker_enabled(sc->memcg) ||
> > > + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> > > + sc->nr_scanned = 0;
> > > + return SHRINK_STOP;
> > > + }
> > > +
> > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > + &encountered_page_in_swapcache);
> > > +
> > > + if (encountered_page_in_swapcache)
> > > + return SHRINK_STOP;
> > > +
> > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > +}
> > > +
> > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + struct mem_cgroup *memcg = sc->memcg;
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> > > +
> > > +#ifdef CONFIG_MEMCG_KMEM
> > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > +#else
> > > + /* use pool stats instead of memcg stats */
> > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > + nr_stored = atomic_read(&pool->nr_stored);
> > > +#endif
> > > +
> > > + if (!is_shrinker_enabled(memcg) || !nr_stored)
> > > + return 0;
> > > +
> > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > + /*
> > > + * Subtract from the lru size an estimate of the number of pages
> > > + * that should be protected.
> > > + */
> > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> > > + nr_freeable - lruvec->nr_zswap_protected : 0;
> > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > +
> > > + /*
> > > + * Scale the number of freeable pages by the memory saving factor.
> > > + * This ensures that the better zswap compresses memory, the fewer
> > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > + * relatively small memory saving).
> > > + */
> > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > +}
> > > +
> > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > +{
> > > + pool->shrinker =
> > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > + if (!pool->shrinker)
> > > + return;
> > > +
> > > + pool->shrinker->private_data = pool;
> > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > + pool->shrinker->batch = 0;
> > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > +}
> > > +
> > > /*********************************
> > > * per-cpu code
> > > **********************************/
> > > @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > spinlock_t *lock, void *arg)
> > > {
> > > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > + bool *encountered_page_in_swapcache = (bool *)arg;
> > > struct mem_cgroup *memcg;
> > > struct zswap_tree *tree;
> > > + struct lruvec *lruvec;
> > > pgoff_t swpoffset;
> > > enum lru_status ret = LRU_REMOVED_RETRY;
> > > int writeback_result;
> > > + unsigned long flags;
> > >
> > > /*
> > > * Once the lru lock is dropped, the entry might get freed. The
> > > @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > > spin_unlock(lock);
> > > - mem_cgroup_put(memcg);
> > > ret = LRU_RETRY;
> > > +
> > > + /*
> > > + * Encountering a page already in swap cache is a sign that we are shrinking
> > > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > > + * shrinker context).
> > > + */
> > > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > > + ret = LRU_SKIP;
> > > + *encountered_page_in_swapcache = true;
> > > + }
> > > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > + /* Increment the protection area to account for the LRU rotation. */
> > > + lruvec->nr_zswap_protected++;
> > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > +
> > > + mem_cgroup_put(memcg);
> > > goto put_unlock;
> > > }
> > >
> > > @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > &pool->node);
> > > if (ret)
> > > goto error;
> > > +
> > > + zswap_alloc_shrinker(pool);
> > > + if (!pool->shrinker)
> > > + goto error;
> > > +
> > > pr_debug("using %s compressor\n", pool->tfm_name);
> > >
> > > /* being the current pool takes 1 ref; this func expects the
> > > @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > kref_init(&pool->kref);
> > > INIT_LIST_HEAD(&pool->list);
> > > INIT_WORK(&pool->shrink_work, shrink_worker);
> > > - list_lru_init_memcg(&pool->list_lru, NULL);
> > > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > > + goto lru_fail;
> > > + shrinker_register(pool->shrinker);
> > >
> > > zswap_pool_debug("created", pool);
> > >
> > > return pool;
> > >
> > > +lru_fail:
> > > + list_lru_destroy(&pool->list_lru);
> > > + shrinker_free(pool->shrinker);
> > > error:
> > > if (pool->acomp_ctx)
> > > free_percpu(pool->acomp_ctx);
> > > @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > >
> > > zswap_pool_debug("destroying", pool);
> > >
> > > + shrinker_free(pool->shrinker);
> > > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > free_percpu(pool->acomp_ctx);
> > > list_lru_destroy(&pool->list_lru);
> > > @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> > > if (entry->length) {
> > > INIT_LIST_HEAD(&entry->lru);
> > > zswap_lru_add(&pool->list_lru, entry);
> > > + atomic_inc(&pool->nr_stored);
> > > }
> > > spin_unlock(&tree->lock);
> > >
> > > --
> > > 2.34.1
> Thanks for the comments/suggestion, Yosry!

2023-09-26 12:26:16

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Mon, Sep 25, 2023 at 5:00 PM Yosry Ahmed <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 4:29 PM Nhat Pham <[email protected]> wrote:
> >
> > On Mon, Sep 25, 2023 at 1:38 PM Yosry Ahmed <[email protected]> wrote:
> > >
> > > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > > >
> > > > Currently, we only shrink the zswap pool when the user-defined limit is
> > > > hit. This means that if we set the limit too high, cold data that are
> > > > unlikely to be used again will reside in the pool, wasting precious
> > > > memory. It is hard to predict how much zswap space will be needed ahead
> > > > of time, as this depends on the workload (specifically, on factors such
> > > > as memory access patterns and compressibility of the memory pages).
> > > >
> > > > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > > > is initiated when there is memory pressure. The shrinker does not
> > > > have any parameter that must be tuned by the user, and can be opted in
> > > > or out on a per-memcg basis.
> > >
> > > What's the use case for having per-memcg opt-in/out?
> > >
> > > If there is memory pressure, reclaiming swap-backed pages will push
> > > pages out of zswap anyway, regardless of this patch. With this patch,
> > > any sort of reclaim can push pages out of zswap. Wouldn't that be
> > > preferable to reclaiming memory that is currently resident in memory
> > > (so arguably hotter than the pages in zswap)? Why would this decision
> > > be different per-memcg?
> > I'm not quite following your argument here. The point of having this
> > be done on a per-memcg basis is that we have different workloads
> > with different memory access patterns (and as a result, different memory
> > coldness distributions).
> >
> > In a workload where there is a lot of cold data, we can really benefit
> > from reclaiming all of those pages and repurpose the memory reclaimed
> > (for e.g for filecache).
> >
> > On the other hand, in a workload where there aren't a lot of cold data,
> > reclaiming its zswapped pages will at best do nothing (wasting CPU
> > cycles on compression/decompression), and at worst hurt performance
> > (due to increased IO when we need those writtenback pages again).
> >
> > Such different workloads could co-exist in the same system, and having
> > a per-memcg knob allows us to crank on the shrinker only on workloads
> > where it makes sense.
>
> I am not sure we are on the same page here.
>
> What you're describing sounds more like proactive reclaim, which we
> wouldn't invoke unless the workload has cold data anyway.
>
> IIUC, outside of that, this shrinker will run when there is memory
> pressure. This means that we need to free memory anyway, regardless of
> its absolute coldness. We want to evict the colder pages in the memcg.
> It seems to me that in ~all cases, evicting pages in zswap will be
> better than evicting pages in memory, as the pages in memory are
> arguably hotter (since they weren't reclaimed first). This seems to be
> something that would be true for all workloads.
>
> What am I missing?

Yup, the shrinker is initiated under memory pressure.
And with it, we can reclaim memory from zswap when
it's (often) not at max capacity.

The kernel has no knowledge of absolute coldness, only relative
coldness thanks to LRU. We don't have a global LRU of all possible
memory pages/objects for a particular memcg either, so we cannot
compare the coldness of objects from different sources.

The "coldest" pages in zswap LRU could very well be warm enough
that swapping them out degrades performance, while there is even
colder memory from other sources (other shrinkers registered for this
memcg). Alternatively, we can also "evict" uncompressed anonymous
memory, which will go to the zswap pool. This also saves memory,
and could potentially be better than zswap reclaim (2 compressed
pages might be better performance-wise than 1 uncompressed,
1 swapped out)
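
(Rough numbers to illustrate, assuming a 3:1 compression ratio:
compressing one more anon page frees about 2/3 of a page with no IO,
while writing back one already-compressed page frees only about 1/3 of
a page and costs a swap-out now, plus a possible swap-in later.)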

All of this depends on the memory access pattern of the workloads,
which could differ cgroup-by-cgroup within the same system.
Having a per-memcg knob is a way for admins to influence this
decision from userspace, if the admins have knowledge about
workload memory access patterns.

For example, if we know that there is one particular cgroup that populates
a bunch of single-use tmpfs pages, then we can target that cgroup
specifically, while leaving the other cgroups in the system alone.
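
With the knobs in this series, that would look something like the
following (the cgroup name is just an example):

  echo Y > /sys/module/zswap/parameters/shrinker_enabled
  echo 1 > /sys/fs/cgroup/tmpfs-heavy-job/memory.zswap.shrinker.enabled

Every other cgroup keeps the default of 0, so its zswap pool is left
alone by the shrinker.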

>
> > >
> > > >
> > > > Furthermore, to make it more robust for many workloads and prevent
> > > > overshrinking (i.e evicting warm pages that might be refaulted into
> > > > memory), we build in the following heuristics:
> > > >
> > > > * Estimate the number of warm pages residing in zswap, and attempt to
> > > > protect this region of the zswap LRU.
> > > > * Scale the number of freeable objects by an estimate of the memory
> > > > saving factor. The better zswap compresses the data, the fewer pages
> > > > we will evict to swap (as we will otherwise incur IO for relatively
> > > > small memory saving).
> > > > * During reclaim, if the shrinker encounters a page that is also being
> > > > brought into memory, the shrinker will cautiously terminate its
> > > > shrinking action, as this is a sign that it is touching the warmer
> > > > region of the zswap LRU.
> > >
> > > I don't have an opinion about the reclaim heuristics here, I will let
> > > reclaim experts chip in.
> > >
> > > >
> > > > On a benchmark that we have run:
> > >
> > > Please add more details (as much as possible) about the benchmarks used here.
> > Sure! I built the kernel in a memory-limited cgroup a couple times,
> > then measured the build time.
> >
> > To simulate conditions where there are cold, unused data, I
> > also generated a bunch of data in tmpfs (and never touched them
> > again).
>
> Please include such details in the commit message, there is also
> another reference below to "another" benchmark.

Will do if/when I send v3.
The "another" benchmark is just generating even more tmpfs cold data :)

>
>
> > >
> > > >
> > > > (without the shrinker)
> > > > real -- mean: 153.27s, median: 153.199s
> > > > sys -- mean: 541.652s, median: 541.903s
> > > > user -- mean: 4384.9673999999995s, median: 4385.471s
> > > >
> > > > (with the shrinker)
> > > > real -- mean: 151.4956s, median: 151.456s
> > > > sys -- mean: 461.14639999999997s, median: 465.656s
> > > > user -- mean: 4384.7118s, median: 4384.675s
> > > >
> > > > We observed a 14-15% reduction in kernel CPU time, which translated to
> > > > over 1% reduction in real time.
> > > >
> > > > On another benchmark, where there was a lot more cold memory residing in
> > > > zswap, we observed even more pronounced gains:
> > > >
> > > > (without the shrinker)
> > > > real -- mean: 157.52519999999998s, median: 157.281s
> > > > sys -- mean: 769.3082s, median: 780.545s
> > > > user -- mean: 4378.1622s, median: 4378.286s
> > > >
> > > > (with the shrinker)
> > > > real -- mean: 152.9608s, median: 152.845s
> > > > sys -- mean: 517.4446s, median: 506.749s
> > > > user -- mean: 4387.694s, median: 4387.935s
> > > >
> > > > Here, we saw around 32-35% reduction in kernel CPU time, which
> > > > translated to 2.8% reduction in real time. These results confirm our
> > > > hypothesis that the shrinker is more helpful the more cold memory we
> > > > have.
> > > >
> > > > Suggested-by: Johannes Weiner <[email protected]>
> > > > Signed-off-by: Nhat Pham <[email protected]>
> > > > ---
> > > > Documentation/admin-guide/mm/zswap.rst | 12 ++
> > > > include/linux/memcontrol.h | 1 +
> > > > include/linux/mmzone.h | 14 ++
> > > > mm/memcontrol.c | 33 +++++
> > > > mm/swap_state.c | 31 ++++-
> > > > mm/zswap.c | 180 ++++++++++++++++++++++++-
> > > > 6 files changed, 263 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > > > index 45b98390e938..ae8597a67804 100644
> > > > --- a/Documentation/admin-guide/mm/zswap.rst
> > > > +++ b/Documentation/admin-guide/mm/zswap.rst
> > > > @@ -153,6 +153,18 @@ attribute, e. g.::
> > > >
> > > > Setting this parameter to 100 will disable the hysteresis.
> > > >
> > > > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > > > +can be advantageous to proactively write these cold pages to swap and reclaim
> > > > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > > > +User can enable it by first switching on the global knob:
> > > > +
> > > > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > > > +
> > > > +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> > > > +it on for each cgroup that the shrinker should target:
> > > > +
> > > > + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> > > > +
> > > > A debugfs interface is provided for various statistic about pool size, number
> > > > of pages stored, same-value filled pages and various counters for the reasons
> > > > pages are rejected.
> > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > index 05d34b328d9d..f005ea667863 100644
> > > > --- a/include/linux/memcontrol.h
> > > > +++ b/include/linux/memcontrol.h
> > > > @@ -219,6 +219,7 @@ struct mem_cgroup {
> > > >
> > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > unsigned long zswap_max;
> > > > + atomic_t zswap_shrinker_enabled;
> > > > #endif
> > > >
> > > > unsigned long soft_limit;
> > > > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > > > index 4106fbc5b4b3..81f4c5ea3e16 100644
> > > > --- a/include/linux/mmzone.h
> > > > +++ b/include/linux/mmzone.h
> > > > @@ -637,6 +637,20 @@ struct lruvec {
> > > > #ifdef CONFIG_MEMCG
> > > > struct pglist_data *pgdat;
> > > > #endif
> > > > +#ifdef CONFIG_ZSWAP
> > > > + /*
> > > > + * Number of pages in zswap that should be protected from the shrinker.
> > > > + * This number is an estimate of the following counts:
> > > > + *
> > > > + * a) Recent page faults.
> > > > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > > > + * as well as recent zswap LRU rotations.
> > > > + *
> > > > + * These pages are likely to be warm, and might incur IO if they are written
> > > > + * to swap.
> > > > + */
> > > > + unsigned long nr_zswap_protected;
> > > > +#endif
> > >
> > > Would this be better abstracted in a zswap lruvec struct?
> > There is just one field, so that sounds like overkill to me.
> > But if we need to store more data (for smarter heuristics),
> > that'll be a good idea. I'll keep this in mind. Thanks for the
> > suggestion, Yosry!
>
> (A space between the quoted text and the reply usually helps visually :)
>
> It wasn't really about the number of fields, but rather about placing this
> struct in zswap.h (with the long comment explaining what it's doing)
> and adding an abstracted struct member here. The comment will live in
> an appropriate file, further modifications don't need to touch
> mmzone.h, and struct lruvec is less cluttered for readers that don't
> care about zswap (and we can avoid the ifdef).
>
> Anyway, this is all mostly aesthetic so I don't feel strongly.
>
> > >
> > > > };
> > > >
> > > > /* Isolate unmapped pages */
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > index 9f84b3f7b469..1a2c97cf396f 100644
> > > > --- a/mm/memcontrol.c
> > > > +++ b/mm/memcontrol.c
> > > > @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > > > WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > memcg->zswap_max = PAGE_COUNTER_MAX;
> > > > + /* Disable the shrinker by default */
> > > > + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> > > > #endif
> > > > page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> > > > if (parent) {
> > > > @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> > > > return nbytes;
> > > > }
> > > >
> > > > +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> > > > +{
> > > > + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > > > +
> > > > + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> > > > + return 0;
> > > > +}
> > > > +
> > > > +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> > > > + char *buf, size_t nbytes, loff_t off)
> > > > +{
> > > > + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> > > > + int zswap_shrinker_enabled;
> > > > + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> > > > +
> > > > + if (parse_ret)
> > > > + return parse_ret;
> > > > +
> > > > + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> > > > + return -ERANGE;
> > > > +
> > > > + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> > > > + return nbytes;
> > > > +}
> > > > +
> > > > static struct cftype zswap_files[] = {
> > > > {
> > > > .name = "zswap.current",
> > > > @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> > > > .seq_show = zswap_max_show,
> > > > .write = zswap_max_write,
> > > > },
> > > > + {
> > > > + .name = "zswap.shrinker.enabled",
> > > > + .flags = CFTYPE_NOT_ON_ROOT,
> > > > + .seq_show = zswap_shrinker_enabled_show,
> > > > + .write = zswap_shrinker_enabled_write,
> > > > + },
> > > > { } /* terminate */
> > > > };
> > > > #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> > > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > > index 1c826737aacb..788e36a06c34 100644
> > > > --- a/mm/swap_state.c
> > > > +++ b/mm/swap_state.c
> > > > @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> > > > return pages;
> > > > }
> > > >
> > > > +#ifdef CONFIG_ZSWAP
> > > > +/*
> > > > + * Refault is an indication that warmer pages are not resident in memory.
> > > > + * Increase the size of zswap's protected area.
> > > > + */
> > > > +static void inc_nr_protected(struct page *page)
> > > > +{
> > > > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > > > + unsigned long flags;
> > > > +
> > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > + lruvec->nr_zswap_protected++;
> > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > +}
> > > > +#endif
> > > > +
> > >
> > > A few questions:
> > > - Why is this function named in such a generic way?
> > Perhaps inc_nr_zswap_protected would be better? :)
>
> If we use an atomic, the function can go away anyway. See below.
>
> > > - Why is this function here instead of in mm/zswap.c?
> > No particular reason :) It's not being used anywhere else,
> > so I just put it as a static function here.
>
> It is inline in mm/zswap.c in one place. I personally would have
> preferred nr_zswap_protected and the helper to be defined in
> zswap.h/zswap.c as I mentioned below. Anyway, this function can go
> away.
>
> > > - Why is this protected by the heavily contested lruvec lock instead
> > > of being an atomic?
> > nr_zswap_protected can be decayed (see zswap_lru_add), which
> > I don't think can be implemented with atomics :( It'd be much
> > cleaner indeed.
>
> I think a cmpxchg (or a try_cmpxchg) loop can be used in this case to
> implement it using an atomic?
>
> See https://docs.kernel.org/core-api/wrappers/atomic_t.html.

Ah, I did think about this, but that seemed like overkill at the time.
But if lruvec lock is indeed hotly contested, this should help.

>
> > > > + lruvec->nr_zswap_protected++;
> > > >
> > > > + /*
> > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > + */
> > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > + lruvec->nr_zswap_protected /= 2;
>
> >
> > I'm wary of adding new locks, so I just re-use this existing lock.
> > But if lruvec lock is heavily congested (I'm not aware/familiar with
> > this issue), then perhaps a new, dedicated lock would help?
> > >
> > > > /**
> > > > * swap_cluster_readahead - swap in pages in hope we need them soon
> > > > * @entry: swap entry of this memory
> > > > @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > > > lru_add_drain(); /* Push any new pages onto the LRU now */
> > > > skip:
> > > > /* The page was likely read above, so no need for plugging here */
> > > > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > +#ifdef CONFIG_ZSWAP
> > > > + if (page)
> > > > + inc_nr_protected(page);
> > > > +#endif
> > > > + return page;
> > > > }
> > > >
> > > > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > > > @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > > > lru_add_drain();
> > > > skip:
> > > > /* The page was likely read above, so no need for plugging here */
> > > > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > > > - NULL);
> > > > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > > > +#ifdef CONFIG_ZSWAP
> > > > + if (page)
> > > > + inc_nr_protected(page);
> > > > +#endif
> > > > + return page;
> > > > }
> > > >
> > > > /**
> > > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > > index 1a469e5d5197..79cb18eeb8bf 100644
> > > > --- a/mm/zswap.c
> > > > +++ b/mm/zswap.c
> > > > @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > > > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > > > #define ZSWAP_NR_ZPOOLS 32
> > > >
> > > > +/*
> > > > + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> > > > + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> > > > + * the shrinker for each memcg.
> > > > + */
> > > > +static bool zswap_shrinker_enabled;
> > > > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > +{
> > > > + return zswap_shrinker_enabled &&
> > > > + atomic_read(&memcg->zswap_shrinker_enabled);
> > > > +}
> > > > +#else
> > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > +{
> > > > + return zswap_shrinker_enabled;
> > > > +}
> > > > +#endif
> > > > +
> > > > /*********************************
> > > > * data structures
> > > > **********************************/
> > > > @@ -174,6 +194,8 @@ struct zswap_pool {
> > > > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > > struct list_lru list_lru;
> > > > struct mem_cgroup *next_shrink;
> > > > + struct shrinker *shrinker;
> > > > + atomic_t nr_stored;
> > > > };
> > > >
> > > > /*
> > > > @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> > > > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > > > }
> > > >
> > > > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > > > +{
> > > > + u64 pool_size = 0;
> > > > + int i;
> > > > +
> > > > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > > > +
> > > > + return pool_size;
> > > > +}
> > > > +
> > > > static void zswap_update_total_size(void)
> > > > {
> > > > struct zswap_pool *pool;
> > > > u64 total = 0;
> > > > - int i;
> > > >
> > > > rcu_read_lock();
> > > >
> > > > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > > > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > - total += zpool_get_total_size(pool->zpools[i]);
> > > > + total += get_zswap_pool_size(pool);
> > > >
> > > > rcu_read_unlock();
> > > >
> > > > @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > > {
> > > > struct mem_cgroup *memcg = entry->objcg ?
> > > > get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > > > + unsigned long flags, lru_size;
> > > > +
> > > > + if (added) {
> > > > + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > + lruvec->nr_zswap_protected++;
> > > >
> > > > + /*
> > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > + */
> > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > + lruvec->nr_zswap_protected /= 2;
> > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > + }
> > > > mem_cgroup_put(memcg);
> > > > return added;
> > > > }
> > > > @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > > > else {
> > > > zswap_lru_del(&entry->pool->list_lru, entry);
> > > > zpool_free(zswap_find_zpool(entry), entry->handle);
> > > > + atomic_dec(&entry->pool->nr_stored);
> > > > zswap_pool_put(entry->pool);
> > > > }
> > > > zswap_entry_cache_free(entry);
> > > > @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > > return entry;
> > > > }
> > > >
> > > > +/*********************************
> > > > +* shrinker functions
> > > > +**********************************/
> > > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > > + spinlock_t *lock, void *arg);
> > > > +
> > > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > > + struct shrink_control *sc)
> > > > +{
> > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > + unsigned long shrink_ret, nr_zswap_protected, flags,
> > > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > > + bool encountered_page_in_swapcache = false;
> > > > +
> > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > + nr_zswap_protected = lruvec->nr_zswap_protected;
> > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > +
> > > > + /*
> > > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > > + * protected region.
> > > > + */
> > > > + if (!is_shrinker_enabled(sc->memcg) ||
> > > > + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> > > > + sc->nr_scanned = 0;
> > > > + return SHRINK_STOP;
> > > > + }
> > > > +
> > > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > > + &encountered_page_in_swapcache);
> > > > +
> > > > + if (encountered_page_in_swapcache)
> > > > + return SHRINK_STOP;
> > > > +
> > > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > > +}
> > > > +
> > > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > > + struct shrink_control *sc)
> > > > +{
> > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > + struct mem_cgroup *memcg = sc->memcg;
> > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > > + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> > > > +
> > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > > +#else
> > > > + /* use pool stats instead of memcg stats */
> > > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > > + nr_stored = atomic_read(&pool->nr_stored);
> > > > +#endif
> > > > +
> > > > + if (!is_shrinker_enabled(memcg) || !nr_stored)
> > > > + return 0;
> > > > +
> > > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > > + /*
> > > > + * Subtract from the lru size an estimate of the number of pages
> > > > + * that should be protected.
> > > > + */
> > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> > > > + nr_freeable - lruvec->nr_zswap_protected : 0;
> > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > +
> > > > + /*
> > > > + * Scale the number of freeable pages by the memory saving factor.
> > > > + * This ensures that the better zswap compresses memory, the fewer
> > > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > > + * relatively small memory saving).
> > > > + */
> > > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > > +}
> > > > +
> > > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > > +{
> > > > + pool->shrinker =
> > > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > > + if (!pool->shrinker)
> > > > + return;
> > > > +
> > > > + pool->shrinker->private_data = pool;
> > > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > > + pool->shrinker->batch = 0;
> > > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > > +}
> > > > +
> > > > /*********************************
> > > > * per-cpu code
> > > > **********************************/
> > > > @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > spinlock_t *lock, void *arg)
> > > > {
> > > > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > > + bool *encountered_page_in_swapcache = (bool *)arg;
> > > > struct mem_cgroup *memcg;
> > > > struct zswap_tree *tree;
> > > > + struct lruvec *lruvec;
> > > > pgoff_t swpoffset;
> > > > enum lru_status ret = LRU_REMOVED_RETRY;
> > > > int writeback_result;
> > > > + unsigned long flags;
> > > >
> > > > /*
> > > > * Once the lru lock is dropped, the entry might get freed. The
> > > > @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > > > spin_unlock(lock);
> > > > - mem_cgroup_put(memcg);
> > > > ret = LRU_RETRY;
> > > > +
> > > > + /*
> > > > + * Encountering a page already in swap cache is a sign that we are shrinking
> > > > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > > > + * shrinker context).
> > > > + */
> > > > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > > > + ret = LRU_SKIP;
> > > > + *encountered_page_in_swapcache = true;
> > > > + }
> > > > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > + /* Increment the protection area to account for the LRU rotation. */
> > > > + lruvec->nr_zswap_protected++;
> > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > +
> > > > + mem_cgroup_put(memcg);
> > > > goto put_unlock;
> > > > }
> > > >
> > > > @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > &pool->node);
> > > > if (ret)
> > > > goto error;
> > > > +
> > > > + zswap_alloc_shrinker(pool);
> > > > + if (!pool->shrinker)
> > > > + goto error;
> > > > +
> > > > pr_debug("using %s compressor\n", pool->tfm_name);
> > > >
> > > > /* being the current pool takes 1 ref; this func expects the
> > > > @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > kref_init(&pool->kref);
> > > > INIT_LIST_HEAD(&pool->list);
> > > > INIT_WORK(&pool->shrink_work, shrink_worker);
> > > > - list_lru_init_memcg(&pool->list_lru, NULL);
> > > > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > > > + goto lru_fail;
> > > > + shrinker_register(pool->shrinker);
> > > >
> > > > zswap_pool_debug("created", pool);
> > > >
> > > > return pool;
> > > >
> > > > +lru_fail:
> > > > + list_lru_destroy(&pool->list_lru);
> > > > + shrinker_free(pool->shrinker);
> > > > error:
> > > > if (pool->acomp_ctx)
> > > > free_percpu(pool->acomp_ctx);
> > > > @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > > >
> > > > zswap_pool_debug("destroying", pool);
> > > >
> > > > + shrinker_free(pool->shrinker);
> > > > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > > free_percpu(pool->acomp_ctx);
> > > > list_lru_destroy(&pool->list_lru);
> > > > @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> > > > if (entry->length) {
> > > > INIT_LIST_HEAD(&entry->lru);
> > > > zswap_lru_add(&pool->list_lru, entry);
> > > > + atomic_inc(&pool->nr_stored);
> > > > }
> > > > spin_unlock(&tree->lock);
> > > >
> > > > --
> > > > 2.34.1
> > Thanks for the comments/suggestion, Yosry!

2023-09-26 13:02:22

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Mon, Sep 25, 2023 at 5:43 PM Nhat Pham <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 5:00 PM Yosry Ahmed <[email protected]> wrote:
> >
> > On Mon, Sep 25, 2023 at 4:29 PM Nhat Pham <[email protected]> wrote:
> > >
> > > On Mon, Sep 25, 2023 at 1:38 PM Yosry Ahmed <[email protected]> wrote:
> > > >
> > > > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > > > >
> > > > > Currently, we only shrink the zswap pool when the user-defined limit is
> > > > > hit. This means that if we set the limit too high, cold data that are
> > > > > unlikely to be used again will reside in the pool, wasting precious
> > > > > memory. It is hard to predict how much zswap space will be needed ahead
> > > > > of time, as this depends on the workload (specifically, on factors such
> > > > > as memory access patterns and compressibility of the memory pages).
> > > > >
> > > > > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > > > > is initiated when there is memory pressure. The shrinker does not
> > > > > have any parameter that must be tuned by the user, and can be opted in
> > > > > or out on a per-memcg basis.
> > > >
> > > > What's the use case for having per-memcg opt-in/out?
> > > >
> > > > If there is memory pressure, reclaiming swap-backed pages will push
> > > > pages out of zswap anyway, regardless of this patch. With this patch,
> > > > any sort of reclaim can push pages out of zswap. Wouldn't that be
> > > > preferable to reclaiming memory that is currently resident in memory
> > > > (so arguably hotter than the pages in zswap)? Why would this decision
> > > > be different per-memcg?
> > > I'm not quite following your argument here. The point of having this
> > > be done on a per-memcg basis is that we have different workloads
> > > with different memory access patterns (and as a result, different memory
> > > coldness distributions).
> > >
> > > In a workload where there is a lot of cold data, we can really benefit
> > > from reclaiming all of those pages and repurpose the memory reclaimed
> > > (for e.g for filecache).
> > >
> > > On the other hand, in a workload where there aren't a lot of cold data,
> > > reclaiming its zswapped pages will at best do nothing (wasting CPU
> > > cycles on compression/decompression), and at worst hurt performance
> > > (due to increased IO when we need those writtenback pages again).
> > >
> > > Such different workloads could co-exist in the same system, and having
> > > a per-memcg knob allows us to crank on the shrinker only on workloads
> > > where it makes sense.
> >
> > I am not sure we are on the same page here.
> >
> > What you're describing sounds more like proactive reclaim, which we
> > wouldn't invoke unless the workload has cold data anyway.
> >
> > IIUC, outside of that, this shrinker will run when there is memory
> > pressure. This means that we need to free memory anyway, regardless of
> > its absolute coldness. We want to evict the colder pages in the memcg.
> > It seems to me that in ~all cases, evicting pages in zswap will be
> > better than evicting pages in memory, as the pages in memory are
> > arguably hotter (since they weren't reclaimed first). This seems to be
> > something that would be true for all workloads.
> >
> > What am I missing?
>
> Yup, the shrinker is initiated under memory pressure.
> And with it, we can reclaim memory from zswap when
> it's (often) not at max capacity.
>
> The kernel has no knowledge of absolute coldness, only relative
> coldness thanks to LRU. We don't have a global LRU of all possible
> memory pages/objects for a particular memcg either, so we cannot
> compare the coldness of objects from different sources.
>
> The "coldest" pages in zswap LRU could very well be warm enough
> that swapping them out degrades performance, while there are even
> colder memory from other sources (other shrinkers registered for this
> memcg). Alternatively, we can also "evict" uncompressed anonymous
> memory, which will go to the zswap pool. This also saves memory,
> and could potentially be better than zswap reclaim (2 compressed
> pages might be better performance-wise than 1 uncompressed,
> 1 swapped out)
>
> All of this depends on the memory access pattern of the workloads,
> which could differ cgroup-by-cgroup within the same system.
> Having a per-memcg knob is a way for admins to influence this
> decision from userspace, if the admins have knowledge about
> workload memory access patterns.
>
> For example, if we know that there is one particular cgroup that populates
> a bunch of single-use tmpfs pages, then we can target that cgroup
> specifically, while leaving the other cgroups in the system alone.

I think it's useful to break down the discussion here for cgroup
reclaim and global reclaim.

For cgroup reclaim, the kernel knows that the pages in the LRUs are
relatively hotter than the pages in zswap. So I don't see why
userspace would opt specific cgroups out of zswap shrinking. In my
experience, most memory usage comes from LRU pages, so let's ignore
other shrinkers for a second. Yes, in some cases compressing another
page might be better than moving a compressed page to swap, but how
would userspace have the intuition to decide this? It varies not only
based on the workload, but also on the point in time, the
compressibility of the pages, etc.

In other words, how would a system admin choose to opt a cgroup in or out?

For global reclaim, IIUC you are saying that we want to protect some
cgroups under global memory pressure because we know that their "cold"
memory in zswap is hotter than memory elsewhere in the hierarchy,
right?

Isn't this the case for LRU reclaim as well? I would assume memory
protections would be used to tune this, rather than opting a cgroup out
of zswap shrinking completely. Global reclaim can end up reclaiming LRU
pages from that cgroup if protection is not set up correctly anyway.
What do we gain by protecting pages in zswap if hotter pages in the
LRUs are not protected?
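
(For concreteness, the kind of protection I have in mind is the existing
cgroup v2 interface, e.g. something like:

    echo 4G > /sys/fs/cgroup/<cgroup-name>/memory.low

where the 4G value is purely illustrative, rather than a brand-new
per-memcg zswap opt-out.)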

>
> >
> > > >
> > > > >
> > > > > Furthermore, to make it more robust for many workloads and prevent
> > > > > overshrinking (i.e evicting warm pages that might be refaulted into
> > > > > memory), we build in the following heuristics:
> > > > >
> > > > > * Estimate the number of warm pages residing in zswap, and attempt to
> > > > > protect this region of the zswap LRU.
> > > > > * Scale the number of freeable objects by an estimate of the memory
> > > > > saving factor. The better zswap compresses the data, the fewer pages
> > > > > we will evict to swap (as we will otherwise incur IO for relatively
> > > > > small memory saving).
> > > > > * During reclaim, if the shrinker encounters a page that is also being
> > > > > brought into memory, the shrinker will cautiously terminate its
> > > > > shrinking action, as this is a sign that it is touching the warmer
> > > > > region of the zswap LRU.
> > > >
> > > > I don't have an opinion about the reclaim heuristics here, I will let
> > > > reclaim experts chip in.
> > > >
> > > > >
> > > > > On a benchmark that we have run:
> > > >
> > > > Please add more details (as much as possible) about the benchmarks used here.
> > > Sure! I built the kernel in a memory-limited cgroup a couple of times,
> > > then measured the build time.
> > >
> > > To simulate conditions where there is cold, unused data, I
> > > also generated a bunch of data in tmpfs (and never touched it
> > > again).
> >
> > Please include such details in the commit message; there is also
> > another reference below to "another" benchmark.
>
> Will do if/when I send v3.
> The "another" benchmark is just generating even more tmpfs cold data :)

Those benchmarks are heavily synthetic, which is not a showstopper,
but describing them in the commit message helps people reason about
the change.

>
> >
> >
> > > >
> > > > >
> > > > > (without the shrinker)
> > > > > real -- mean: 153.27s, median: 153.199s
> > > > > sys -- mean: 541.652s, median: 541.903s
> > > > > user -- mean: 4384.9673999999995s, median: 4385.471s
> > > > >
> > > > > (with the shrinker)
> > > > > real -- mean: 151.4956s, median: 151.456s
> > > > > sys -- mean: 461.14639999999997s, median: 465.656s
> > > > > user -- mean: 4384.7118s, median: 4384.675s
> > > > >
> > > > > We observed a 14-15% reduction in kernel CPU time, which translated to
> > > > > over 1% reduction in real time.
> > > > >
> > > > > On another benchmark, where there was a lot more cold memory residing in
> > > > > zswap, we observed even more pronounced gains:
> > > > >
> > > > > (without the shrinker)
> > > > > real -- mean: 157.52519999999998s, median: 157.281s
> > > > > sys -- mean: 769.3082s, median: 780.545s
> > > > > user -- mean: 4378.1622s, median: 4378.286s
> > > > >
> > > > > (with the shrinker)
> > > > > real -- mean: 152.9608s, median: 152.845s
> > > > > sys -- mean: 517.4446s, median: 506.749s
> > > > > user -- mean: 4387.694s, median: 4387.935s
> > > > >
> > > > > Here, we saw around 32-35% reduction in kernel CPU time, which
> > > > > translated to 2.8% reduction in real time. These results confirm our
> > > > > hypothesis that the shrinker is more helpful the more cold memory we
> > > > > have.
> > > > >
> > > > > Suggested-by: Johannes Weiner <[email protected]>
> > > > > Signed-off-by: Nhat Pham <[email protected]>
> > > > > ---
> > > > > Documentation/admin-guide/mm/zswap.rst | 12 ++
> > > > > include/linux/memcontrol.h | 1 +
> > > > > include/linux/mmzone.h | 14 ++
> > > > > mm/memcontrol.c | 33 +++++
> > > > > mm/swap_state.c | 31 ++++-
> > > > > mm/zswap.c | 180 ++++++++++++++++++++++++-
> > > > > 6 files changed, 263 insertions(+), 8 deletions(-)
> > > > >
> > > > > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > > > > index 45b98390e938..ae8597a67804 100644
> > > > > --- a/Documentation/admin-guide/mm/zswap.rst
> > > > > +++ b/Documentation/admin-guide/mm/zswap.rst
> > > > > @@ -153,6 +153,18 @@ attribute, e. g.::
> > > > >
> > > > > Setting this parameter to 100 will disable the hysteresis.
> > > > >
> > > > > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > > > > +can be advantageous to proactively write these cold pages to swap and reclaim
> > > > > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > > > > +User can enable it by first switching on the global knob:
> > > > > +
> > > > > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > > > > +
> > > > > +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> > > > > +it on for each cgroup that the shrinker should target:
> > > > > +
> > > > > + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> > > > > +
> > > > > A debugfs interface is provided for various statistic about pool size, number
> > > > > of pages stored, same-value filled pages and various counters for the reasons
> > > > > pages are rejected.
> > > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > > index 05d34b328d9d..f005ea667863 100644
> > > > > --- a/include/linux/memcontrol.h
> > > > > +++ b/include/linux/memcontrol.h
> > > > > @@ -219,6 +219,7 @@ struct mem_cgroup {
> > > > >
> > > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > > unsigned long zswap_max;
> > > > > + atomic_t zswap_shrinker_enabled;
> > > > > #endif
> > > > >
> > > > > unsigned long soft_limit;
> > > > > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > > > > index 4106fbc5b4b3..81f4c5ea3e16 100644
> > > > > --- a/include/linux/mmzone.h
> > > > > +++ b/include/linux/mmzone.h
> > > > > @@ -637,6 +637,20 @@ struct lruvec {
> > > > > #ifdef CONFIG_MEMCG
> > > > > struct pglist_data *pgdat;
> > > > > #endif
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > + /*
> > > > > + * Number of pages in zswap that should be protected from the shrinker.
> > > > > + * This number is an estimate of the following counts:
> > > > > + *
> > > > > + * a) Recent page faults.
> > > > > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > > > > + * as well as recent zswap LRU rotations.
> > > > > + *
> > > > > + * These pages are likely to be warm, and might incur IO if they are written
> > > > > + * to swap.
> > > > > + */
> > > > > + unsigned long nr_zswap_protected;
> > > > > +#endif
> > > >
> > > > Would this be better abstracted in a zswap lruvec struct?
> > > There is just one field, so that sounds like overkill to me.
> > > But if we need to store more data (for smarter heuristics),
> > > that'll be a good idea. I'll keep this in mind. Thanks for the
> > > suggestion, Yosry!
> >
> > (A space between the quoted text and the reply usually helps visually :)
> >
> > It wasn't really about the number of fields, but rather about placing
> > this struct in zswap.h (with the long comment explaining what it's
> > doing) and adding an abstracted struct member here. The comment will
> > live in an appropriate file, further modifications don't need to touch
> > mmzone.h, and struct lruvec is less cluttered for readers that don't
> > care about zswap (and we can avoid the ifdef).
> >
> > Anyway, this is all mostly aesthetic so I don't feel strongly.
> >
> > > >
> > > > > };
> > > > >
> > > > > /* Isolate unmapped pages */
> > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > > index 9f84b3f7b469..1a2c97cf396f 100644
> > > > > --- a/mm/memcontrol.c
> > > > > +++ b/mm/memcontrol.c
> > > > > @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > > > > WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > > memcg->zswap_max = PAGE_COUNTER_MAX;
> > > > > + /* Disable the shrinker by default */
> > > > > + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> > > > > #endif
> > > > > page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> > > > > if (parent) {
> > > > > @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> > > > > return nbytes;
> > > > > }
> > > > >
> > > > > +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> > > > > +{
> > > > > + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > > > > +
> > > > > + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> > > > > + char *buf, size_t nbytes, loff_t off)
> > > > > +{
> > > > > + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> > > > > + int zswap_shrinker_enabled;
> > > > > + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> > > > > +
> > > > > + if (parse_ret)
> > > > > + return parse_ret;
> > > > > +
> > > > > + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> > > > > + return -ERANGE;
> > > > > +
> > > > > + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> > > > > + return nbytes;
> > > > > +}
> > > > > +
> > > > > static struct cftype zswap_files[] = {
> > > > > {
> > > > > .name = "zswap.current",
> > > > > @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> > > > > .seq_show = zswap_max_show,
> > > > > .write = zswap_max_write,
> > > > > },
> > > > > + {
> > > > > + .name = "zswap.shrinker.enabled",
> > > > > + .flags = CFTYPE_NOT_ON_ROOT,
> > > > > + .seq_show = zswap_shrinker_enabled_show,
> > > > > + .write = zswap_shrinker_enabled_write,
> > > > > + },
> > > > > { } /* terminate */
> > > > > };
> > > > > #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> > > > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > > > index 1c826737aacb..788e36a06c34 100644
> > > > > --- a/mm/swap_state.c
> > > > > +++ b/mm/swap_state.c
> > > > > @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> > > > > return pages;
> > > > > }
> > > > >
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > +/*
> > > > > + * Refault is an indication that warmer pages are not resident in memory.
> > > > > + * Increase the size of zswap's protected area.
> > > > > + */
> > > > > +static void inc_nr_protected(struct page *page)
> > > > > +{
> > > > > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > > > > + unsigned long flags;
> > > > > +
> > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > + lruvec->nr_zswap_protected++;
> > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > +}
> > > > > +#endif
> > > > > +
> > > >
> > > > A few questions:
> > > > - Why is this function named in such a generic way?
> > > Perhaps inc_nr_zswap_protected would be better? :)
> >
> > If we use an atomic, the function can go away anyway. See below.
> >
> > > > - Why is this function here instead of in mm/zswap.c?
> > > No particular reason :) It's not being used anywhere else,
> > > so I just put it as a static function here.
> >
> > It is inlined in mm/zswap.c in one place. I personally would have
> > preferred nr_zswap_protected and the helper to be defined in
> > zswap.h/zswap.c, as I mentioned below. Anyway, this function can go
> > away.
> >
> > > > - Why is this protected by the heavily contested lruvec lock instead
> > > > of being an atomic?
> > > nr_zswap_protected can be decayed (see zswap_lru_add), which
> > > I don't think can be implemented with atomics :( It'd be much
> > > cleaner indeed.
> >
> > I think a cmpxchg (or a try_cmpxchg) loop can be used in this case to
> > implement it using an atomic?
> >
> > See https://docs.kernel.org/core-api/wrappers/atomic_t.html.
>
> Ah, I did think about this, but it seemed like overkill at the time.
> But if the lruvec lock is indeed hotly contested, this should help.

I wouldn't say so; we can drop numerous calls to grab/drop the lock,
and drop the helper. A try_cmpxchg loop here would only be a couple of
lines; I suspect it would be more concise than the code now:

old = atomic_inc_return(&lruvec->nr_zswap_protected);
do {
	new = old > lru_size / 4 ? old / 2 : old;
} while (!atomic_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));

>
> >
> > > > > + lruvec->nr_zswap_protected++;
> > > > >
> > > > > + /*
> > > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > > + */
> > > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > > + lruvec->nr_zswap_protected /= 2;
> >
> > >
> > > I'm wary of adding new locks, so I just re-use this existing lock.
> > > But if the lruvec lock is heavily contended (I'm not aware of/familiar
> > > with this issue), then perhaps a new, dedicated lock would help?
> > > >
> > > > > /**
> > > > > * swap_cluster_readahead - swap in pages in hope we need them soon
> > > > > * @entry: swap entry of this memory
> > > > > @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > > > > lru_add_drain(); /* Push any new pages onto the LRU now */
> > > > > skip:
> > > > > /* The page was likely read above, so no need for plugging here */
> > > > > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > + if (page)
> > > > > + inc_nr_protected(page);
> > > > > +#endif
> > > > > + return page;
> > > > > }
> > > > >
> > > > > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > > > > @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > > > > lru_add_drain();
> > > > > skip:
> > > > > /* The page was likely read above, so no need for plugging here */
> > > > > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > > > > - NULL);
> > > > > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > + if (page)
> > > > > + inc_nr_protected(page);
> > > > > +#endif
> > > > > + return page;
> > > > > }
> > > > >
> > > > > /**
> > > > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > > > index 1a469e5d5197..79cb18eeb8bf 100644
> > > > > --- a/mm/zswap.c
> > > > > +++ b/mm/zswap.c
> > > > > @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > > > > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > > > > #define ZSWAP_NR_ZPOOLS 32
> > > > >
> > > > > +/*
> > > > > + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> > > > > + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> > > > > + * the shrinker for each memcg.
> > > > > + */
> > > > > +static bool zswap_shrinker_enabled;
> > > > > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > > +{
> > > > > + return zswap_shrinker_enabled &&
> > > > > + atomic_read(&memcg->zswap_shrinker_enabled);
> > > > > +}
> > > > > +#else
> > > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > > +{
> > > > > + return zswap_shrinker_enabled;
> > > > > +}
> > > > > +#endif
> > > > > +
> > > > > /*********************************
> > > > > * data structures
> > > > > **********************************/
> > > > > @@ -174,6 +194,8 @@ struct zswap_pool {
> > > > > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > > > struct list_lru list_lru;
> > > > > struct mem_cgroup *next_shrink;
> > > > > + struct shrinker *shrinker;
> > > > > + atomic_t nr_stored;
> > > > > };
> > > > >
> > > > > /*
> > > > > @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> > > > > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > > > > }
> > > > >
> > > > > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > > > > +{
> > > > > + u64 pool_size = 0;
> > > > > + int i;
> > > > > +
> > > > > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > > > > +
> > > > > + return pool_size;
> > > > > +}
> > > > > +
> > > > > static void zswap_update_total_size(void)
> > > > > {
> > > > > struct zswap_pool *pool;
> > > > > u64 total = 0;
> > > > > - int i;
> > > > >
> > > > > rcu_read_lock();
> > > > >
> > > > > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > > > > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > > - total += zpool_get_total_size(pool->zpools[i]);
> > > > > + total += get_zswap_pool_size(pool);
> > > > >
> > > > > rcu_read_unlock();
> > > > >
> > > > > @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > > > {
> > > > > struct mem_cgroup *memcg = entry->objcg ?
> > > > > get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > > bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > > > > + unsigned long flags, lru_size;
> > > > > +
> > > > > + if (added) {
> > > > > + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > + lruvec->nr_zswap_protected++;
> > > > >
> > > > > + /*
> > > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > > + */
> > > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > > + lruvec->nr_zswap_protected /= 2;
> > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > + }
> > > > > mem_cgroup_put(memcg);
> > > > > return added;
> > > > > }
> > > > > @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > > > > else {
> > > > > zswap_lru_del(&entry->pool->list_lru, entry);
> > > > > zpool_free(zswap_find_zpool(entry), entry->handle);
> > > > > + atomic_dec(&entry->pool->nr_stored);
> > > > > zswap_pool_put(entry->pool);
> > > > > }
> > > > > zswap_entry_cache_free(entry);
> > > > > @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > > > return entry;
> > > > > }
> > > > >
> > > > > +/*********************************
> > > > > +* shrinker functions
> > > > > +**********************************/
> > > > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > > > + spinlock_t *lock, void *arg);
> > > > > +
> > > > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > > > + struct shrink_control *sc)
> > > > > +{
> > > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > > + unsigned long shrink_ret, nr_zswap_protected, flags,
> > > > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > > > + bool encountered_page_in_swapcache = false;
> > > > > +
> > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > + nr_zswap_protected = lruvec->nr_zswap_protected;
> > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > +
> > > > > + /*
> > > > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > > > + * protected region.
> > > > > + */
> > > > > + if (!is_shrinker_enabled(sc->memcg) ||
> > > > > + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> > > > > + sc->nr_scanned = 0;
> > > > > + return SHRINK_STOP;
> > > > > + }
> > > > > +
> > > > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > > > + &encountered_page_in_swapcache);
> > > > > +
> > > > > + if (encountered_page_in_swapcache)
> > > > > + return SHRINK_STOP;
> > > > > +
> > > > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > > > +}
> > > > > +
> > > > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > > > + struct shrink_control *sc)
> > > > > +{
> > > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > > + struct mem_cgroup *memcg = sc->memcg;
> > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > > > + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> > > > > +
> > > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > > > +#else
> > > > > + /* use pool stats instead of memcg stats */
> > > > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > > > + nr_stored = atomic_read(&pool->nr_stored);
> > > > > +#endif
> > > > > +
> > > > > + if (!is_shrinker_enabled(memcg) || !nr_stored)
> > > > > + return 0;
> > > > > +
> > > > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > > > + /*
> > > > > + * Subtract the lru size by an estimate of the number of pages
> > > > > + * that should be protected.
> > > > > + */
> > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> > > > > + nr_freeable - lruvec->nr_zswap_protected : 0;
> > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > +
> > > > > + /*
> > > > > + * Scale the number of freeable pages by the memory saving factor.
> > > > > + * This ensures that the better zswap compresses memory, the fewer
> > > > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > > > + * relatively small memory saving).
> > > > > + */
> > > > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > > > +}
> > > > > +
> > > > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > > > +{
> > > > > + pool->shrinker =
> > > > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > > > + if (!pool->shrinker)
> > > > > + return;
> > > > > +
> > > > > + pool->shrinker->private_data = pool;
> > > > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > > > + pool->shrinker->batch = 0;
> > > > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > > > +}
> > > > > +
> > > > > /*********************************
> > > > > * per-cpu code
> > > > > **********************************/
> > > > > @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > > spinlock_t *lock, void *arg)
> > > > > {
> > > > > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > > > + bool *encountered_page_in_swapcache = (bool *)arg;
> > > > > struct mem_cgroup *memcg;
> > > > > struct zswap_tree *tree;
> > > > > + struct lruvec *lruvec;
> > > > > pgoff_t swpoffset;
> > > > > enum lru_status ret = LRU_REMOVED_RETRY;
> > > > > int writeback_result;
> > > > > + unsigned long flags;
> > > > >
> > > > > /*
> > > > > * Once the lru lock is dropped, the entry might get freed. The
> > > > > @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > > list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > > > > spin_unlock(lock);
> > > > > - mem_cgroup_put(memcg);
> > > > > ret = LRU_RETRY;
> > > > > +
> > > > > + /*
> > > > > + * Encountering a page already in swap cache is a sign that we are shrinking
> > > > > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > > > > + * shrinker context).
> > > > > + */
> > > > > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > > > > + ret = LRU_SKIP;
> > > > > + *encountered_page_in_swapcache = true;
> > > > > + }
> > > > > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > + /* Increment the protection area to account for the LRU rotation. */
> > > > > + lruvec->nr_zswap_protected++;
> > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > +
> > > > > + mem_cgroup_put(memcg);
> > > > > goto put_unlock;
> > > > > }
> > > > >
> > > > > @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > > &pool->node);
> > > > > if (ret)
> > > > > goto error;
> > > > > +
> > > > > + zswap_alloc_shrinker(pool);
> > > > > + if (!pool->shrinker)
> > > > > + goto error;
> > > > > +
> > > > > pr_debug("using %s compressor\n", pool->tfm_name);
> > > > >
> > > > > /* being the current pool takes 1 ref; this func expects the
> > > > > @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > > kref_init(&pool->kref);
> > > > > INIT_LIST_HEAD(&pool->list);
> > > > > INIT_WORK(&pool->shrink_work, shrink_worker);
> > > > > - list_lru_init_memcg(&pool->list_lru, NULL);
> > > > > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > > > > + goto lru_fail;
> > > > > + shrinker_register(pool->shrinker);
> > > > >
> > > > > zswap_pool_debug("created", pool);
> > > > >
> > > > > return pool;
> > > > >
> > > > > +lru_fail:
> > > > > + list_lru_destroy(&pool->list_lru);
> > > > > + shrinker_free(pool->shrinker);
> > > > > error:
> > > > > if (pool->acomp_ctx)
> > > > > free_percpu(pool->acomp_ctx);
> > > > > @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > > > >
> > > > > zswap_pool_debug("destroying", pool);
> > > > >
> > > > > + shrinker_free(pool->shrinker);
> > > > > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > > > free_percpu(pool->acomp_ctx);
> > > > > list_lru_destroy(&pool->list_lru);
> > > > > @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> > > > > if (entry->length) {
> > > > > INIT_LIST_HEAD(&entry->lru);
> > > > > zswap_lru_add(&pool->list_lru, entry);
> > > > > + atomic_inc(&pool->nr_stored);
> > > > > }
> > > > > spin_unlock(&tree->lock);
> > > > >
> > > > > --
> > > > > 2.34.1
> > > Thanks for the comments/suggestion, Yosry!

2023-09-26 22:28:35

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> +Chris Li
>
> On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> >
> > From: Domenico Cerasuolo <[email protected]>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform worload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/[email protected]/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which an memcg is
> > selected for reclaim in a round-robin-like fashion.
>
> Hey Nhat,
>
> I didn't take a very close look as I am currently swamped, but going
> through the patch I have some comments/questions below.
>
> I am not very familiar with list_lru, but it seems like the existing
> API derives the node and memcg from the list item itself. Seems like
> we can avoid a lot of changes if we allocate struct zswap_entry from
> the same node as the page, and account it to the same memcg. Would
> this be too much of a change or too strong of a restriction? It's a
> slab allocation and we will free memory on that node/memcg right
> after.

My 2c, but I kind of hate that assumption made by list_lru.

We ran into problems with it with the THP shrinker as well. That one
strings up 'struct page', and virt_to_page(page) results in really fun
to debug issues.

IMO it would be less error prone to have memcg and nid as part of the
regular list_lru_add() function signature. And then have an explicit
list_lru_add_obj() that does a documented memcg lookup.
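
Roughly what I have in mind (just a sketch; the exact names and signatures
are my assumption, not something that exists today, mirroring what the
patch's list_lru_add() already does internally):

	/* callers that already know the target sublist pass it explicitly */
	bool list_lru_add(struct list_lru *lru, struct list_head *item,
			  int nid, struct mem_cgroup *memcg);

	/* convenience wrapper that derives nid/memcg from the object itself */
	static inline bool list_lru_add_obj(struct list_lru *lru,
					    struct list_head *item)
	{
		int nid = page_to_nid(virt_to_page(item));
		struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
			mem_cgroup_from_slab_obj(item) : NULL;

		return list_lru_add(lru, item, nid, memcg);
	}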

Because of the overhead, we've been selective about the memory we
charge. I'd hesitate to do it just to work around list_lru.

2023-09-27 00:07:05

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Tue, Sep 26, 2023 at 11:24 AM Johannes Weiner <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> > +Chris Li
> >
> > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > >
> > > From: Domenico Cerasuolo <[email protected]>
> > >
> > > Currently, we only have a single global LRU for zswap. This makes it
> > > impossible to perform worload-specific shrinking - an memcg cannot
> > > determine which pages in the pool it owns, and often ends up writing
> > > pages from other memcgs. This issue has been previously observed in
> > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > >
> > > https://lore.kernel.org/all/[email protected]/T/#u
> > >
> > > This patch fully resolves the issue by replacing the global zswap LRU
> > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > >
> > > a) When a store attempt hits an memcg limit, it now triggers a
> > > synchronous reclaim attempt that, if successful, allows the new
> > > hotter page to be accepted by zswap.
> > > b) If the store attempt instead hits the global zswap limit, it will
> > > trigger an asynchronous reclaim attempt, in which an memcg is
> > > selected for reclaim in a round-robin-like fashion.
> >
> > Hey Nhat,
> >
> > I didn't take a very close look as I am currently swamped, but going
> > through the patch I have some comments/questions below.
> >
> > I am not very familiar with list_lru, but it seems like the existing
> > API derives the node and memcg from the list item itself. Seems like
> > we can avoid a lot of changes if we allocate struct zswap_entry from
> > the same node as the page, and account it to the same memcg. Would
> > this be too much of a change or too strong of a restriction? It's a
> > slab allocation and we will free memory on that node/memcg right
> > after.
>
> My 2c, but I kind of hate that assumption made by list_lru.
>
> We ran into problems with it with the THP shrinker as well. That one
> strings up 'struct page', and virt_to_page(page) results in really fun
> to debug issues.
>
> IMO it would be less error prone to have memcg and nid as part of the
> regular list_lru_add() function signature. And then have an explicit
> list_lru_add_obj() that does a documented memcg lookup.

I also didn't like/understand that assumption, but again I don't have
enough familiarity with the code to judge, and I don't know why it was
done that way. Adding memcg and nid as arguments to the standard
list_lru API makes the pill easier to swallow. In any case, this
should be done in a separate patch to make the diff here more focused
on zswap changes.

>
> Because of the overhead, we've been selective about the memory we
> charge. I'd hesitate to do it just to work around list_lru.

On the other hand I am worried about the continuous growth of struct
zswap_entry. It's now at ~10 words on 64-bit? That's ~2% of the size
of the page getting compressed if I am not mistaken. So I am skeptical
about storing the nid there.
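
(Back-of-the-envelope: ~10 words x 8 bytes is ~80 bytes per entry, and
80 / 4096 is roughly 2% of the page being compressed.)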

A middle ground would be allocating struct zswap_entry on the correct
node without charging it. We don't need to store the nid and we don't
need to charge struct zswap_entry. It doesn't get rid of
virt_to_page() though.
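
Something like the below in zswap_store() is what I am picturing (a sketch
only; it assumes the allocation is open-coded, or that
zswap_entry_cache_alloc() grows a nid argument):

	/*
	 * Allocate the entry on the same node as the page being compressed.
	 * Plain GFP_KERNEL (no __GFP_ACCOUNT) avoids charging it to the
	 * memcg, and the node-local allocation removes the need for a
	 * separate entry->nid field.
	 */
	entry = kmem_cache_alloc_node(zswap_entry_cache, GFP_KERNEL,
				      page_to_nid(page));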

2023-09-27 20:56:05

by Domenico Cerasuolo

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Mon, Sep 25, 2023 at 10:17 PM Yosry Ahmed <[email protected]> wrote:
>
> +Chris Li
>
> On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> >
> > From: Domenico Cerasuolo <[email protected]>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform worload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/[email protected]/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which an memcg is
> > selected for reclaim in a round-robin-like fashion.
>
> Hey Nhat,
>
> I didn't take a very close look as I am currently swamped, but going
> through the patch I have some comments/questions below.
>
> I am not very familiar with list_lru, but it seems like the existing
> API derives the node and memcg from the list item itself. Seems like
> we can avoid a lot of changes if we allocate struct zswap_entry from
> the same node as the page, and account it to the same memcg. Would
> this be too much of a change or too strong of a restriction? It's a
> slab allocation and we will free memory on that node/memcg right
> after.
>
> >
> > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > Co-developed-by: Nhat Pham <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> > include/linux/list_lru.h | 39 +++++++
> > include/linux/memcontrol.h | 5 +
> > include/linux/zswap.h | 9 ++
> > mm/list_lru.c | 46 ++++++--
> > mm/swap_state.c | 19 ++++
> > mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
> > 6 files changed, 287 insertions(+), 52 deletions(-)
> >
> > diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> > index b35968ee9fb5..b517b4e2c7c4 100644
> > --- a/include/linux/list_lru.h
> > +++ b/include/linux/list_lru.h
> > @@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
> > */
> > bool list_lru_add(struct list_lru *lru, struct list_head *item);
> >
> > +/**
> > + * __list_lru_add: add an element to a specific sublist.
> > + * @list_lru: the lru pointer
> > + * @item: the item to be added.
> > + * @memcg: the cgroup of the sublist to add the item to.
> > + * @nid: the node id of the sublist to add the item to.
> > + *
> > + * This function is similar to list_lru_add(), but it allows the caller to
> > + * specify the sublist to which the item should be added. This can be useful
> > + * when the list_head node is not necessarily in the same cgroup and NUMA node
> > + * as the data it represents, such as zswap, where the list_head node could be
> > + * from kswapd and the data from a different cgroup altogether.
> > + *
> > + * Return value: true if the list was updated, false otherwise
> > + */
> > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg);
> > +
> > /**
> > * list_lru_del: delete an element to the lru list
> > * @list_lru: the lru pointer
> > @@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
> > */
> > bool list_lru_del(struct list_lru *lru, struct list_head *item);
> >
> > +/**
> > + * __list_lru_delete: delete an element from a specific sublist.
> > + * @list_lru: the lru pointer
> > + * @item: the item to be deleted.
> > + * @memcg: the cgroup of the sublist to delete the item from.
> > + * @nid: the node id of the sublist to delete the item from.
> > + *
> > + * Return value: true if the list was updated, false otherwise.
> > + */
> > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg);
> > +
> > /**
> > * list_lru_count_one: return the number of objects currently held by @lru
> > * @lru: the lru pointer.
> > @@ -137,6 +167,15 @@ void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
> > void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> > struct list_head *head);
> >
> > +/*
> > + * list_lru_putback: undo list_lru_isolate.
> > + *
> > + * Since we might have dropped the LRU lock in between, recompute list_lru_one
> > + * from the node's id and memcg.
> > + */
> > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg);
> > +
> > typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
> > struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 67b823dfa47d..05d34b328d9d 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > return NULL;
> > }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > + return NULL;
> > +}
> > +
> > static inline bool folio_memcg_kmem(struct folio *folio)
> > {
> > return false;
> > diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> > index 2a60ce39cfde..04f80b64a09b 100644
> > --- a/include/linux/zswap.h
> > +++ b/include/linux/zswap.h
> > @@ -15,6 +15,8 @@ bool zswap_load(struct folio *folio);
> > void zswap_invalidate(int type, pgoff_t offset);
> > void zswap_swapon(int type);
> > void zswap_swapoff(int type);
> > +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry);
> > +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry);
> >
> > #else
> >
> > @@ -32,6 +34,13 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
> > static inline void zswap_swapon(int type) {}
> > static inline void zswap_swapoff(int type) {}
> >
> > +static inline bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> > +{
> > + return false;
> > +}
> > +
> > +static inline void zswap_insert_swpentry_into_lru(swp_entry_t swpentry) {}
> > +
> > #endif
> >
> > #endif /* _LINUX_ZSWAP_H */
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index a05e5bef3b40..37c5c2ef6c0e 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -119,18 +119,26 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
> > bool list_lru_add(struct list_lru *lru, struct list_head *item)
> > {
> > int nid = page_to_nid(virt_to_page(item));
> > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > + mem_cgroup_from_slab_obj(item) : NULL;
> > +
> > + return __list_lru_add(lru, item, nid, memcg);
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_add);
> > +
> > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg)
> > +{
> > struct list_lru_node *nlru = &lru->node[nid];
> > - struct mem_cgroup *memcg;
> > struct list_lru_one *l;
> >
> > spin_lock(&nlru->lock);
> > if (list_empty(item)) {
> > - l = list_lru_from_kmem(lru, nid, item, &memcg);
> > + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > list_add_tail(item, &l->list);
> > /* Set shrinker bit if the first element was added */
> > if (!l->nr_items++)
> > - set_shrinker_bit(memcg, nid,
> > - lru_shrinker_id(lru));
> > + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
>
> Unrelated diff.
>
> > nlru->nr_items++;
> > spin_unlock(&nlru->lock);
> > return true;
> > @@ -138,17 +146,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
> > spin_unlock(&nlru->lock);
> > return false;
> > }
> > -EXPORT_SYMBOL_GPL(list_lru_add);
> > +EXPORT_SYMBOL_GPL(__list_lru_add);
> >
> > bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > {
> > int nid = page_to_nid(virt_to_page(item));
> > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > + mem_cgroup_from_slab_obj(item) : NULL;
> > +
> > + return __list_lru_del(lru, item, nid, memcg);
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_del);
> > +
> > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg)
> > +{
> > struct list_lru_node *nlru = &lru->node[nid];
> > struct list_lru_one *l;
> >
> > spin_lock(&nlru->lock);
> > if (!list_empty(item)) {
> > - l = list_lru_from_kmem(lru, nid, item, NULL);
>
> If we decide to keep the list_lru.c changes, do we have any other
> callers of list_lru_from_kmem()?

I see a commit already in mm-unstable that removes it.

>
> > + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > list_del_init(item);
> > l->nr_items--;
> > nlru->nr_items--;
> > @@ -158,7 +176,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > spin_unlock(&nlru->lock);
> > return false;
> > }
> > -EXPORT_SYMBOL_GPL(list_lru_del);
> > +EXPORT_SYMBOL_GPL(__list_lru_del);
> >
> > void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
> > {
> > @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> > }
> > EXPORT_SYMBOL_GPL(list_lru_isolate_move);
> >
> > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > + struct mem_cgroup *memcg)
> > +{
> > + struct list_lru_one *list =
> > + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > +
> > + if (list_empty(item)) {
> > + list_add_tail(item, &list->list);
> > + if (!list->nr_items++)
> > + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> > + }
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_putback);
> > +
> > unsigned long list_lru_count_one(struct list_lru *lru,
> > int nid, struct mem_cgroup *memcg)
> > {
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index b3b14bd0dd64..1c826737aacb 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -21,6 +21,7 @@
> > #include <linux/swap_slots.h>
> > #include <linux/huge_mm.h>
> > #include <linux/shmem_fs.h>
> > +#include <linux/zswap.h>
> > #include "internal.h"
> > #include "swap.h"
> >
> > @@ -417,6 +418,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct folio *folio;
> > struct page *page;
> > void *shadow = NULL;
> > + bool zswap_lru_removed = false;
> >
> > *new_page_allocated = false;
> > si = get_swap_device(entry);
> > @@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > __folio_set_locked(folio);
> > __folio_set_swapbacked(folio);
> >
> > + /*
> > + * Page fault might itself trigger reclaim, on a zswap object that
> > + * corresponds to the same swap entry. However, as the swap entry has
> > + * previously been pinned, the task will run into an infinite loop trying
> > + * to pin the swap entry again.
> > + *
> > + * To prevent this from happening, we remove it from the zswap
> > + * LRU to prevent its reclamation.
> > + */
> > + zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
> > +
>
> This will add a zswap lookup (and potentially an insertion below) in
> every single swap fault path, right? Doesn't this introduce latency
> regressions? I am also not a fan of having zswap-specific details in
> this path.
>
> When you say "pinned", do you mean the call to swapcache_prepare()
> above (i.e. setting SWAP_HAS_CACHE)? IIUC, the scenario you are
> worried about is that the following call to charge the page may invoke
> reclaim, go into zswap, and try to write back the same page we are
> swapping in here. The writeback call will recurse into
> __read_swap_cache_async(), call swapcache_prepare(), get EEXIST,
> and keep looping indefinitely. Is this correct?
>
> If yes, can we handle this by adding a flag to
> __read_swap_cache_async() that basically says "don't wait for
> SWAP_HAS_CACHE and the swapcache to be consistent, if
> swapcache_prepare() returns EEXIST just fail and return"? The zswap
> writeback path can pass in this flag and skip such pages. We might
> want to modify the writeback code to put back those pages at the end
> of the lru instead of in the beginning.

Thanks for the suggestion; this actually works and seems cleaner, so I think
we'll go with your solution.

>
> > if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
> > goto fail_unlock;
> >
> > @@ -497,6 +510,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > if (shadow)
> > workingset_refault(folio, shadow);
> >
> > + if (zswap_lru_removed)
> > + zswap_insert_swpentry_into_lru(entry);
> > +
> > /* Caller will initiate read into locked folio */
> > folio_add_lru(folio);
> > *new_page_allocated = true;
> > @@ -506,6 +522,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > return page;
> >
> > fail_unlock:
> > + if (zswap_lru_removed)
> > + zswap_insert_swpentry_into_lru(entry);
> > +
> > put_swap_folio(folio, entry);
> > folio_unlock(folio);
> > folio_put(folio);
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 412b1409a0d7..1a469e5d5197 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -34,6 +34,7 @@
> > #include <linux/writeback.h>
> > #include <linux/pagemap.h>
> > #include <linux/workqueue.h>
> > +#include <linux/list_lru.h>
> >
> > #include "swap.h"
> > #include "internal.h"
> > @@ -171,8 +172,8 @@ struct zswap_pool {
> > struct work_struct shrink_work;
> > struct hlist_node node;
> > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > - struct list_head lru;
> > - spinlock_t lru_lock;
> > + struct list_lru list_lru;
> > + struct mem_cgroup *next_shrink;
> > };
> >
> > /*
> > @@ -209,6 +210,7 @@ struct zswap_entry {
> > unsigned long value;
> > };
> > struct obj_cgroup *objcg;
> > + int nid;
> > struct list_head lru;
> > };
>
> Ideally this can be avoided if we can allocate struct zswap_entry on
> the correct node.

We didn't consider allocating the entry on the node without charging it to the
memcg; we'll try it, and if it's doable it would be a good compromise to avoid
adding the node id here.

>
> >
> > @@ -309,6 +311,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > kmem_cache_free(zswap_entry_cache, entry);
> > }
> >
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + struct mem_cgroup *memcg = entry->objcg ?
> > + get_mem_cgroup_from_objcg(entry->objcg) : NULL;
>
> This line is repeated at least 3 times, perhaps add a helper for it?
> get_mem_cgroup_from_zswap()?
>
> > + bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > +
> > + mem_cgroup_put(memcg);
> > + return added;
> > +}
> > +
> > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + struct mem_cgroup *memcg = entry->objcg ?
> > + get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > + bool removed = __list_lru_del(list_lru, &entry->lru, entry->nid, memcg);
> > +
> > + mem_cgroup_put(memcg);
> > + return removed;
> > +}
> > +
> > /*********************************
> > * rbtree functions
> > **********************************/
> > @@ -393,9 +418,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > if (!entry->length)
> > atomic_dec(&zswap_same_filled_pages);
> > else {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_del(&entry->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > zpool_free(zswap_find_zpool(entry), entry->handle);
> > zswap_pool_put(entry->pool);
> > }
> > @@ -629,21 +652,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> > zswap_entry_put(tree, entry);
> > }
> >
> > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg)
> > {
> > - struct zswap_entry *entry;
> > + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > + struct mem_cgroup *memcg;
> > struct zswap_tree *tree;
> > pgoff_t swpoffset;
> > - int ret;
> > + enum lru_status ret = LRU_REMOVED_RETRY;
> > + int writeback_result;
> >
> > - /* Get an entry off the LRU */
> > - spin_lock(&pool->lru_lock);
> > - if (list_empty(&pool->lru)) {
> > - spin_unlock(&pool->lru_lock);
> > - return -EINVAL;
> > - }
> > - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > - list_del_init(&entry->lru);
> > /*
> > * Once the lru lock is dropped, the entry might get freed. The
> > * swpoffset is copied to the stack, and entry isn't deref'd again
> > @@ -651,26 +669,35 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > */
> > swpoffset = swp_offset(entry->swpentry);
> > tree = zswap_trees[swp_type(entry->swpentry)];
> > - spin_unlock(&pool->lru_lock);
> > + list_lru_isolate(l, item);
> > + spin_unlock(lock);
> >
> > /* Check for invalidate() race */
> > spin_lock(&tree->lock);
> > if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > - ret = -EAGAIN;
> > goto unlock;
> > }
> > /* Hold a reference to prevent a free during writeback */
> > zswap_entry_get(entry);
> > spin_unlock(&tree->lock);
> >
> > - ret = zswap_writeback_entry(entry, tree);
> > + writeback_result = zswap_writeback_entry(entry, tree);
> >
> > spin_lock(&tree->lock);
> > - if (ret) {
> > - /* Writeback failed, put entry back on LRU */
> > - spin_lock(&pool->lru_lock);
> > - list_move(&entry->lru, &pool->lru);
> > - spin_unlock(&pool->lru_lock);
> > + if (writeback_result) {
> > + zswap_reject_reclaim_fail++;
> > +
> > + /* Check for invalidate() race */
> > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > + goto put_unlock;
> > +
> > + memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > + spin_lock(lock);
> > + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > + list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > + spin_unlock(lock);
> > + mem_cgroup_put(memcg);
> > + ret = LRU_RETRY;
> > goto put_unlock;
> > }
> >
> > @@ -686,19 +713,63 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > zswap_entry_put(tree, entry);
> > unlock:
> > spin_unlock(&tree->lock);
> > - return ret ? -EAGAIN : 0;
> > + spin_lock(lock);
> > + return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > + int nid, shrunk = 0;
> > + bool is_empty = true;
> > +
> > + pool = zswap_pool_current_get();
> > + if (!pool)
> > + return -EINVAL;
> > +
> > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > + unsigned long nr_to_walk = 1;
> > +
> > + if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> > + NULL, &nr_to_walk))
> > + shrunk++;
> > + if (!nr_to_walk)
>
> nr_to_walk will be 0 if we shrunk 1 page, so it's the same condition
> as the above, right?
>
> is_empty seems to be shrunk == 0 if I understand correctly; it seems like
> there is no need for both.

It is indeed 0 when we shrunk 1 page, but it could also be 0 when the
reclaim failed and the list was not empty: list_lru_walk_one() decrements
nr_to_walk for every entry it visits, whether or not the callback manages
to reclaim it, while its return value only counts the entries actually
reclaimed.

>
> > + is_empty = false;
> > + }
> > + zswap_pool_put(pool);
> > +
> > + if (is_empty)
> > + return -EINVAL;
> > + if (shrunk)
> > + return 0;
> > + return -EAGAIN;
> > }
> >
> > static void shrink_worker(struct work_struct *w)
> > {
> > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > shrink_work);
> > - int ret, failures = 0;
> > + int ret, failures = 0, memcg_selection_failures = 0;
> >
> > + /* global reclaim will select cgroup in a round-robin fashion. */
> > do {
> > - ret = zswap_reclaim_entry(pool);
> > + /* previous next_shrink has become a zombie - restart from the top */
>
> Do we skip zombies because all zswap entries are reparented with the objcg?
>
> If yes, why do we restart from the top instead of just skipping them?
> memcgs after a zombie will not be reachable now IIUC.
>
> Also, why explicitly check for zombies instead of having
> shrink_memcg() just skip memcgs with no zswap entries? The logic is
> slightly complicated.

I think you have a point here; I'm not sure whether the iteration can go on once
we get a zombie. If it can, we'll just skip it.

>
> > + if (pool->next_shrink && !mem_cgroup_online(pool->next_shrink)) {
> > + mem_cgroup_put(pool->next_shrink);
> > + pool->next_shrink = NULL;
> > + }
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > +
> > + /* fails to find a suitable cgroup - give the worker another chance. */
> > + if (!pool->next_shrink) {
> > + if (++memcg_selection_failures == 2)
> > + break;
> > + continue;
> > + }
> > +
> > + ret = shrink_memcg(pool->next_shrink);
> > +
> > if (ret) {
> > - zswap_reject_reclaim_fail++;
> > if (ret != -EAGAIN)
> > break;
> > if (++failures == MAX_RECLAIM_RETRIES)
> > @@ -764,9 +835,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > */
> > kref_init(&pool->kref);
> > INIT_LIST_HEAD(&pool->list);
> > - INIT_LIST_HEAD(&pool->lru);
> > - spin_lock_init(&pool->lru_lock);
> > INIT_WORK(&pool->shrink_work, shrink_worker);
> > + list_lru_init_memcg(&pool->list_lru, NULL);
> >
> > zswap_pool_debug("created", pool);
> >
> > @@ -831,6 +901,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > free_percpu(pool->acomp_ctx);
> > + list_lru_destroy(&pool->list_lru);
> > + if (pool->next_shrink)
> > + mem_cgroup_put(pool->next_shrink);
> > for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > zpool_destroy_pool(pool->zpools[i]);
> > kfree(pool);
> > @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> > struct scatterlist input, output;
> > struct crypto_acomp_ctx *acomp_ctx;
> > struct obj_cgroup *objcg = NULL;
> > + struct mem_cgroup *memcg = NULL;
> > struct zswap_pool *pool;
> > struct zpool *zpool;
> > + int lru_alloc_ret;
> > unsigned int dlen = PAGE_SIZE;
> > unsigned long handle, value;
> > char *buf;
> > @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> > if (!zswap_enabled || !tree)
> > return false;
> >
> > - /*
> > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > - * local cgroup limits.
> > - */
> > objcg = get_obj_cgroup_from_folio(folio);
> > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > - goto reject;
> > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (shrink_memcg(memcg)) {
> > + mem_cgroup_put(memcg);
> > + goto reject;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> >
> > /* reclaim space if needed */
> > if (zswap_is_full()) {
> > @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> > else
> > zswap_pool_reached_full = false;
> > }
> > -
> > + pool = zswap_pool_current_get();
> > + if (!pool) {
> > + ret = -EINVAL;
> > + goto reject;
> > + }
> > /* allocate entry */
> > entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > if (!entry) {
> > @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> > entry->length = 0;
> > entry->value = value;
> > atomic_inc(&zswap_same_filled_pages);
> > + zswap_pool_put(pool);
> > goto insert_entry;
> > }
> > kunmap_atomic(src);
> > @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> > if (!zswap_non_same_filled_pages_enabled)
> > goto freepage;
> >
> > + if (objcg) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > + mem_cgroup_put(memcg);
> > +
> > + if (lru_alloc_ret)
> > + goto freepage;
> > + }
> > +
> > /* if entry is successfully added, it keeps the reference */
> > entry->pool = zswap_pool_current_get();
> > if (!entry->pool)
> > @@ -1325,6 +1415,7 @@ bool zswap_store(struct folio *folio)
> >
> > insert_entry:
> > entry->objcg = objcg;
> > + entry->nid = page_to_nid(page);
> > if (objcg) {
> > obj_cgroup_charge_zswap(objcg, entry->length);
> > /* Account before objcg ref is moved to tree */
> > @@ -1338,9 +1429,8 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_add(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + INIT_LIST_HEAD(&entry->lru);
> > + zswap_lru_add(&pool->list_lru, entry);
> > }
> > spin_unlock(&tree->lock);
> >
> > @@ -1447,9 +1537,8 @@ bool zswap_load(struct folio *folio)
> > zswap_invalidate_entry(tree, entry);
> > folio_mark_dirty(folio);
> > } else if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_move(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > zswap_entry_put(tree, entry);
> > spin_unlock(&tree->lock);
> > @@ -1507,6 +1596,48 @@ void zswap_swapoff(int type)
> > zswap_trees[type] = NULL;
> > }
> >
> > +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> > +{
> > + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> > + struct zswap_entry *entry;
> > + struct zswap_pool *pool;
> > + bool removed = false;
> > +
> > + /* get the zswap entry and prevent it from being freed */
> > + spin_lock(&tree->lock);
> > + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> > + /* skip if the entry is already written back or is a same filled page */
> > + if (!entry || !entry->length)
> > + goto tree_unlock;
> > +
> > + pool = entry->pool;
> > + removed = zswap_lru_del(&pool->list_lru, entry);
> > +
> > +tree_unlock:
> > + spin_unlock(&tree->lock);
> > + return removed;
> > +}
> > +
> > +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry)
> > +{
> > + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> > + struct zswap_entry *entry;
> > + struct zswap_pool *pool;
> > +
> > + /* get the zswap entry and prevent it from being freed */
> > + spin_lock(&tree->lock);
> > + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> > + /* skip if the entry is already written back or is a same filled page */
> > + if (!entry || !entry->length)
> > + goto tree_unlock;
> > +
> > + pool = entry->pool;
> > + zswap_lru_add(&pool->list_lru, entry);
> > +
> > +tree_unlock:
> > + spin_unlock(&tree->lock);
> > +}
> > +
> > /*********************************
> > * debugfs functions
> > **********************************/
> > @@ -1560,7 +1691,7 @@ static int zswap_setup(void)
> > struct zswap_pool *pool;
> > int ret;
> >
> > - zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
> > + zswap_entry_cache = KMEM_CACHE(zswap_entry, SLAB_ACCOUNT);
> > if (!zswap_entry_cache) {
> > pr_err("entry cache creation failed\n");
> > goto cache_fail;
> > --
> > 2.34.1

2023-09-27 21:35:05

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Wed, Sep 27, 2023 at 12:48 PM Domenico Cerasuolo
<[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 10:17 PM Yosry Ahmed <[email protected]> wrote:
> >
> > +Chris Li
> >
> > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > >
> > > From: Domenico Cerasuolo <[email protected]>
> > >
> > > Currently, we only have a single global LRU for zswap. This makes it
> > > impossible to perform worload-specific shrinking - an memcg cannot
> > > determine which pages in the pool it owns, and often ends up writing
> > > pages from other memcgs. This issue has been previously observed in
> > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > >
> > > https://lore.kernel.org/all/[email protected]/T/#u
> > >
> > > This patch fully resolves the issue by replacing the global zswap LRU
> > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > >
> > > a) When a store attempt hits an memcg limit, it now triggers a
> > > synchronous reclaim attempt that, if successful, allows the new
> > > hotter page to be accepted by zswap.
> > > b) If the store attempt instead hits the global zswap limit, it will
> > > trigger an asynchronous reclaim attempt, in which an memcg is
> > > selected for reclaim in a round-robin-like fashion.
> >
> > Hey Nhat,
> >
> > I didn't take a very close look as I am currently swamped, but going
> > through the patch I have some comments/questions below.
> >
> > I am not very familiar with list_lru, but it seems like the existing
> > API derives the node and memcg from the list item itself. Seems like
> > we can avoid a lot of changes if we allocate struct zswap_entry from
> > the same node as the page, and account it to the same memcg. Would
> > this be too much of a change or too strong of a restriction? It's a
> > slab allocation and we will free memory on that node/memcg right
> > after.
> >
> > >
> > > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > > Co-developed-by: Nhat Pham <[email protected]>
> > > Signed-off-by: Nhat Pham <[email protected]>
> > > ---
> > > include/linux/list_lru.h | 39 +++++++
> > > include/linux/memcontrol.h | 5 +
> > > include/linux/zswap.h | 9 ++
> > > mm/list_lru.c | 46 ++++++--
> > > mm/swap_state.c | 19 ++++
> > > mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
> > > 6 files changed, 287 insertions(+), 52 deletions(-)
> > >
> > > diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> > > index b35968ee9fb5..b517b4e2c7c4 100644
> > > --- a/include/linux/list_lru.h
> > > +++ b/include/linux/list_lru.h
> > > @@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
> > > */
> > > bool list_lru_add(struct list_lru *lru, struct list_head *item);
> > >
> > > +/**
> > > + * __list_lru_add: add an element to a specific sublist.
> > > + * @list_lru: the lru pointer
> > > + * @item: the item to be added.
> > > + * @memcg: the cgroup of the sublist to add the item to.
> > > + * @nid: the node id of the sublist to add the item to.
> > > + *
> > > + * This function is similar to list_lru_add(), but it allows the caller to
> > > + * specify the sublist to which the item should be added. This can be useful
> > > + * when the list_head node is not necessarily in the same cgroup and NUMA node
> > > + * as the data it represents, such as zswap, where the list_head node could be
> > > + * from kswapd and the data from a different cgroup altogether.
> > > + *
> > > + * Return value: true if the list was updated, false otherwise
> > > + */
> > > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg);
> > > +
> > > /**
> > > * list_lru_del: delete an element to the lru list
> > > * @list_lru: the lru pointer
> > > @@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
> > > */
> > > bool list_lru_del(struct list_lru *lru, struct list_head *item);
> > >
> > > +/**
> > > + * __list_lru_delete: delete an element from a specific sublist.
> > > + * @list_lru: the lru pointer
> > > + * @item: the item to be deleted.
> > > + * @memcg: the cgroup of the sublist to delete the item from.
> > > + * @nid: the node id of the sublist to delete the item from.
> > > + *
> > > + * Return value: true if the list was updated, false otherwise.
> > > + */
> > > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg);
> > > +
> > > /**
> > > * list_lru_count_one: return the number of objects currently held by @lru
> > > * @lru: the lru pointer.
> > > @@ -137,6 +167,15 @@ void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
> > > void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> > > struct list_head *head);
> > >
> > > +/*
> > > + * list_lru_putback: undo list_lru_isolate.
> > > + *
> > > + * Since we might have dropped the LRU lock in between, recompute list_lru_one
> > > + * from the node's id and memcg.
> > > + */
> > > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg);
> > > +
> > > typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
> > > struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 67b823dfa47d..05d34b328d9d 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > > return NULL;
> > > }
> > >
> > > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > > +{
> > > + return NULL;
> > > +}
> > > +
> > > static inline bool folio_memcg_kmem(struct folio *folio)
> > > {
> > > return false;
> > > diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> > > index 2a60ce39cfde..04f80b64a09b 100644
> > > --- a/include/linux/zswap.h
> > > +++ b/include/linux/zswap.h
> > > @@ -15,6 +15,8 @@ bool zswap_load(struct folio *folio);
> > > void zswap_invalidate(int type, pgoff_t offset);
> > > void zswap_swapon(int type);
> > > void zswap_swapoff(int type);
> > > +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry);
> > > +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry);
> > >
> > > #else
> > >
> > > @@ -32,6 +34,13 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
> > > static inline void zswap_swapon(int type) {}
> > > static inline void zswap_swapoff(int type) {}
> > >
> > > +static inline bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> > > +{
> > > + return false;
> > > +}
> > > +
> > > +static inline void zswap_insert_swpentry_into_lru(swp_entry_t swpentry) {}
> > > +
> > > #endif
> > >
> > > #endif /* _LINUX_ZSWAP_H */
> > > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > > index a05e5bef3b40..37c5c2ef6c0e 100644
> > > --- a/mm/list_lru.c
> > > +++ b/mm/list_lru.c
> > > @@ -119,18 +119,26 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
> > > bool list_lru_add(struct list_lru *lru, struct list_head *item)
> > > {
> > > int nid = page_to_nid(virt_to_page(item));
> > > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > > + mem_cgroup_from_slab_obj(item) : NULL;
> > > +
> > > + return __list_lru_add(lru, item, nid, memcg);
> > > +}
> > > +EXPORT_SYMBOL_GPL(list_lru_add);
> > > +
> > > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg)
> > > +{
> > > struct list_lru_node *nlru = &lru->node[nid];
> > > - struct mem_cgroup *memcg;
> > > struct list_lru_one *l;
> > >
> > > spin_lock(&nlru->lock);
> > > if (list_empty(item)) {
> > > - l = list_lru_from_kmem(lru, nid, item, &memcg);
> > > + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > > list_add_tail(item, &l->list);
> > > /* Set shrinker bit if the first element was added */
> > > if (!l->nr_items++)
> > > - set_shrinker_bit(memcg, nid,
> > > - lru_shrinker_id(lru));
> > > + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> >
> > Unrelated diff.
> >
> > > nlru->nr_items++;
> > > spin_unlock(&nlru->lock);
> > > return true;
> > > @@ -138,17 +146,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
> > > spin_unlock(&nlru->lock);
> > > return false;
> > > }
> > > -EXPORT_SYMBOL_GPL(list_lru_add);
> > > +EXPORT_SYMBOL_GPL(__list_lru_add);
> > >
> > > bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > > {
> > > int nid = page_to_nid(virt_to_page(item));
> > > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > > + mem_cgroup_from_slab_obj(item) : NULL;
> > > +
> > > + return __list_lru_del(lru, item, nid, memcg);
> > > +}
> > > +EXPORT_SYMBOL_GPL(list_lru_del);
> > > +
> > > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg)
> > > +{
> > > struct list_lru_node *nlru = &lru->node[nid];
> > > struct list_lru_one *l;
> > >
> > > spin_lock(&nlru->lock);
> > > if (!list_empty(item)) {
> > > - l = list_lru_from_kmem(lru, nid, item, NULL);
> >
> > If we decide to keep the list_lru.c changes, do we have any other
> > callers of list_lru_from_kmem()?
>
> I see a commit already in mm-unstable that removes it.
>
> >
> > > + l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > > list_del_init(item);
> > > l->nr_items--;
> > > nlru->nr_items--;
> > > @@ -158,7 +176,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > > spin_unlock(&nlru->lock);
> > > return false;
> > > }
> > > -EXPORT_SYMBOL_GPL(list_lru_del);
> > > +EXPORT_SYMBOL_GPL(__list_lru_del);
> > >
> > > void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
> > > {
> > > @@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> > > }
> > > EXPORT_SYMBOL_GPL(list_lru_isolate_move);
> > >
> > > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > > + struct mem_cgroup *memcg)
> > > +{
> > > + struct list_lru_one *list =
> > > + list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > > +
> > > + if (list_empty(item)) {
> > > + list_add_tail(item, &list->list);
> > > + if (!list->nr_items++)
> > > + set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> > > + }
> > > +}
> > > +EXPORT_SYMBOL_GPL(list_lru_putback);
> > > +
> > > unsigned long list_lru_count_one(struct list_lru *lru,
> > > int nid, struct mem_cgroup *memcg)
> > > {
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index b3b14bd0dd64..1c826737aacb 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -21,6 +21,7 @@
> > > #include <linux/swap_slots.h>
> > > #include <linux/huge_mm.h>
> > > #include <linux/shmem_fs.h>
> > > +#include <linux/zswap.h>
> > > #include "internal.h"
> > > #include "swap.h"
> > >
> > > @@ -417,6 +418,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > struct folio *folio;
> > > struct page *page;
> > > void *shadow = NULL;
> > > + bool zswap_lru_removed = false;
> > >
> > > *new_page_allocated = false;
> > > si = get_swap_device(entry);
> > > @@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > __folio_set_locked(folio);
> > > __folio_set_swapbacked(folio);
> > >
> > > + /*
> > > + * Page fault might itself trigger reclaim, on a zswap object that
> > > + * corresponds to the same swap entry. However, as the swap entry has
> > > + * previously been pinned, the task will run into an infinite loop trying
> > > + * to pin the swap entry again.
> > > + *
> > > + * To prevent this from happening, we remove it from the zswap
> > > + * LRU to prevent its reclamation.
> > > + */
> > > + zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
> > > +
> >
> > This will add a zswap lookup (and potentially an insertion below) in
> > every single swap fault path, right?. Doesn't this introduce latency
> > regressions? I am also not a fan of having zswap-specific details in
> > this path.
> >
> > When you say "pinned", do you mean the call to swapcache_prepare()
> > above (i.e. setting SWAP_HAS_CACHE)? IIUC, the scenario you are
> > worried about is that the following call to charge the page may invoke
> > reclaim, go into zswap, and try to writeback the same page we are
> > swapping in here. The writeback call will recurse into
> > __read_swap_cache_async(), call swapcache_prepare() and get EEXIST,
> > and keep looping indefinitely. Is this correct?
> >
> > If yes, can we handle this by adding a flag to
> > __read_swap_cache_async() that basically says "don't wait for
> > SWAP_HAS_CACHE and the swapcache to be consistent, if
> > swapcache_prepare() returns EEXIST just fail and return"? The zswap
> > writeback path can pass in this flag and skip such pages. We might
> > want to modify the writeback code to put back those pages at the end
> > of the lru instead of in the beginning.
>
> Thanks for the suggestion, this actually works and it seems cleaner so I think
> we'll go for your solution.
>
> >
> > > if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
> > > goto fail_unlock;
> > >
> > > @@ -497,6 +510,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > if (shadow)
> > > workingset_refault(folio, shadow);
> > >
> > > + if (zswap_lru_removed)
> > > + zswap_insert_swpentry_into_lru(entry);
> > > +
> > > /* Caller will initiate read into locked folio */
> > > folio_add_lru(folio);
> > > *new_page_allocated = true;
> > > @@ -506,6 +522,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > return page;
> > >
> > > fail_unlock:
> > > + if (zswap_lru_removed)
> > > + zswap_insert_swpentry_into_lru(entry);
> > > +
> > > put_swap_folio(folio, entry);
> > > folio_unlock(folio);
> > > folio_put(folio);
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index 412b1409a0d7..1a469e5d5197 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -34,6 +34,7 @@
> > > #include <linux/writeback.h>
> > > #include <linux/pagemap.h>
> > > #include <linux/workqueue.h>
> > > +#include <linux/list_lru.h>
> > >
> > > #include "swap.h"
> > > #include "internal.h"
> > > @@ -171,8 +172,8 @@ struct zswap_pool {
> > > struct work_struct shrink_work;
> > > struct hlist_node node;
> > > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > - struct list_head lru;
> > > - spinlock_t lru_lock;
> > > + struct list_lru list_lru;
> > > + struct mem_cgroup *next_shrink;
> > > };
> > >
> > > /*
> > > @@ -209,6 +210,7 @@ struct zswap_entry {
> > > unsigned long value;
> > > };
> > > struct obj_cgroup *objcg;
> > > + int nid;
> > > struct list_head lru;
> > > };
> >
> > Ideally this can be avoided if we can allocate struct zswap_entry on
> > the correct node.
>
> We didn't consider allocating the entry on the node without charging it to the
> memcg. We'll try it, and if it's doable it would be a good compromise to avoid
> adding the node id here.
>
> >
> > >
> > > @@ -309,6 +311,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > > kmem_cache_free(zswap_entry_cache, entry);
> > > }
> > >
> > > +/*********************************
> > > +* lru functions
> > > +**********************************/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > + struct mem_cgroup *memcg = entry->objcg ?
> > > + get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> >
> > This line is repeated at least 3 times, perhaps add a helper for it?
> > get_mem_cgroup_from_zswap()?
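
Something as small as this should do (just a sketch; the name above is only a
suggestion), returning a reference that the caller drops with mem_cgroup_put():

static struct mem_cgroup *get_mem_cgroup_from_zswap(struct zswap_entry *entry)
{
	return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
}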
> >
> > > + bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > > +
> > > + mem_cgroup_put(memcg);
> > > + return added;
> > > +}
> > > +
> > > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > + struct mem_cgroup *memcg = entry->objcg ?
> > > + get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > + bool removed = __list_lru_del(list_lru, &entry->lru, entry->nid, memcg);
> > > +
> > > + mem_cgroup_put(memcg);
> > > + return removed;
> > > +}
> > > +
> > > /*********************************
> > > * rbtree functions
> > > **********************************/
> > > @@ -393,9 +418,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > > if (!entry->length)
> > > atomic_dec(&zswap_same_filled_pages);
> > > else {
> > > - spin_lock(&entry->pool->lru_lock);
> > > - list_del(&entry->lru);
> > > - spin_unlock(&entry->pool->lru_lock);
> > > + zswap_lru_del(&entry->pool->list_lru, entry);
> > > zpool_free(zswap_find_zpool(entry), entry->handle);
> > > zswap_pool_put(entry->pool);
> > > }
> > > @@ -629,21 +652,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> > > zswap_entry_put(tree, entry);
> > > }
> > >
> > > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > + spinlock_t *lock, void *arg)
> > > {
> > > - struct zswap_entry *entry;
> > > + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > + struct mem_cgroup *memcg;
> > > struct zswap_tree *tree;
> > > pgoff_t swpoffset;
> > > - int ret;
> > > + enum lru_status ret = LRU_REMOVED_RETRY;
> > > + int writeback_result;
> > >
> > > - /* Get an entry off the LRU */
> > > - spin_lock(&pool->lru_lock);
> > > - if (list_empty(&pool->lru)) {
> > > - spin_unlock(&pool->lru_lock);
> > > - return -EINVAL;
> > > - }
> > > - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > > - list_del_init(&entry->lru);
> > > /*
> > > * Once the lru lock is dropped, the entry might get freed. The
> > > * swpoffset is copied to the stack, and entry isn't deref'd again
> > > @@ -651,26 +669,35 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > */
> > > swpoffset = swp_offset(entry->swpentry);
> > > tree = zswap_trees[swp_type(entry->swpentry)];
> > > - spin_unlock(&pool->lru_lock);
> > > + list_lru_isolate(l, item);
> > > + spin_unlock(lock);
> > >
> > > /* Check for invalidate() race */
> > > spin_lock(&tree->lock);
> > > if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > > - ret = -EAGAIN;
> > > goto unlock;
> > > }
> > > /* Hold a reference to prevent a free during writeback */
> > > zswap_entry_get(entry);
> > > spin_unlock(&tree->lock);
> > >
> > > - ret = zswap_writeback_entry(entry, tree);
> > > + writeback_result = zswap_writeback_entry(entry, tree);
> > >
> > > spin_lock(&tree->lock);
> > > - if (ret) {
> > > - /* Writeback failed, put entry back on LRU */
> > > - spin_lock(&pool->lru_lock);
> > > - list_move(&entry->lru, &pool->lru);
> > > - spin_unlock(&pool->lru_lock);
> > > + if (writeback_result) {
> > > + zswap_reject_reclaim_fail++;
> > > +
> > > + /* Check for invalidate() race */
> > > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > > + goto put_unlock;
> > > +
> > > + memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > + spin_lock(lock);
> > > + /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > + list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > > + spin_unlock(lock);
> > > + mem_cgroup_put(memcg);
> > > + ret = LRU_RETRY;
> > > goto put_unlock;
> > > }
> > >
> > > @@ -686,19 +713,63 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > zswap_entry_put(tree, entry);
> > > unlock:
> > > spin_unlock(&tree->lock);
> > > - return ret ? -EAGAIN : 0;
> > > + spin_lock(lock);
> > > + return ret;
> > > +}
> > > +
> > > +static int shrink_memcg(struct mem_cgroup *memcg)
> > > +{
> > > + struct zswap_pool *pool;
> > > + int nid, shrunk = 0;
> > > + bool is_empty = true;
> > > +
> > > + pool = zswap_pool_current_get();
> > > + if (!pool)
> > > + return -EINVAL;
> > > +
> > > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > > + unsigned long nr_to_walk = 1;
> > > +
> > > + if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> > > + NULL, &nr_to_walk))
> > > + shrunk++;
> > > + if (!nr_to_walk)
> >
> > nr_to_walk will be 0 if we shrunk 1 page, so it's the same condition
> > as the above, right?
> >
> > is_empty seems to be shrunk == 0 if I understand correctly, seems like
> > there is no need for both.
>
> It is indeed 0 when we shrunk 1 page, but it could also be 0 when the
> reclaim failed and the list was not empty.

I see.

I still think the function can be clearer / more aesthetic, but I
don't really know how off the top of my head :)
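
One random idea (a completely untested sketch) would be to fold is_empty into a
"scanned" count, so all three outcomes fall out of two counters:

static int shrink_memcg(struct mem_cgroup *memcg)
{
	struct zswap_pool *pool = zswap_pool_current_get();
	int nid, scanned = 0, shrunk = 0;

	if (!pool)
		return -EINVAL;

	for_each_node_state(nid, N_NORMAL_MEMORY) {
		unsigned long nr_to_walk = 1;

		shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
					    &shrink_memcg_cb, NULL, &nr_to_walk);
		/* nr_to_walk is decremented for every entry the walk visited */
		scanned += 1 - nr_to_walk;
	}
	zswap_pool_put(pool);

	if (!scanned)	/* every per-node sublist was empty */
		return -EINVAL;
	return shrunk ? 0 : -EAGAIN;
}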

>
> >
> > > + is_empty = false;
> > > + }
> > > + zswap_pool_put(pool);
> > > +
> > > + if (is_empty)
> > > + return -EINVAL;
> > > + if (shrunk)
> > > + return 0;
> > > + return -EAGAIN;
> > > }
> > >
> > > static void shrink_worker(struct work_struct *w)
> > > {
> > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > shrink_work);
> > > - int ret, failures = 0;
> > > + int ret, failures = 0, memcg_selection_failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > + /* previous next_shrink has become a zombie - restart from the top */
> >
> > Do we skip zombies because all zswap entries are reparented with the objcg?
> >
> > If yes, why do we restart from the top instead of just skipping them?
> > memcgs after a zombie will not be reachable now IIUC.
> >
> > Also, why explicitly check for zombies instead of having
> > shrink_memcg() just skip memcgs with no zswap entries? The logic is
> > slightly complicated.
>
> I think you have a point here. I'm not sure whether the iteration can go on once
> we get a zombie; if it can, we'll just skip it.

If the point of skipping zombies is that we know their list_lrus are
reparented, so we will end up scanning their parents again, then I
believe we can just skip them. This should be "hidden" within
shrink_memcg() in my opinion, with a clear comment explaining why we
skip zombies.
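
E.g. something like this at the top of shrink_memcg() (sketch only, assuming an
offline memcg has already had its list_lru entries reparented):

	/*
	 * Zombie memcg: its zswap entries were reparented, so there is
	 * nothing to reclaim here; the parent will be scanned on its own
	 * turn of the round-robin walk anyway.
	 */
	if (memcg && !mem_cgroup_online(memcg))
		return -ENOENT;

with the worker treating that return value as "nothing to do, move on" rather
than as a reclaim failure.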

>
> >
> > > + if (pool->next_shrink && !mem_cgroup_online(pool->next_shrink)) {
> > > + mem_cgroup_put(pool->next_shrink);
> > > + pool->next_shrink = NULL;
> > > + }
> > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > > +
> > > + /* fails to find a suitable cgroup - give the worker another chance. */
> > > + if (!pool->next_shrink) {
> > > + if (++memcg_selection_failures == 2)
> > > + break;
> > > + continue;
> > > + }
> > > +
> > > + ret = shrink_memcg(pool->next_shrink);
> > > +
> > > if (ret) {
> > > - zswap_reject_reclaim_fail++;
> > > if (ret != -EAGAIN)
> > > break;
> > > if (++failures == MAX_RECLAIM_RETRIES)
> > > @@ -764,9 +835,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > */
> > > kref_init(&pool->kref);
> > > INIT_LIST_HEAD(&pool->list);
> > > - INIT_LIST_HEAD(&pool->lru);
> > > - spin_lock_init(&pool->lru_lock);
> > > INIT_WORK(&pool->shrink_work, shrink_worker);
> > > + list_lru_init_memcg(&pool->list_lru, NULL);
> > >
> > > zswap_pool_debug("created", pool);
> > >
> > > @@ -831,6 +901,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > >
> > > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > free_percpu(pool->acomp_ctx);
> > > + list_lru_destroy(&pool->list_lru);
> > > + if (pool->next_shrink)
> > > + mem_cgroup_put(pool->next_shrink);
> > > for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > zpool_destroy_pool(pool->zpools[i]);
> > > kfree(pool);
> > > @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> > > struct scatterlist input, output;
> > > struct crypto_acomp_ctx *acomp_ctx;
> > > struct obj_cgroup *objcg = NULL;
> > > + struct mem_cgroup *memcg = NULL;
> > > struct zswap_pool *pool;
> > > struct zpool *zpool;
> > > + int lru_alloc_ret;
> > > unsigned int dlen = PAGE_SIZE;
> > > unsigned long handle, value;
> > > char *buf;
> > > @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> > > if (!zswap_enabled || !tree)
> > > return false;
> > >
> > > - /*
> > > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > - * local cgroup limits.
> > > - */
> > > objcg = get_obj_cgroup_from_folio(folio);
> > > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > > - goto reject;
> > > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> > > + if (shrink_memcg(memcg)) {
> > > + mem_cgroup_put(memcg);
> > > + goto reject;
> > > + }
> > > + mem_cgroup_put(memcg);
> > > + }
> > >
> > > /* reclaim space if needed */
> > > if (zswap_is_full()) {
> > > @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> > > else
> > > zswap_pool_reached_full = false;
> > > }
> > > -
> > > + pool = zswap_pool_current_get();
> > > + if (!pool) {
> > > + ret = -EINVAL;
> > > + goto reject;
> > > + }
> > > /* allocate entry */
> > > entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > > if (!entry) {
> > > @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> > > entry->length = 0;
> > > entry->value = value;
> > > atomic_inc(&zswap_same_filled_pages);
> > > + zswap_pool_put(pool);
> > > goto insert_entry;
> > > }
> > > kunmap_atomic(src);
> > > @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> > > if (!zswap_non_same_filled_pages_enabled)
> > > goto freepage;
> > >
> > > + if (objcg) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> > > + lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > > + mem_cgroup_put(memcg);
> > > +
> > > + if (lru_alloc_ret)
> > > + goto freepage;
> > > + }
> > > +
> > > /* if entry is successfully added, it keeps the reference */
> > > entry->pool = zswap_pool_current_get();
> > > if (!entry->pool)
> > > @@ -1325,6 +1415,7 @@ bool zswap_store(struct folio *folio)
> > >
> > > insert_entry:
> > > entry->objcg = objcg;
> > > + entry->nid = page_to_nid(page);
> > > if (objcg) {
> > > obj_cgroup_charge_zswap(objcg, entry->length);
> > > /* Account before objcg ref is moved to tree */
> > > @@ -1338,9 +1429,8 @@ bool zswap_store(struct folio *folio)
> > > zswap_invalidate_entry(tree, dupentry);
> > > }
> > > if (entry->length) {
> > > - spin_lock(&entry->pool->lru_lock);
> > > - list_add(&entry->lru, &entry->pool->lru);
> > > - spin_unlock(&entry->pool->lru_lock);
> > > + INIT_LIST_HEAD(&entry->lru);
> > > + zswap_lru_add(&pool->list_lru, entry);
> > > }
> > > spin_unlock(&tree->lock);
> > >
> > > @@ -1447,9 +1537,8 @@ bool zswap_load(struct folio *folio)
> > > zswap_invalidate_entry(tree, entry);
> > > folio_mark_dirty(folio);
> > > } else if (entry->length) {
> > > - spin_lock(&entry->pool->lru_lock);
> > > - list_move(&entry->lru, &entry->pool->lru);
> > > - spin_unlock(&entry->pool->lru_lock);
> > > + zswap_lru_del(&entry->pool->list_lru, entry);
> > > + zswap_lru_add(&entry->pool->list_lru, entry);
> > > }
> > > zswap_entry_put(tree, entry);
> > > spin_unlock(&tree->lock);
> > > @@ -1507,6 +1596,48 @@ void zswap_swapoff(int type)
> > > zswap_trees[type] = NULL;
> > > }
> > >
> > > +bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
> > > +{
> > > + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> > > + struct zswap_entry *entry;
> > > + struct zswap_pool *pool;
> > > + bool removed = false;
> > > +
> > > + /* get the zswap entry and prevent it from being freed */
> > > + spin_lock(&tree->lock);
> > > + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> > > + /* skip if the entry is already written back or is a same filled page */
> > > + if (!entry || !entry->length)
> > > + goto tree_unlock;
> > > +
> > > + pool = entry->pool;
> > > + removed = zswap_lru_del(&pool->list_lru, entry);
> > > +
> > > +tree_unlock:
> > > + spin_unlock(&tree->lock);
> > > + return removed;
> > > +}
> > > +
> > > +void zswap_insert_swpentry_into_lru(swp_entry_t swpentry)
> > > +{
> > > + struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
> > > + struct zswap_entry *entry;
> > > + struct zswap_pool *pool;
> > > +
> > > + /* get the zswap entry and prevent it from being freed */
> > > + spin_lock(&tree->lock);
> > > + entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
> > > + /* skip if the entry is already written back or is a same filled page */
> > > + if (!entry || !entry->length)
> > > + goto tree_unlock;
> > > +
> > > + pool = entry->pool;
> > > + zswap_lru_add(&pool->list_lru, entry);
> > > +
> > > +tree_unlock:
> > > + spin_unlock(&tree->lock);
> > > +}
> > > +
> > > /*********************************
> > > * debugfs functions
> > > **********************************/
> > > @@ -1560,7 +1691,7 @@ static int zswap_setup(void)
> > > struct zswap_pool *pool;
> > > int ret;
> > >
> > > - zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
> > > + zswap_entry_cache = KMEM_CACHE(zswap_entry, SLAB_ACCOUNT);
> > > if (!zswap_entry_cache) {
> > > pr_err("entry cache creation failed\n");
> > > goto cache_fail;
> > > --
> > > 2.34.1

2023-09-27 22:55:28

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Wed, Sep 27, 2023 at 09:48:10PM +0200, Domenico Cerasuolo wrote:
> > > @@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > __folio_set_locked(folio);
> > > __folio_set_swapbacked(folio);
> > >
> > > + /*
> > > + * Page fault might itself trigger reclaim, on a zswap object that
> > > + * corresponds to the same swap entry. However, as the swap entry has
> > > + * previously been pinned, the task will run into an infinite loop trying
> > > + * to pin the swap entry again.
> > > + *
> > > + * To prevent this from happening, we remove it from the zswap
> > > + * LRU to prevent its reclamation.
> > > + */
> > > + zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
> > > +
> >
> > This will add a zswap lookup (and potentially an insertion below) in
> > every single swap fault path, right? Doesn't this introduce latency
> > regressions? I am also not a fan of having zswap-specific details in
> > this path.
> >
> > When you say "pinned", do you mean the call to swapcache_prepare()
> > above (i.e. setting SWAP_HAS_CACHE)? IIUC, the scenario you are
> > worried about is that the following call to charge the page may invoke
> > reclaim, go into zswap, and try to writeback the same page we are
> > swapping in here. The writeback call will recurse into
> > __read_swap_cache_async(), call swapcache_prepare() and get EEXIST,
> > and keep looping indefinitely. Is this correct?

Yeah, exactly.

> > If yes, can we handle this by adding a flag to
> > __read_swap_cache_async() that basically says "don't wait for
> > SWAP_HAS_CACHE and the swapcache to be consistent, if
> > swapcache_prepare() returns EEXIST just fail and return"? The zswap
> > writeback path can pass in this flag and skip such pages. We might
> > want to modify the writeback code to put back those pages at the end
> > of the lru instead of in the beginning.
>
> Thanks for the suggestion, this actually works and it seems cleaner so I think
> we'll go for your solution.

That sounds like a great idea.

It should be pointed out that these aren't perfectly
equivalent. Removing the entry from the LRU eliminates the lock
recursion scenario on that very specific entry.

Having writeback skip on -EEXIST will make it skip *any* pages that
are concurrently entering the swapcache, even when it *could* wait for
them to finish.

However, pages that are concurrently read back into memory are a poor
choice for writeback anyway, and likely to be removed from swap soon.

So it happens to work out just fine in this case. I'd just add a
comment that explains the recursion deadlock, as well as the
implication of skipping any busy entry and why that's okay.
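
Roughly something like this in __read_swap_cache_async() (sketch only; the
"skip_if_exists" parameter name is made up):

	err = swapcache_prepare(entry);
	if (err == -EEXIST) {
		/*
		 * Zswap writeback would pass skip_if_exists == true and give
		 * up on entries that are concurrently entering the swapcache,
		 * instead of looping on swapcache_prepare() and deadlocking
		 * against its own pin of the swap entry.
		 */
		if (skip_if_exists)
			goto fail_put_swap;	/* assumed error path */
		/* default behaviour: wait for the swapcache to settle */
		schedule_timeout_uninterruptible(1);
		continue;
	}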

2023-09-27 23:07:16

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Wed, Sep 27, 2023 at 2:02 PM Johannes Weiner <[email protected]> wrote:
>
> On Wed, Sep 27, 2023 at 09:48:10PM +0200, Domenico Cerasuolo wrote:
> > > > @@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > > > __folio_set_locked(folio);
> > > > __folio_set_swapbacked(folio);
> > > >
> > > > + /*
> > > > + * Page fault might itself trigger reclaim, on a zswap object that
> > > > + * corresponds to the same swap entry. However, as the swap entry has
> > > > + * previously been pinned, the task will run into an infinite loop trying
> > > > + * to pin the swap entry again.
> > > > + *
> > > > + * To prevent this from happening, we remove it from the zswap
> > > > + * LRU to prevent its reclamation.
> > > > + */
> > > > + zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
> > > > +
> > >
> > > This will add a zswap lookup (and potentially an insertion below) in
> > > every single swap fault path, right?. Doesn't this introduce latency
> > > regressions? I am also not a fan of having zswap-specific details in
> > > this path.
> > >
> > > When you say "pinned", do you mean the call to swapcache_prepare()
> > > above (i.e. setting SWAP_HAS_CACHE)? IIUC, the scenario you are
> > > worried about is that the following call to charge the page may invoke
> > > reclaim, go into zswap, and try to writeback the same page we are
> > > swapping in here. The writeback call will recurse into
> > > __read_swap_cache_async(), call swapcache_prepare() and get EEXIST,
> > > and keep looping indefinitely. Is this correct?
>
> Yeah, exactly.
>
> > > If yes, can we handle this by adding a flag to
> > > __read_swap_cache_async() that basically says "don't wait for
> > > SWAP_HAS_CACHE and the swapcache to be consistent, if
> > > swapcache_prepare() returns EEXIST just fail and return"? The zswap
> > > writeback path can pass in this flag and skip such pages. We might
> > > want to modify the writeback code to put back those pages at the end
> > > of the lru instead of in the beginning.
> >
> > Thanks for the suggestion, this actually works and it seems cleaner so I think
> > we'll go for your solution.
>
> That sounds like a great idea.
>
> It should be pointed out that these aren't perfectly
> equivalent. Removing the entry from the LRU eliminates the lock
> recursion scenario on that very specific entry.
>
> Having writeback skip on -EEXIST will make it skip *any* pages that
> are concurrently entering the swapcache, even when it *could* wait for
> them to finish.
>
> However, pages that are concurrently read back into memory are a poor
> choice for writeback anyway, and likely to be removed from swap soon.
>
> So it happens to work out just fine in this case. I'd just add a
> comment that explains the recursion deadlock, as well as the
> implication of skipping any busy entry and why that's okay.

Good point. We will indeed skip even if the concurrent insertion into
the swapcache is coming from a different CPU.

As you said, it works out just fine in this case, as the page will be
removed from zswap momentarily anyway. A comment is indeed due.

2023-09-28 03:20:07

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > + is_empty = false;
> > + }
> > + zswap_pool_put(pool);
> > +
> > + if (is_empty)
> > + return -EINVAL;
> > + if (shrunk)
> > + return 0;
> > + return -EAGAIN;
> > }
> >
> > static void shrink_worker(struct work_struct *w)
> > {
> > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > shrink_work);
> > - int ret, failures = 0;
> > + int ret, failures = 0, memcg_selection_failures = 0;
> >
> > + /* global reclaim will select cgroup in a round-robin fashion. */
> > do {
> > - ret = zswap_reclaim_entry(pool);
> > + /* previous next_shrink has become a zombie - restart from the top */
>
> Do we skip zombies because all zswap entries are reparented with the objcg?
>
> If yes, why do we restart from the top instead of just skipping them?
> memcgs after a zombie will not be reachable now IIUC.
>
> Also, why explicitly check for zombies instead of having
> shrink_memcg() just skip memcgs with no zswap entries? The logic is
> slightly complicated.

I think this might actually be a leftover from the initial plan to do
partial walks without holding on to a reference to the last scanned
css. Similar to what mem_cgroup_iter() does with the reclaim cookie - if a
dead cgroup is encountered and we lose the tree position, restart.

But now the code actually holds a reference, so I agree the zombie
thing should just be removed.

2023-09-28 03:57:06

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Tue, Sep 26, 2023 at 11:37:17AM -0700, Yosry Ahmed wrote:
> On Tue, Sep 26, 2023 at 11:24 AM Johannes Weiner <[email protected]> wrote:
> >
> > On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> > > +Chris Li
> > >
> > > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > > >
> > > > From: Domenico Cerasuolo <[email protected]>
> > > >
> > > > Currently, we only have a single global LRU for zswap. This makes it
> > > > impossible to perform worload-specific shrinking - an memcg cannot
> > > > determine which pages in the pool it owns, and often ends up writing
> > > > pages from other memcgs. This issue has been previously observed in
> > > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > > >
> > > > https://lore.kernel.org/all/[email protected]/T/#u
> > > >
> > > > This patch fully resolves the issue by replacing the global zswap LRU
> > > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > > >
> > > > a) When a store attempt hits an memcg limit, it now triggers a
> > > > synchronous reclaim attempt that, if successful, allows the new
> > > > hotter page to be accepted by zswap.
> > > > b) If the store attempt instead hits the global zswap limit, it will
> > > > trigger an asynchronous reclaim attempt, in which an memcg is
> > > > selected for reclaim in a round-robin-like fashion.
> > >
> > > Hey Nhat,
> > >
> > > I didn't take a very close look as I am currently swamped, but going
> > > through the patch I have some comments/questions below.
> > >
> > > I am not very familiar with list_lru, but it seems like the existing
> > > API derives the node and memcg from the list item itself. Seems like
> > > we can avoid a lot of changes if we allocate struct zswap_entry from
> > > the same node as the page, and account it to the same memcg. Would
> > > this be too much of a change or too strong of a restriction? It's a
> > > slab allocation and we will free memory on that node/memcg right
> > > after.
> >
> > My 2c, but I kind of hate that assumption made by list_lru.
> >
> > We ran into problems with it with the THP shrinker as well. That one
> > strings up 'struct page', and virt_to_page(page) results in really fun
> > to debug issues.
> >
> > IMO it would be less error prone to have memcg and nid as part of the
> > regular list_lru_add() function signature. And then have an explicit
> > list_lru_add_obj() that does a documented memcg lookup.
>
> I also didn't like/understand that assumption, but again I don't have
> enough familiarity with the code to judge, and I don't know why it was
> done that way. Adding memcg and nid as arguments to the standard
> list_lru API makes the pill easier to swallow. In any case, this
> should be done in a separate patch to make the diff here more focused
> on zswap changes.

Just like the shrinker, it was initially written for slab objects like
dentries and inodes. Since all the users then passed in charged slab
objects, it ended up hardcoding that assumption in the interface.

Once that assumption no longer holds, IMO we should update the
interface. But agreed, a separate patch makes sense.
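
I.e. something along these lines (a sketch; exact naming to be decided):

/* Explicit variant: the caller states which sublist the item belongs to. */
bool list_lru_add(struct list_lru *lru, struct list_head *item,
		  int nid, struct mem_cgroup *memcg);

/* Convenience wrapper for charged slab objects, with the lookup documented. */
static inline bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
				   mem_cgroup_from_slab_obj(item) : NULL;

	return list_lru_add(lru, item, nid, memcg);
}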

> > Because of the overhead, we've been selective about the memory we
> > charge. I'd hesitate to do it just to work around list_lru.
>
> On the other hand I am worried about the continuous growth of struct
> zswap_entry. It's now at ~10 words on 64-bit? That's ~2% of the size
> of the page getting compressed if I am not mistaken. So I am skeptical
> about storing the nid there.
>
> A middle ground would be allocating struct zswap_entry on the correct
> node without charging it. We don't need to store the nid and we don't
> need to charge struct zswap_entry. It doesn't get rid of
> virt_to_page() though.

I like that idea.

The current list_lru_add() would do the virt_to_page() too, so from
that POV it's a lateral move. But we'd save the charging and the extra
nid member.
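
I.e. on the store side, something like (sketch only, keeping the allocation
uncharged by not using SLAB_ACCOUNT / __GFP_ACCOUNT):

	/*
	 * Place the entry on the same node as the page it describes, so the
	 * virt_to_page()-based nid lookup in list_lru stays correct and no
	 * extra ->nid field is needed.
	 */
	entry = kmem_cache_alloc_node(zswap_entry_cache, GFP_KERNEL,
				      page_to_nid(page));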

2023-09-28 22:58:06

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Wed, Sep 27, 2023 at 1:51 PM Johannes Weiner <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > > + is_empty = false;
> > > + }
> > > + zswap_pool_put(pool);
> > > +
> > > + if (is_empty)
> > > + return -EINVAL;
> > > + if (shrunk)
> > > + return 0;
> > > + return -EAGAIN;
> > > }
> > >
> > > static void shrink_worker(struct work_struct *w)
> > > {
> > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > shrink_work);
> > > - int ret, failures = 0;
> > > + int ret, failures = 0, memcg_selection_failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > + /* previous next_shrink has become a zombie - restart from the top */
> >
> > Do we skip zombies because all zswap entries are reparented with the objcg?
> >
> > If yes, why do we restart from the top instead of just skipping them?
> > memcgs after a zombie will not be reachable now IIUC.
> >
> > Also, why explicitly check for zombies instead of having
> > shrink_memcg() just skip memcgs with no zswap entries? The logic is
> > slightly complicated.
>
> I think this might actually be a leftover from the initial plan to do
> partial walks without holding on to a reference to the last scanned
> css. Similar to what mem_cgroup_iter() does with the reclaim cookie - if a
> dead cgroup is encountered and we lose the tree position, restart.
>
> But now the code actually holds a reference, so I agree the zombie
> thing should just be removed.

It might be nice to keep this check in shrink_memcg() as an optimization and
for fairness. IIUC, if a memcg is zombified the list_lrus will be
reparented, so we will scan the parent's list_lrus again, which can be
unfair to that parent. It can also slow things down if we have a large
number of zombies, as their number is virtually unbounded.

2023-09-29 12:56:35

by Nhat Pham

[permalink] [raw]
Subject: Re: [PATCH v2 2/2] zswap: shrinks zswap pool based on memory pressure

On Mon, Sep 25, 2023 at 6:12 PM Yosry Ahmed <[email protected]> wrote:
>
> On Mon, Sep 25, 2023 at 5:43 PM Nhat Pham <[email protected]> wrote:
> >
> > On Mon, Sep 25, 2023 at 5:00 PM Yosry Ahmed <[email protected]> wrote:
> > >
> > > On Mon, Sep 25, 2023 at 4:29 PM Nhat Pham <[email protected]> wrote:
> > > >
> > > > On Mon, Sep 25, 2023 at 1:38 PM Yosry Ahmed <[email protected]> wrote:
> > > > >
> > > > > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <[email protected]> wrote:
> > > > > >
> > > > > > Currently, we only shrink the zswap pool when the user-defined limit is
> > > > > > hit. This means that if we set the limit too high, cold data that are
> > > > > > unlikely to be used again will reside in the pool, wasting precious
> > > > > > memory. It is hard to predict how much zswap space will be needed ahead
> > > > > > of time, as this depends on the workload (specifically, on factors such
> > > > > > as memory access patterns and compressibility of the memory pages).
> > > > > >
> > > > > > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > > > > > is initiated when there is memory pressure. The shrinker does not
> > > > > > have any parameter that must be tuned by the user, and can be opted in
> > > > > > or out on a per-memcg basis.
> > > > >
> > > > > What's the use case for having per-memcg opt-in/out?
> > > > >
> > > > > If there is memory pressure, reclaiming swap-backed pages will push
> > > > > pages out of zswap anyway, regardless of this patch. With this patch,
> > > > > any sort of reclaim can push pages out of zswap. Wouldn't that be
> > > > > preferable to reclaiming memory that is currently resident in memory
> > > > > (so arguably hotter than the pages in zswap)? Why would this decision
> > > > > be different per-memcg?
> > > > I'm not quite following your argument here. The point of having this
> > > > be done on a per-memcg basis is that we have different workloads
> > > > with different memory access patterns (and as a result, different memory
> > > > coldness distributions).
> > > >
> > > > In a workload where there is a lot of cold data, we can really benefit
> > > > from reclaiming all of those pages and repurpose the memory reclaimed
> > > > (for e.g for filecache).
> > > >
> > > > On the other hand, in a workload where there aren't a lot of cold data,
> > > > reclaiming its zswapped pages will at best do nothing (wasting CPU
> > > > cycles on compression/decompression), and at worst hurt performance
> > > > (due to increased IO when we need those writtenback pages again).
> > > >
> > > > Such different workloads could co-exist in the same system, and having
> > > > a per-memcg knob allows us to crank on the shrinker only on workloads
> > > > where it makes sense.
> > >
> > > I am not sure we are on the same page here.
> > >
> > > What you're describing sounds more like proactive reclaim, which we
> > > wouldn't invoke unless the workload has cold data anyway.
> > >
> > > IIUC, outside of that, this shrinker will run when there is memory
> > > pressure. This means that we need to free memory anyway, regardless of
> > > its absolute coldness. We want to evict the colder pages in the memcg.
> > > It seems to be that in ~all cases, evicting pages in zswap will be
> > > better than evicting pages in memory, as the pages in memory are
> > > arguably hotter (since they weren't reclaimed first). This seems to be
> > > something that would be true for all workloads.
> > >
> > > What am I missing?
> >
> > Yup, the shrinker is initiated under memory pressure.
> > And with it, we can reclaim memory from zswap when
> > it's (often) not at max capacity.
> >
> > The kernel has no knowledge of absolute coldness, only relative
> > coldness thanks to LRU. We don't have a global LRU of all possible
> > memory pages/objects for a particular memcg either, so we cannot
> > compare the coldness of objects from different sources.
> >
> > The "coldest" pages in zswap LRU could very well be warm enough
> > that swapping them out degrades performance, while there are even
> > colder memory from other sources (other shrinkers registered for this
> > memcg). Alternatively, we can also "evict" uncompressed anonymous
> > memory, which will go to the zswap pool. This also saves memory,
> > and could potentially be better than zswap reclaim (2 compressed
> > pages might be better performance-wise than 1 uncompressed,
> > 1 swapped out)
> >
> > All of this depends on the memory access pattern of the workloads,
> > which could differ cgroup-by-cgroup within the same system.
> > Having a per-memcg knob is a way for admins to influence this
> > decision from userspace, if the admins have knowledge about
> > workload memory access patterns.
> >
> > For example, if we know that there is one particular cgroup that populates
> > a bunch of single-use tmpfs pages, then we can target that cgroup
> > specifically, while leaving the other cgroups in the system alone.
>
> I think it's useful to break down the discussion here for cgroup
> reclaim and global reclaim.
>
> For cgroup reclaim, the kernel knows that the pages in the LRUs are
> relatively hotter than the pages in zswap. So I don't see why
> userspace would opt out specific cgroups from zswap shrinking. In my
> experience, most memory usage comes from LRU pages, so let's ignore
> other shrinkers for a second. Yes, in some cases compressing another
> page might be better than moving a compressed page to swap, but how
> would userspace have the intuition to decide this? It varies not only
> based on workload, but also the point in time, the compressibility of
> pages, etc.
>
> In other words, how would a system admin choose to opt a cgroup in or out?
>
> For global reclaim, IIUC you are saying that we want to protect some
> cgroups under global memory pressure because we know that their "cold"
> memory in zswap is hotter than memory elsewhere in the hierarchy,
> right?
>
> Isn't this the case for LRU reclaim as well? I would assume memory
> protections would be used to tune this, not opting a cgroup completely
> from zswap shrinking. Global reclaim can end up reclaiming LRU pages
> from that cgroup if protection is not set up correctly anyway. What do
> we gain by protecting pages in zswap if hotter pages in the LRUs are
> not protected?

Hmm, you have a point. I guess our main motivation is just being
extra safe. It's a new feature, so we want to make sure that
we limit unintentional performance regression for everyone
(not just Meta) as much as possible.

However, as you have pointed out, a per-cgroup knob might not
help any more than a simple global knob. I'll remove it in v3
(and we can revisit this decision later on if it turns out to be
necessary after all).

>
> >
> > >
> > > > >
> > > > > >
> > > > > > Furthermore, to make it more robust for many workloads and prevent
> > > > > > overshrinking (i.e evicting warm pages that might be refaulted into
> > > > > > memory), we build in the following heuristics:
> > > > > >
> > > > > > * Estimate the number of warm pages residing in zswap, and attempt to
> > > > > > protect this region of the zswap LRU.
> > > > > > * Scale the number of freeable objects by an estimate of the memory
> > > > > > saving factor. The better zswap compresses the data, the fewer pages
> > > > > > we will evict to swap (as we will otherwise incur IO for relatively
> > > > > > small memory saving).
> > > > > > * During reclaim, if the shrinker encounters a page that is also being
> > > > > > brought into memory, the shrinker will cautiously terminate its
> > > > > > shrinking action, as this is a sign that it is touching the warmer
> > > > > > region of the zswap LRU.
> > > > >
> > > > > I don't have an opinion about the reclaim heuristics here, I will let
> > > > > reclaim experts chip in.
> > > > >
> > > > > >
> > > > > > On a benchmark that we have run:
> > > > >
> > > > > Please add more details (as much as possible) about the benchmarks used here.
> > > > Sure! I built the kernel in a memory-limited cgroup a couple times,
> > > > then measured the build time.
> > > >
> > > > To simulate conditions where there are cold, unused data, I
> > > > also generated a bunch of data in tmpfs (and never touch them
> > > > again).
> > >
> > > Please include such details in the commit message, there is also
> > > another reference below to "another" benchmark.
> >
> > Will do if/when I send v3.
> > The "another" benchmark is just generating even more tmpfs cold data :)
>
> Those benchmarks are heavily synthetic, which is not a showstopper,
> but describing them in the commit message helps people reason about
> the change.
>
> >
> > >
> > >
> > > > >
> > > > > >
> > > > > > (without the shrinker)
> > > > > > real -- mean: 153.27s, median: 153.199s
> > > > > > sys -- mean: 541.652s, median: 541.903s
> > > > > > user -- mean: 4384.9673999999995s, median: 4385.471s
> > > > > >
> > > > > > (with the shrinker)
> > > > > > real -- mean: 151.4956s, median: 151.456s
> > > > > > sys -- mean: 461.14639999999997s, median: 465.656s
> > > > > > user -- mean: 4384.7118s, median: 4384.675s
> > > > > >
> > > > > > We observed a 14-15% reduction in kernel CPU time, which translated to
> > > > > > over 1% reduction in real time.
> > > > > >
> > > > > > On another benchmark, where there was a lot more cold memory residing in
> > > > > > zswap, we observed even more pronounced gains:
> > > > > >
> > > > > > (without the shrinker)
> > > > > > real -- mean: 157.52519999999998s, median: 157.281s
> > > > > > sys -- mean: 769.3082s, median: 780.545s
> > > > > > user -- mean: 4378.1622s, median: 4378.286s
> > > > > >
> > > > > > (with the shrinker)
> > > > > > real -- mean: 152.9608s, median: 152.845s
> > > > > > sys -- mean: 517.4446s, median: 506.749s
> > > > > > user -- mean: 4387.694s, median: 4387.935s
> > > > > >
> > > > > > Here, we saw around 32-35% reduction in kernel CPU time, which
> > > > > > translated to 2.8% reduction in real time. These results confirm our
> > > > > > hypothesis that the shrinker is more helpful the more cold memory we
> > > > > > have.
> > > > > >
> > > > > > Suggested-by: Johannes Weiner <[email protected]>
> > > > > > Signed-off-by: Nhat Pham <[email protected]>
> > > > > > ---
> > > > > > Documentation/admin-guide/mm/zswap.rst | 12 ++
> > > > > > include/linux/memcontrol.h | 1 +
> > > > > > include/linux/mmzone.h | 14 ++
> > > > > > mm/memcontrol.c | 33 +++++
> > > > > > mm/swap_state.c | 31 ++++-
> > > > > > mm/zswap.c | 180 ++++++++++++++++++++++++-
> > > > > > 6 files changed, 263 insertions(+), 8 deletions(-)
> > > > > >
> > > > > > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > > > > > index 45b98390e938..ae8597a67804 100644
> > > > > > --- a/Documentation/admin-guide/mm/zswap.rst
> > > > > > +++ b/Documentation/admin-guide/mm/zswap.rst
> > > > > > @@ -153,6 +153,18 @@ attribute, e. g.::
> > > > > >
> > > > > > Setting this parameter to 100 will disable the hysteresis.
> > > > > >
> > > > > > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > > > > > +can be advantageous to proactively write these cold pages to swap and reclaim
> > > > > > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > > > > > +User can enable it by first switching on the global knob:
> > > > > > +
> > > > > > + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > > > > > +
> > > > > > +When the kernel is compiled with CONFIG_MEMCG_KMEM, user needs to further turn
> > > > > > +it on for each cgroup that the shrinker should target:
> > > > > > +
> > > > > > + echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
> > > > > > +
> > > > > > A debugfs interface is provided for various statistic about pool size, number
> > > > > > of pages stored, same-value filled pages and various counters for the reasons
> > > > > > pages are rejected.
> > > > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > > > index 05d34b328d9d..f005ea667863 100644
> > > > > > --- a/include/linux/memcontrol.h
> > > > > > +++ b/include/linux/memcontrol.h
> > > > > > @@ -219,6 +219,7 @@ struct mem_cgroup {
> > > > > >
> > > > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > > > unsigned long zswap_max;
> > > > > > + atomic_t zswap_shrinker_enabled;
> > > > > > #endif
> > > > > >
> > > > > > unsigned long soft_limit;
> > > > > > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > > > > > index 4106fbc5b4b3..81f4c5ea3e16 100644
> > > > > > --- a/include/linux/mmzone.h
> > > > > > +++ b/include/linux/mmzone.h
> > > > > > @@ -637,6 +637,20 @@ struct lruvec {
> > > > > > #ifdef CONFIG_MEMCG
> > > > > > struct pglist_data *pgdat;
> > > > > > #endif
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > + /*
> > > > > > + * Number of pages in zswap that should be protected from the shrinker.
> > > > > > + * This number is an estimate of the following counts:
> > > > > > + *
> > > > > > + * a) Recent page faults.
> > > > > > + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > > > > > + * as well as recent zswap LRU rotations.
> > > > > > + *
> > > > > > + * These pages are likely to be warm, and might incur IO if they are written
> > > > > > + * to swap.
> > > > > > + */
> > > > > > + unsigned long nr_zswap_protected;
> > > > > > +#endif
> > > > >
> > > > > Would this be better abstracted in a zswap lruvec struct?
> > > > There is just one field, so that sounds like overkill to me.
> > > > But if we need to store more data (for smarter heuristics),
> > > > that'll be a good idea. I'll keep this in mind. Thanks for the
> > > > suggestion, Yosry!
> > >
> > > (A space between the quoted text and the reply usually helps visually :)
> > >
> > > It wasn't really about the number of fields, but rather about placing
> > > this struct in zswap.h (with the long comment explaining what it's doing)
> > > and adding an abstracted struct member here. The comment will live in
> > > an appropriate file, further modifications don't need to touch
> > > mmzone.h, and struct lruvec is less cluttered for readers that don't
> > > care about zswap (and we can avoid the ifdef).
> > >
> > > Anyway, this is all mostly aesthetic so I don't feel strongly.
> > >
> > > > >
> > > > > > };
> > > > > >
> > > > > > /* Isolate unmapped pages */
> > > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > > > index 9f84b3f7b469..1a2c97cf396f 100644
> > > > > > --- a/mm/memcontrol.c
> > > > > > +++ b/mm/memcontrol.c
> > > > > > @@ -5352,6 +5352,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
> > > > > > WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
> > > > > > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > > > > > memcg->zswap_max = PAGE_COUNTER_MAX;
> > > > > > + /* Disable the shrinker by default */
> > > > > > + atomic_set(&memcg->zswap_shrinker_enabled, 0);
> > > > > > #endif
> > > > > > page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
> > > > > > if (parent) {
> > > > > > @@ -7877,6 +7879,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
> > > > > > return nbytes;
> > > > > > }
> > > > > >
> > > > > > +static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
> > > > > > +{
> > > > > > + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > > > > > +
> > > > > > + seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
> > > > > > + return 0;
> > > > > > +}
> > > > > > +
> > > > > > +static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
> > > > > > + char *buf, size_t nbytes, loff_t off)
> > > > > > +{
> > > > > > + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> > > > > > + int zswap_shrinker_enabled;
> > > > > > + ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
> > > > > > +
> > > > > > + if (parse_ret)
> > > > > > + return parse_ret;
> > > > > > +
> > > > > > + if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
> > > > > > + return -ERANGE;
> > > > > > +
> > > > > > + atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
> > > > > > + return nbytes;
> > > > > > +}
> > > > > > +
> > > > > > static struct cftype zswap_files[] = {
> > > > > > {
> > > > > > .name = "zswap.current",
> > > > > > @@ -7889,6 +7916,12 @@ static struct cftype zswap_files[] = {
> > > > > > .seq_show = zswap_max_show,
> > > > > > .write = zswap_max_write,
> > > > > > },
> > > > > > + {
> > > > > > + .name = "zswap.shrinker.enabled",
> > > > > > + .flags = CFTYPE_NOT_ON_ROOT,
> > > > > > + .seq_show = zswap_shrinker_enabled_show,
> > > > > > + .write = zswap_shrinker_enabled_write,
> > > > > > + },
> > > > > > { } /* terminate */
> > > > > > };
> > > > > > #endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
> > > > > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > > > > index 1c826737aacb..788e36a06c34 100644
> > > > > > --- a/mm/swap_state.c
> > > > > > +++ b/mm/swap_state.c
> > > > > > @@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> > > > > > return pages;
> > > > > > }
> > > > > >
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > +/*
> > > > > > + * Refault is an indication that warmer pages are not resident in memory.
> > > > > > + * Increase the size of zswap's protected area.
> > > > > > + */
> > > > > > +static void inc_nr_protected(struct page *page)
> > > > > > +{
> > > > > > + struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > > > > > + unsigned long flags;
> > > > > > +
> > > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > > + lruvec->nr_zswap_protected++;
> > > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > > +}
> > > > > > +#endif
> > > > > > +
> > > > >
> > > > > A few questions:
> > > > > - Why is this function named in such a generic way?
> > > > Perhaps inc_nr_zswap_protected would be better? :)
> > >
> > > If we use an atomic, the function can go away anyway. See below.
> > >
> > > > > - Why is this function here instead of in mm/zswap.c?
> > > > No particular reason :) It's not being used anywhere else,
> > > > so I just put it as a static function here.
> > >
> > > It is inline in mm/zswap.c in one place. I personally would have
> > > preferred nr_zswap_protected and the helper to be defined in
> > > zswap.h/zswap.c as I mentioned below. Anyway, this function can go
> > > away.
> > >
> > > > > - Why is this protected by the heavily contested lruvec lock instead
> > > > > of being an atomic?
> > > > nr_zswap_protected can be decayed (see zswap_lru_add), which
> > > > I don't think can be implemented with atomics :( It'd be much
> > > > cleaner indeed.
> > >
> > > I think a cmpxchg (or a try_cmpxchg) loop can be used in this case to
> > > implement it using an atomic?
> > >
> > > See https://docs.kernel.org/core-api/wrappers/atomic_t.html.
> >
> > Ah, I did think about this, but it seemed like overkill at the time.
> > But if the lruvec lock is indeed hotly contended, this should help.
>
> I wouldn't say so; we can drop numerous calls to grab/drop the lock,
> as well as the helper. A try_cmpxchg loop here would only be a couple of
> lines; I suspect it would be more concise than the current code:
>
> old = atomic_inc_return(&lruvec->nr_zswap_protected);
> do {
> if (old > lru_size / 4)
> new = old / 2;
> } while (atomic_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
>

Yeah this definitely seems quite clean. Lemme give this a try.
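
For reference, a lockless version along the lines suggested above might look
like the sketch below (illustrative only, not the posted patch; it makes sure
"new" is always initialized and only retries when the cmpxchg fails, and it
assumes nr_zswap_protected becomes an atomic_long_t):

	static void zswap_inc_protected(atomic_long_t *nr_protected,
					unsigned long lru_size)
	{
		long old = atomic_long_inc_return(nr_protected);
		long new;

		do {
			/* Only decay once we exceed a quarter of the LRU size. */
			if (old <= lru_size / 4)
				break;
			new = old / 2;
		} while (!atomic_long_try_cmpxchg(nr_protected, &old, new));
	}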

> >
> > >
> > > > > > + lruvec->nr_zswap_protected++;
> > > > > >
> > > > > > + /*
> > > > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > > > + */
> > > > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > > > + lruvec->nr_zswap_protected /= 2;
> > >
> > > >
> > > > I'm wary of adding new locks, so I just reused this existing lock.
> > > > But if the lruvec lock is heavily contended (I'm not aware of/familiar
> > > > with this issue), then perhaps a new, dedicated lock would help?
> > > > >
> > > > > > /**
> > > > > > * swap_cluster_readahead - swap in pages in hope we need them soon
> > > > > > * @entry: swap entry of this memory
> > > > > > @@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > > > > > lru_add_drain(); /* Push any new pages onto the LRU now */
> > > > > > skip:
> > > > > > /* The page was likely read above, so no need for plugging here */
> > > > > > - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > > > + page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > + if (page)
> > > > > > + inc_nr_protected(page);
> > > > > > +#endif
> > > > > > + return page;
> > > > > > }
> > > > > >
> > > > > > int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > > > > > @@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > > > > > lru_add_drain();
> > > > > > skip:
> > > > > > /* The page was likely read above, so no need for plugging here */
> > > > > > - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > > > > > - NULL);
> > > > > > + page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > + if (page)
> > > > > > + inc_nr_protected(page);
> > > > > > +#endif
> > > > > > + return page;
> > > > > > }
> > > > > >
> > > > > > /**
> > > > > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > > > > index 1a469e5d5197..79cb18eeb8bf 100644
> > > > > > --- a/mm/zswap.c
> > > > > > +++ b/mm/zswap.c
> > > > > > @@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> > > > > > /* Number of zpools in zswap_pool (empirically determined for scalability) */
> > > > > > #define ZSWAP_NR_ZPOOLS 32
> > > > > >
> > > > > > +/*
> > > > > > + * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
> > > > > > + * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
> > > > > > + * the shrinker for each memcg.
> > > > > > + */
> > > > > > +static bool zswap_shrinker_enabled;
> > > > > > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > > > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > > > +{
> > > > > > + return zswap_shrinker_enabled &&
> > > > > > + atomic_read(&memcg->zswap_shrinker_enabled);
> > > > > > +}
> > > > > > +#else
> > > > > > +static bool is_shrinker_enabled(struct mem_cgroup *memcg)
> > > > > > +{
> > > > > > + return zswap_shrinker_enabled;
> > > > > > +}
> > > > > > +#endif
> > > > > > +
> > > > > > /*********************************
> > > > > > * data structures
> > > > > > **********************************/
> > > > > > @@ -174,6 +194,8 @@ struct zswap_pool {
> > > > > > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > > > > struct list_lru list_lru;
> > > > > > struct mem_cgroup *next_shrink;
> > > > > > + struct shrinker *shrinker;
> > > > > > + atomic_t nr_stored;
> > > > > > };
> > > > > >
> > > > > > /*
> > > > > > @@ -273,17 +295,26 @@ static bool zswap_can_accept(void)
> > > > > > DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> > > > > > }
> > > > > >
> > > > > > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > > > > > +{
> > > > > > + u64 pool_size = 0;
> > > > > > + int i;
> > > > > > +
> > > > > > + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > > > + pool_size += zpool_get_total_size(pool->zpools[i]);
> > > > > > +
> > > > > > + return pool_size;
> > > > > > +}
> > > > > > +
> > > > > > static void zswap_update_total_size(void)
> > > > > > {
> > > > > > struct zswap_pool *pool;
> > > > > > u64 total = 0;
> > > > > > - int i;
> > > > > >
> > > > > > rcu_read_lock();
> > > > > >
> > > > > > list_for_each_entry_rcu(pool, &zswap_pools, list)
> > > > > > - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > > > > - total += zpool_get_total_size(pool->zpools[i]);
> > > > > > + total += get_zswap_pool_size(pool);
> > > > > >
> > > > > > rcu_read_unlock();
> > > > > >
> > > > > > @@ -318,8 +349,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > > > > {
> > > > > > struct mem_cgroup *memcg = entry->objcg ?
> > > > > > get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > > > bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
> > > > > > + unsigned long flags, lru_size;
> > > > > > +
> > > > > > + if (added) {
> > > > > > + lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
> > > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > > + lruvec->nr_zswap_protected++;
> > > > > >
> > > > > > + /*
> > > > > > + * Decay to avoid overflow and adapt to changing workloads.
> > > > > > + * This is based on LRU reclaim cost decaying heuristics.
> > > > > > + */
> > > > > > + if (lruvec->nr_zswap_protected > lru_size / 4)
> > > > > > + lruvec->nr_zswap_protected /= 2;
> > > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > > + }
> > > > > > mem_cgroup_put(memcg);
> > > > > > return added;
> > > > > > }
> > > > > > @@ -420,6 +466,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > > > > > else {
> > > > > > zswap_lru_del(&entry->pool->list_lru, entry);
> > > > > > zpool_free(zswap_find_zpool(entry), entry->handle);
> > > > > > + atomic_dec(&entry->pool->nr_stored);
> > > > > > zswap_pool_put(entry->pool);
> > > > > > }
> > > > > > zswap_entry_cache_free(entry);
> > > > > > @@ -461,6 +508,98 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > > > > return entry;
> > > > > > }
> > > > > >
> > > > > > +/*********************************
> > > > > > +* shrinker functions
> > > > > > +**********************************/
> > > > > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > > > > + spinlock_t *lock, void *arg);
> > > > > > +
> > > > > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > > > > + struct shrink_control *sc)
> > > > > > +{
> > > > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > > > + unsigned long shrink_ret, nr_zswap_protected, flags,
> > > > > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > > > > + bool encountered_page_in_swapcache = false;
> > > > > > +
> > > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > > + nr_zswap_protected = lruvec->nr_zswap_protected;
> > > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > > +
> > > > > > + /*
> > > > > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > > > > + * protected region.
> > > > > > + */
> > > > > > + if (!is_shrinker_enabled(sc->memcg) ||
> > > > > > + nr_zswap_protected >= lru_size - sc->nr_to_scan) {
> > > > > > + sc->nr_scanned = 0;
> > > > > > + return SHRINK_STOP;
> > > > > > + }
> > > > > > +
> > > > > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > > > > + &encountered_page_in_swapcache);
> > > > > > +
> > > > > > + if (encountered_page_in_swapcache)
> > > > > > + return SHRINK_STOP;
> > > > > > +
> > > > > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > > > > +}
> > > > > > +
> > > > > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > > > > + struct shrink_control *sc)
> > > > > > +{
> > > > > > + struct zswap_pool *pool = shrinker->private_data;
> > > > > > + struct mem_cgroup *memcg = sc->memcg;
> > > > > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > > > > + unsigned long nr_backing, nr_stored, nr_freeable, flags;
> > > > > > +
> > > > > > +#ifdef CONFIG_MEMCG_KMEM
> > > > > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > > > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > > > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > > > > +#else
> > > > > > + /* use pool stats instead of memcg stats */
> > > > > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > > > > + nr_stored = atomic_read(&pool->nr_stored);
> > > > > > +#endif
> > > > > > +
> > > > > > + if (!is_shrinker_enabled(memcg) || !nr_stored)
> > > > > > + return 0;
> > > > > > +
> > > > > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > > > > + /*
> > > > > > + * Subtract the lru size by an estimate of the number of pages
> > > > > > + * that should be protected.
> > > > > > + */
> > > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > > + nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
> > > > > > + nr_freeable - lruvec->nr_zswap_protected : 0;
> > > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > > +
> > > > > > + /*
> > > > > > + * Scale the number of freeable pages by the memory saving factor.
> > > > > > + * This ensures that the better zswap compresses memory, the fewer
> > > > > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > > > > + * relatively small memory saving).
> > > > > > + */
> > > > > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > > > > +}
> > > > > > +
> > > > > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > > > > +{
> > > > > > + pool->shrinker =
> > > > > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > > > > + if (!pool->shrinker)
> > > > > > + return;
> > > > > > +
> > > > > > + pool->shrinker->private_data = pool;
> > > > > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > > > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > > > > + pool->shrinker->batch = 0;
> > > > > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > > > > +}
> > > > > > +
> > > > > > /*********************************
> > > > > > * per-cpu code
> > > > > > **********************************/
> > > > > > @@ -656,11 +795,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > > > spinlock_t *lock, void *arg)
> > > > > > {
> > > > > > struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > > > > + bool *encountered_page_in_swapcache = (bool *)arg;
> > > > > > struct mem_cgroup *memcg;
> > > > > > struct zswap_tree *tree;
> > > > > > + struct lruvec *lruvec;
> > > > > > pgoff_t swpoffset;
> > > > > > enum lru_status ret = LRU_REMOVED_RETRY;
> > > > > > int writeback_result;
> > > > > > + unsigned long flags;
> > > > > >
> > > > > > /*
> > > > > > * Once the lru lock is dropped, the entry might get freed. The
> > > > > > @@ -696,8 +838,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > > > > > /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > > > list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
> > > > > > spin_unlock(lock);
> > > > > > - mem_cgroup_put(memcg);
> > > > > > ret = LRU_RETRY;
> > > > > > +
> > > > > > + /*
> > > > > > + * Encountering a page already in swap cache is a sign that we are shrinking
> > > > > > + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > > > > > + * shrinker context).
> > > > > > + */
> > > > > > + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > > > > > + ret = LRU_SKIP;
> > > > > > + *encountered_page_in_swapcache = true;
> > > > > > + }
> > > > > > + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
> > > > > > + spin_lock_irqsave(&lruvec->lru_lock, flags);
> > > > > > + /* Increment the protection area to account for the LRU rotation. */
> > > > > > + lruvec->nr_zswap_protected++;
> > > > > > + spin_unlock_irqrestore(&lruvec->lru_lock, flags);
> > > > > > +
> > > > > > + mem_cgroup_put(memcg);
> > > > > > goto put_unlock;
> > > > > > }
> > > > > >
> > > > > > @@ -828,6 +986,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > > > &pool->node);
> > > > > > if (ret)
> > > > > > goto error;
> > > > > > +
> > > > > > + zswap_alloc_shrinker(pool);
> > > > > > + if (!pool->shrinker)
> > > > > > + goto error;
> > > > > > +
> > > > > > pr_debug("using %s compressor\n", pool->tfm_name);
> > > > > >
> > > > > > /* being the current pool takes 1 ref; this func expects the
> > > > > > @@ -836,12 +999,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > > > > kref_init(&pool->kref);
> > > > > > INIT_LIST_HEAD(&pool->list);
> > > > > > INIT_WORK(&pool->shrink_work, shrink_worker);
> > > > > > - list_lru_init_memcg(&pool->list_lru, NULL);
> > > > > > + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > > > > > + goto lru_fail;
> > > > > > + shrinker_register(pool->shrinker);
> > > > > >
> > > > > > zswap_pool_debug("created", pool);
> > > > > >
> > > > > > return pool;
> > > > > >
> > > > > > +lru_fail:
> > > > > > + list_lru_destroy(&pool->list_lru);
> > > > > > + shrinker_free(pool->shrinker);
> > > > > > error:
> > > > > > if (pool->acomp_ctx)
> > > > > > free_percpu(pool->acomp_ctx);
> > > > > > @@ -899,6 +1067,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > > > > >
> > > > > > zswap_pool_debug("destroying", pool);
> > > > > >
> > > > > > + shrinker_free(pool->shrinker);
> > > > > > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > > > > free_percpu(pool->acomp_ctx);
> > > > > > list_lru_destroy(&pool->list_lru);
> > > > > > @@ -1431,6 +1600,7 @@ bool zswap_store(struct folio *folio)
> > > > > > if (entry->length) {
> > > > > > INIT_LIST_HEAD(&entry->lru);
> > > > > > zswap_lru_add(&pool->list_lru, entry);
> > > > > > + atomic_inc(&pool->nr_stored);
> > > > > > }
> > > > > > spin_unlock(&tree->lock);
> > > > > >
> > > > > > --
> > > > > > 2.34.1
> > > > Thanks for the comments/suggestion, Yosry!
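
To make the count-side heuristic above concrete, here is a standalone sketch
(illustrative names, not the patch itself): subtract the protected estimate
from the LRU size, then scale the result by the compression ratio. With 1000
entries on the LRU, 200 protected, and nr_backing = 100 pages backing
nr_stored = 300 entries, it would report (1000 - 200) * 100 / 300 = 266
freeable objects.

	static unsigned long zswap_freeable_estimate(unsigned long lru_size,
						     unsigned long nr_protected,
						     unsigned long nr_backing,
						     unsigned long nr_stored)
	{
		unsigned long nr_freeable;

		if (!nr_stored)
			return 0;

		/* Do not count the region estimated to be warm. */
		nr_freeable = lru_size > nr_protected ?
				lru_size - nr_protected : 0;

		/* Better compression means fewer evictions are worth the IO. */
		return mult_frac(nr_freeable, nr_backing, nr_stored);
	}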

2023-10-17 17:44:23

by Ryan Roberts

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On 19/09/2023 18:14, Nhat Pham wrote:
> From: Domenico Cerasuolo <[email protected]>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/[email protected]/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>
> a) When a store attempt hits a memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which a memcg is
> selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo <[email protected]>
> Co-developed-by: Nhat Pham <[email protected]>
> Signed-off-by: Nhat Pham <[email protected]>
> ---
> include/linux/list_lru.h | 39 +++++++
> include/linux/memcontrol.h | 5 +
> include/linux/zswap.h | 9 ++
> mm/list_lru.c | 46 ++++++--
> mm/swap_state.c | 19 ++++
> mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
> 6 files changed, 287 insertions(+), 52 deletions(-)
>

[...]

> @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> struct obj_cgroup *objcg = NULL;
> + struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> struct zpool *zpool;
> + int lru_alloc_ret;
> unsigned int dlen = PAGE_SIZE;
> unsigned long handle, value;
> char *buf;
> @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> if (!zswap_enabled || !tree)
> return false;
>
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> else
> zswap_pool_reached_full = false;
> }
> -
> + pool = zswap_pool_current_get();
> + if (!pool) {
> + ret = -EINVAL;
> + goto reject;
> + }


Hi, I'm working to add support for large folios within zswap, and noticed this
piece of code added by this change. I don't see any corresponding put. Have I
missed some detail or is there a bug here?


> /* allocate entry */
> entry = zswap_entry_cache_alloc(GFP_KERNEL);
> if (!entry) {
> @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> entry->length = 0;
> entry->value = value;
> atomic_inc(&zswap_same_filled_pages);
> + zswap_pool_put(pool);

I see you put it in this error path, but after that, there is no further mention.

> goto insert_entry;
> }
> kunmap_atomic(src);
> @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> if (!zswap_non_same_filled_pages_enabled)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> + mem_cgroup_put(memcg);
> +
> + if (lru_alloc_ret)
> + goto freepage;
> + }
> +
> /* if entry is successfully added, it keeps the reference */
> entry->pool = zswap_pool_current_get();

The entry takes its reference to the pool here.

Thanks,
Ryan
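
The reference discipline in question, as a minimal sketch (helper name and
flow are hypothetical, not the patch's actual code, and not a claim about how
the series was eventually fixed): a pool reference taken with
zswap_pool_current_get() must either be handed off, e.g. stashed in
entry->pool, or dropped with zswap_pool_put() on every other exit path.

	static bool zswap_attach_pool(struct zswap_entry *entry)
	{
		struct zswap_pool *pool = zswap_pool_current_get();

		if (!pool)
			return false;

		if (!zswap_prepare_entry(entry)) {	/* hypothetical step that can fail */
			zswap_pool_put(pool);		/* drop the reference on failure */
			return false;
		}

		entry->pool = pool;	/* success: the entry now owns the reference */
		return true;
	}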


2023-10-17 18:26:09

by Ryan Roberts

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On 17/10/2023 18:56, Domenico Cerasuolo wrote:
>
>
> On Tue, Oct 17, 2023 at 7:44 PM Ryan Roberts <[email protected]> wrote:
>
> On 19/09/2023 18:14, Nhat Pham wrote:
> > From: Domenico Cerasuolo <[email protected]>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform workload-specific shrinking - a memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> >
> https://lore.kernel.org/all/[email protected]/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> >
> > a) When a store attempt hits a memcg limit, it now triggers a
> >    synchronous reclaim attempt that, if successful, allows the new
> >    hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> >    trigger an asynchronous reclaim attempt, in which a memcg is
> >    selected for reclaim in a round-robin-like fashion.
> >
> > Signed-off-by: Domenico Cerasuolo <[email protected]>
> > Co-developed-by: Nhat Pham <[email protected]>
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
> >  include/linux/list_lru.h   |  39 +++++++
> >  include/linux/memcontrol.h |   5 +
> >  include/linux/zswap.h      |   9 ++
> >  mm/list_lru.c              |  46 ++++++--
> >  mm/swap_state.c            |  19 ++++
> >  mm/zswap.c                 | 221 +++++++++++++++++++++++++++++--------
> >  6 files changed, 287 insertions(+), 52 deletions(-)
> >
>
> [...]
>
> > @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> >       struct scatterlist input, output;
> >       struct crypto_acomp_ctx *acomp_ctx;
> >       struct obj_cgroup *objcg = NULL;
> > +     struct mem_cgroup *memcg = NULL;
> >       struct zswap_pool *pool;
> >       struct zpool *zpool;
> > +     int lru_alloc_ret;
> >       unsigned int dlen = PAGE_SIZE;
> >       unsigned long handle, value;
> >       char *buf;
> > @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> >       if (!zswap_enabled || !tree)
> >               return false;
> > 
> > -     /*
> > -      * XXX: zswap reclaim does not work with cgroups yet. Without a
> > -      * cgroup-aware entry LRU, we will push out entries system-wide based on
> > -      * local cgroup limits.
> > -      */
> >       objcg = get_obj_cgroup_from_folio(folio);
> > -     if (objcg && !obj_cgroup_may_zswap(objcg))
> > -             goto reject;
> > +     if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > +             memcg = get_mem_cgroup_from_objcg(objcg);
> > +             if (shrink_memcg(memcg)) {
> > +                     mem_cgroup_put(memcg);
> > +                     goto reject;
> > +             }
> > +             mem_cgroup_put(memcg);
> > +     }
> > 
> >       /* reclaim space if needed */
> >       if (zswap_is_full()) {
> > @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> >               else
> >                       zswap_pool_reached_full = false;
> >       }
> > -
> > +     pool = zswap_pool_current_get();
> > +     if (!pool) {
> > +             ret = -EINVAL;
> > +             goto reject;
> > +     }
>
>
> Hi, I'm working to add support for large folios within zswap, and noticed this
> piece of code added by this change. I don't see any corresponding put. Have I
> missed some detail or is there a bug here?
>
>
> >       /* allocate entry */
> >       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> >       if (!entry) {
> > @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> >                       entry->length = 0;
> >                       entry->value = value;
> >                       atomic_inc(&zswap_same_filled_pages);
> > +                     zswap_pool_put(pool);
>
> I see you put it in this error path, but after that, there is no further
> mention.
>
> >                       goto insert_entry;
> >               }
> >               kunmap_atomic(src);
> > @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> >       if (!zswap_non_same_filled_pages_enabled)
> >               goto freepage;
> > 
> > +     if (objcg) {
> > +             memcg = get_mem_cgroup_from_objcg(objcg);
> > +             lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru,
> GFP_KERNEL);
> > +             mem_cgroup_put(memcg);
> > +
> > +             if (lru_alloc_ret)
> > +                     goto freepage;
> > +     }
> > +
> >       /* if entry is successfully added, it keeps the reference */
> >       entry->pool = zswap_pool_current_get();
>
> The entry takes its reference to the pool here.
>
> Thanks,
> Ryan
>
>
> Thanks Ryan, I think you're right. Coincidentally, we're about to send a new
> version of the series, and will make sure to address this too.

Ahh... I'm on top of mm-unstable - for some reason I thought I was on an rc and
this was already in. I guess it's less of an issue in that case.