2023-01-17 22:32:55

by Nhat Pham

Subject: [PATCH v6 0/3] cachestat: a new syscall for page cache state of files

Changelog:
v6:
* Add a missing fdput() (suggested by Brian Foster) (patch 2)
* Replace cstat_size with cstat_version (suggested by Brian Foster)
(patch 2)
* Add conditional resched to the xas walk. (suggested by Hillf Danton)
(patch 2)
v5:
* Separate first patch into its own series.
(suggested by Andrew Morton)
* Expose filemap_cachestat() to non-syscall usage
(patch 2) (suggested by Brian Foster).
* Fix some build errors from last version.
(patch 2)
* Explain eviction and recent eviction in the draft man page and
documentation (suggested by Andrew Morton).
(patch 2)
v4:
* Refactor cachestat and move it to mm/filemap.c (patch 3)
(suggested by Brian Foster)
* Remove redundant checks (!folio, access_ok)
(patch 3) (suggested by Matthew Wilcox and Al Viro)
* Fix a bug in handling multipages folio.
(patch 3) (suggested by Matthew Wilcox)
* Add a selftest for shmem files, which can be used to test huge
pages (patch 4) (suggested by Johannes Weiner)
v3:
* Fix some minor formatting issues and build errors.
* Add the new syscall entry to missing architecture syscall tables.
(patch 3).
* Add flags argument for the syscall. (patch 3).
* Clean up the recency refactoring (patch 2) (suggested by Yu Zhao)
* Add the new Kconfig (CONFIG_CACHESTAT) to disable the syscall.
(patch 3) (suggested by Josh Triplett)
v2:
* len == 0 means query to EOF. len < 0 is invalid.
(patch 3) (suggested by Brian Foster)
* Make cachestat extensible by adding the `cstat_size` argument in the
syscall (patch 3)

There is currently no good way to query the page cache state of large
file sets and directory trees. There is mincore(), but it scales poorly:
the kernel writes out a lot of bitmap data that userspace has to
aggregate, when the user really does not care about per-page information
in that case. The user also needs to mmap and unmap each file as it goes
along, which can be quite slow as well.

This series of patches introduces a new system call, cachestat, that
summarizes the page cache statistics (number of cached pages, dirty
pages, pages marked for writeback, evicted pages etc.) of a file, in a
specified range of bytes. It also includes a selftest suite that tests some
typical usage patterns.
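
For reference, here is a rough userspace sketch of the intended usage,
matching the v6 interface exercised by the selftest in patch 3. The syscall
number (451), the cstat_version value and the struct cachestat field names
are taken from this series; the struct declaration below is only an
illustrative stand-in for the uapi header added in patch 2:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Illustrative only; the real definition lives in the patched uapi headers. */
struct cachestat {
        unsigned long nr_cache;
        unsigned long nr_dirty;
        unsigned long nr_writeback;
        unsigned long nr_evicted;
        unsigned long nr_recently_evicted;
};

int main(int argc, char **argv)
{
        struct cachestat cs;
        int fd;

        if (argc < 2)
                return 1;

        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
                return 1;

        /* args: fd, offset, len (0 == query to EOF), version, buffer, flags */
        if (syscall(451, fd, 0, 0, 1, &cs, 0)) {
                close(fd);
                return 1;
        }

        printf("cached %lu dirty %lu writeback %lu evicted %lu recently evicted %lu\n",
               cs.nr_cache, cs.nr_dirty, cs.nr_writeback,
               cs.nr_evicted, cs.nr_recently_evicted);
        close(fd);
        return 0;
}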

This interface is inspired by past discussion of, and concerns with, fincore,
which has a design similar to mincore (and, as a result, similar issues).
Relevant links:

https://lkml.indiana.edu/hypermail/linux/kernel/1302.1/04207.html
https://lkml.indiana.edu/hypermail/linux/kernel/1302.1/04209.html

For comparison with mincore, I ran both syscalls on a 2TB sparse file:

Using mincore:
real 0m37.510s
user 0m2.934s
sys 0m34.558s

Using cachestat:
real 0m0.009s
user 0m0.000s
sys 0m0.009s

This series should be applied on top of:

workingset: fix confusion around eviction vs refault container
https://lkml.org/lkml/2023/1/4/1066

This series consists of 3 patches:

Nhat Pham (3):
workingset: refactor LRU refault to expose refault recency check
cachestat: implement cachestat syscall
selftests: Add selftests for cachestat

MAINTAINERS | 7 +
arch/alpha/kernel/syscalls/syscall.tbl | 1 +
arch/arm/tools/syscall.tbl | 1 +
arch/ia64/kernel/syscalls/syscall.tbl | 1 +
arch/m68k/kernel/syscalls/syscall.tbl | 1 +
arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
arch/parisc/kernel/syscalls/syscall.tbl | 1 +
arch/powerpc/kernel/syscalls/syscall.tbl | 1 +
arch/s390/kernel/syscalls/syscall.tbl | 1 +
arch/sh/kernel/syscalls/syscall.tbl | 1 +
arch/sparc/kernel/syscalls/syscall.tbl | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/xtensa/kernel/syscalls/syscall.tbl | 1 +
include/linux/fs.h | 3 +
include/linux/swap.h | 1 +
include/linux/syscalls.h | 4 +
include/uapi/asm-generic/unistd.h | 5 +-
include/uapi/linux/mman.h | 9 +
init/Kconfig | 10 +
kernel/sys_ni.c | 1 +
mm/filemap.c | 154 +++++++++++
mm/workingset.c | 129 ++++++---
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/cachestat/.gitignore | 2 +
tools/testing/selftests/cachestat/Makefile | 8 +
.../selftests/cachestat/test_cachestat.c | 260 ++++++++++++++++++
27 files changed, 568 insertions(+), 39 deletions(-)
create mode 100644 tools/testing/selftests/cachestat/.gitignore
create mode 100644 tools/testing/selftests/cachestat/Makefile
create mode 100644 tools/testing/selftests/cachestat/test_cachestat.c


base-commit: 1440f576022887004f719883acb094e7e0dd4944
prerequisite-patch-id: 171a43d333e1b267ce14188a5beaea2f313787fb
--
2.30.2


2023-01-17 22:34:05

by Nhat Pham

Subject: [PATCH v6 3/3] selftests: Add selftests for cachestat

Test cachestat on a newly created file, /dev/ files, and /proc/ files.
Also test on a shmem file, which can additionally exercise huge pages
since tmpfs supports them.
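
The suite should be runnable through the usual kselftest flow (for instance,
make -C tools/testing/selftests TARGETS=cachestat run_tests) on a kernel that
includes the new syscall.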

Signed-off-by: Nhat Pham <[email protected]>
---
MAINTAINERS | 7 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/cachestat/.gitignore | 2 +
tools/testing/selftests/cachestat/Makefile | 8 +
.../selftests/cachestat/test_cachestat.c | 260 ++++++++++++++++++
5 files changed, 278 insertions(+)
create mode 100644 tools/testing/selftests/cachestat/.gitignore
create mode 100644 tools/testing/selftests/cachestat/Makefile
create mode 100644 tools/testing/selftests/cachestat/test_cachestat.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a198da986146..792a866353ec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4552,6 +4552,13 @@ S: Supported
F: Documentation/filesystems/caching/cachefiles.rst
F: fs/cachefiles/

+CACHESTAT: PAGE CACHE STATS FOR A FILE
+M: Nhat Pham <[email protected]>
+M: Johannes Weiner <[email protected]>
+L: [email protected]
+S: Maintained
+F: tools/testing/selftests/cachestat/test_cachestat.c
+
CADENCE MIPI-CSI2 BRIDGES
M: Maxime Ripard <[email protected]>
L: [email protected]
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 0464b2c6c1e4..3cad0b38c5c2 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -4,6 +4,7 @@ TARGETS += amd-pstate
TARGETS += arm64
TARGETS += bpf
TARGETS += breakpoints
+TARGETS += cachestat
TARGETS += capabilities
TARGETS += cgroup
TARGETS += clone3
diff --git a/tools/testing/selftests/cachestat/.gitignore b/tools/testing/selftests/cachestat/.gitignore
new file mode 100644
index 000000000000..d6c30b43a4bb
--- /dev/null
+++ b/tools/testing/selftests/cachestat/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+test_cachestat
diff --git a/tools/testing/selftests/cachestat/Makefile b/tools/testing/selftests/cachestat/Makefile
new file mode 100644
index 000000000000..fca73aaa7d14
--- /dev/null
+++ b/tools/testing/selftests/cachestat/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+TEST_GEN_PROGS := test_cachestat
+
+CFLAGS += $(KHDR_INCLUDES)
+CFLAGS += -Wall
+CFLAGS += -lrt
+
+include ../lib.mk
diff --git a/tools/testing/selftests/cachestat/test_cachestat.c b/tools/testing/selftests/cachestat/test_cachestat.c
new file mode 100644
index 000000000000..dc2894028eee
--- /dev/null
+++ b/tools/testing/selftests/cachestat/test_cachestat.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <linux/kernel.h>
+#include <linux/mman.h>
+#include <sys/mman.h>
+#include <sys/shm.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+#include <string.h>
+#include <fcntl.h>
+#include <errno.h>
+
+#include "../kselftest.h"
+
+static const char * const dev_files[] = {
+ "/dev/zero", "/dev/null", "/dev/urandom",
+ "/proc/version", "/proc"
+};
+static const int cachestat_nr = 451;
+static const int cstat_version = 1; /* first version */
+
+void print_cachestat(struct cachestat *cs)
+{
+ ksft_print_msg(
+ "Using cachestat: Cached: %lu, Dirty: %lu, Writeback: %lu, Evicted: %lu, Recently Evicted: %lu\n",
+ cs->nr_cache, cs->nr_dirty, cs->nr_writeback,
+ cs->nr_evicted, cs->nr_recently_evicted);
+}
+
+bool write_exactly(int fd, size_t filesize)
+{
+ char data[filesize];
+ bool ret = true;
+ int random_fd = open("/dev/urandom", O_RDONLY);
+
+ if (random_fd < 0) {
+ ksft_print_msg("Unable to access urandom.\n");
+ ret = false;
+ goto out;
+ } else {
+ int remained = filesize;
+ char *cursor = data;
+
+ while (remained) {
+ ssize_t read_len = read(random_fd, cursor, remained);
+
+ if (read_len <= 0) {
+ ksft_print_msg("Unable to read from urandom.\n");
+ ret = false;
+ goto close_random_fd;
+ }
+
+ remained -= read_len;
+ cursor += read_len;
+ }
+
+ /* write random data to fd */
+ remained = filesize;
+ cursor = data;
+ while (remained) {
+ ssize_t write_len = write(fd, cursor, remained);
+
+ if (write_len <= 0) {
+ ksft_print_msg("Unable write random data to file.\n");
+ ret = false;
+ goto close_random_fd;
+ }
+
+ remained -= write_len;
+ cursor += write_len;
+ }
+ }
+
+close_random_fd:
+ close(random_fd);
+out:
+ return ret;
+}
+
+/*
+ * Open/create the file at filename, (optionally) write random data to it
+ * (exactly num_pages), then test the cachestat syscall on this file.
+ *
+ * If test_fsync == true, fsync the file, then check the number of dirty
+ * pages.
+ */
+bool test_cachestat(const char *filename, bool write_random, bool create,
+ bool test_fsync, unsigned long num_pages, int open_flags,
+ mode_t open_mode)
+{
+ size_t PS = sysconf(_SC_PAGESIZE);
+ int filesize = num_pages * PS;
+ bool ret = true;
+ long syscall_ret;
+ struct cachestat cs;
+
+ int fd = open(filename, open_flags, open_mode);
+
+ if (fd == -1) {
+ ksft_print_msg("Unable to create/open file.\n");
+ goto out;
+ } else {
+ ksft_print_msg("Create/open %s\n", filename);
+ }
+
+ if (write_random) {
+ if (!write_exactly(fd, filesize)) {
+ ksft_print_msg("Unable to access urandom.\n");
+ ret = false;
+ goto out1;
+ }
+ }
+
+ syscall_ret = syscall(cachestat_nr, fd, 0, filesize,
+ cstat_version, &cs, 0);
+
+ ksft_print_msg("Cachestat call returned %ld\n", syscall_ret);
+
+ if (syscall_ret) {
+ ksft_print_msg("Cachestat returned non-zero.\n");
+ ret = false;
+ goto out1;
+
+ } else {
+ print_cachestat(&cs);
+
+ if (write_random) {
+ if (cs.nr_cache + cs.nr_evicted != num_pages) {
+ ksft_print_msg(
+ "Total number of cached and evicted pages is off.\n");
+ ret = false;
+ }
+ }
+ }
+
+ if (test_fsync) {
+ if (fsync(fd)) {
+ ksft_print_msg("fsync fails.\n");
+ ret = false;
+ } else {
+ syscall_ret = syscall(cachestat_nr, fd, 0, filesize,
+ cstat_version, &cs, 0);
+
+ ksft_print_msg("Cachestat call (after fsync) returned %ld\n",
+ syscall_ret);
+
+ if (!syscall_ret) {
+ print_cachestat(&cs);
+
+ if (cs.nr_dirty) {
+ ret = false;
+ ksft_print_msg(
+ "Number of dirty should be zero after fsync.\n");
+ }
+ } else {
+ ksft_print_msg("Cachestat (after fsync) returned non-zero.\n");
+ ret = false;
+ goto out1;
+ }
+ }
+ }
+
+out1:
+ close(fd);
+
+ if (create)
+ remove(filename);
+out:
+ return ret;
+}
+
+bool test_cachestat_shmem(void)
+{
+ size_t PS = sysconf(_SC_PAGESIZE);
+ size_t filesize = PS * 512 * 2; /* 2 2MB huge pages */
+ int syscall_ret;
+ off_t off = PS;
+ size_t compute_len = PS * 512;
+ char *filename = "tmpshmcstat";
+ struct cachestat cs;
+ bool ret = true;
+ unsigned long num_pages = compute_len / PS;
+ int fd = shm_open(filename, O_CREAT | O_RDWR, 0600);
+
+ if (fd < 0) {
+ ksft_print_msg("Unable to create shmem file.\n");
+ ret = false;
+ goto out;
+ }
+
+ if (ftruncate(fd, filesize)) {
+ ksft_print_msg("Unable to trucate shmem file.\n");
+ ret = false;
+ goto close_fd;
+ }
+
+ if (!write_exactly(fd, filesize)) {
+ ksft_print_msg("Unable to write to shmem file.\n");
+ ret = false;
+ goto close_fd;
+ }
+
+ syscall_ret = syscall(cachestat_nr, fd, off, compute_len,
+ cstat_version, &cs, 0);
+
+ if (syscall_ret) {
+ ksft_print_msg("Cachestat returned non-zero.\n");
+ ret = false;
+ goto close_fd;
+ } else {
+ print_cachestat(&cs);
+ if (cs.nr_cache + cs.nr_evicted != num_pages) {
+ ksft_print_msg(
+ "Total number of cached and evicted pages is off.\n");
+ ret = false;
+ }
+ }
+
+close_fd:
+ shm_unlink(filename);
+out:
+ return ret;
+}
+
+int main(void)
+{
+ int ret = 0;
+
+ for (int i = 0; i < 5; i++) {
+ const char *dev_filename = dev_files[i];
+
+ if (test_cachestat(dev_filename, false, false, false,
+ 4, O_RDONLY, 0400))
+ ksft_test_result_pass("cachestat works with %s\n", dev_filename);
+ else {
+ ksft_test_result_fail("cachestat fails with %s\n", dev_filename);
+ ret = 1;
+ }
+ }
+
+ if (test_cachestat("tmpfilecachestat", true, true,
+ true, 4, O_CREAT | O_RDWR, 0400 | 0600))
+ ksft_test_result_pass("cachestat works with a normal file\n");
+ else {
+ ksft_test_result_fail("cachestat fails with normal file\n");
+ ret = 1;
+ }
+
+ if (test_cachestat_shmem())
+ ksft_test_result_pass("cachestat works with a shmem file\n");
+ else {
+ ksft_test_result_fail("cachestat fails with a shmem file\n");
+ ret = 1;
+ }
+
+ return ret;
+}
--
2.30.2

2023-01-17 23:06:21

by Nhat Pham

Subject: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

In preparation for computing recently evicted pages in cachestat,
refactor workingset_refault and lru_gen_refault to expose a helper
function that would test if an evicted page is recently evicted.
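
For context, a rough sketch of how the cachestat side (patch 2 of this
series, not reproduced here) is expected to consume the new helper while
walking the page cache; variable names are illustrative only:

	if (xa_is_value(folio)) {
		/* shadow entry left behind by an evicted folio */
		void *shadow = (void *)folio;
		bool workingset;

		cs->nr_evicted += nr_pages;
		if (workingset_test_recent(shadow, true, &workingset))
			cs->nr_recently_evicted += nr_pages;
		continue;
	}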

Signed-off-by: Nhat Pham <[email protected]>
---
include/linux/swap.h | 1 +
mm/workingset.c | 129 ++++++++++++++++++++++++++++++-------------
2 files changed, 92 insertions(+), 38 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..dae6f6f955eb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
}

/* linux/mm/workingset.c */
+bool workingset_test_recent(void *shadow, bool file, bool *workingset);
void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
void workingset_refault(struct folio *folio, void *shadow);
diff --git a/mm/workingset.c b/mm/workingset.c
index 79585d55c45d..006482c4e0bd 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
}

+/*
+ * Test if the folio is recently evicted.
+ *
+ * As a side effect, also populates the references with
+ * values unpacked from the shadow of the evicted folio.
+ */
+static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
+{
+ struct mem_cgroup *eviction_memcg;
+ struct lruvec *lruvec;
+ struct lru_gen_struct *lrugen;
+ unsigned long min_seq;
+
+ int memcgid;
+ struct pglist_data *pgdat;
+ unsigned long token;
+
+ unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
+ eviction_memcg = mem_cgroup_from_id(memcgid);
+
+ lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
+ lrugen = &lruvec->lrugen;
+
+ min_seq = READ_ONCE(lrugen->min_seq[file]);
+ return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
+}
+
static void lru_gen_refault(struct folio *folio, void *shadow)
{
int hist, tier, refs;
@@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
return NULL;
}

+static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
+{
+ return true;
+}
+
static void lru_gen_refault(struct folio *folio, void *shadow)
{
}
@@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
folio_test_workingset(folio));
}

-/**
- * workingset_refault - Evaluate the refault of a previously evicted folio.
- * @folio: The freshly allocated replacement folio.
- * @shadow: Shadow entry of the evicted folio.
+/*
+ * Test if the folio is recently evicted by checking if
+ * refault distance of shadow exceeds workingset size.
*
- * Calculates and evaluates the refault distance of the previously
- * evicted folio in the context of the node and the memcg whose memory
- * pressure caused the eviction.
+ * As a side effect, populate workingset with the value
+ * unpacked from shadow.
*/
-void workingset_refault(struct folio *folio, void *shadow)
+bool workingset_test_recent(void *shadow, bool file, bool *workingset)
{
- bool file = folio_is_file_lru(folio);
struct mem_cgroup *eviction_memcg;
struct lruvec *eviction_lruvec;
unsigned long refault_distance;
unsigned long workingset_size;
- struct pglist_data *pgdat;
- struct mem_cgroup *memcg;
- unsigned long eviction;
- struct lruvec *lruvec;
unsigned long refault;
- bool workingset;
+
int memcgid;
- long nr;
+ struct pglist_data *pgdat;
+ unsigned long eviction;

- if (lru_gen_enabled()) {
- lru_gen_refault(folio, shadow);
- return;
- }
+ if (lru_gen_enabled())
+ return lru_gen_test_recent(shadow, file, workingset);

- unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
+ unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
eviction <<= bucket_order;

- rcu_read_lock();
/*
* Look up the memcg associated with the stored ID. It might
* have been deleted since the folio's eviction.
@@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
*/
eviction_memcg = mem_cgroup_from_id(memcgid);
if (!mem_cgroup_disabled() && !eviction_memcg)
- goto out;
+ return false;
+
eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
refault = atomic_long_read(&eviction_lruvec->nonresident_age);

@@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
*/
refault_distance = (refault - eviction) & EVICTION_MASK;

- /*
- * The activation decision for this folio is made at the level
- * where the eviction occurred, as that is where the LRU order
- * during folio reclaim is being determined.
- *
- * However, the cgroup that will own the folio is the one that
- * is actually experiencing the refault event.
- */
- nr = folio_nr_pages(folio);
- memcg = folio_memcg(folio);
- pgdat = folio_pgdat(folio);
- lruvec = mem_cgroup_lruvec(memcg, pgdat);
-
- mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
-
mem_cgroup_flush_stats_delayed();
/*
* Compare the distance to the existing workingset size. We
@@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
NR_INACTIVE_ANON);
}
}
- if (refault_distance > workingset_size)
+
+ return refault_distance <= workingset_size;
+}
+
+/**
+ * workingset_refault - Evaluate the refault of a previously evicted folio.
+ * @folio: The freshly allocated replacement folio.
+ * @shadow: Shadow entry of the evicted folio.
+ *
+ * Calculates and evaluates the refault distance of the previously
+ * evicted folio in the context of the node and the memcg whose memory
+ * pressure caused the eviction.
+ */
+void workingset_refault(struct folio *folio, void *shadow)
+{
+ bool file = folio_is_file_lru(folio);
+ struct pglist_data *pgdat;
+ struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
+ bool workingset;
+ long nr;
+
+ if (lru_gen_enabled()) {
+ lru_gen_refault(folio, shadow);
+ return;
+ }
+
+ rcu_read_lock();
+
+ nr = folio_nr_pages(folio);
+ memcg = folio_memcg(folio);
+ pgdat = folio_pgdat(folio);
+ lruvec = mem_cgroup_lruvec(memcg, pgdat);
+
+ if (!workingset_test_recent(shadow, file, &workingset)) {
+ /*
+ * The activation decision for this folio is made at the level
+ * where the eviction occurred, as that is where the LRU order
+ * during folio reclaim is being determined.
+ *
+ * However, the cgroup that will own the folio is the one that
+ * is actually experiencing the refault event.
+ */
goto out;
+ }

folio_set_active(folio);
workingset_age_nonresident(lruvec, nr);
@@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
}
out:
+ mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
rcu_read_unlock();
}

--
2.30.2

2023-01-20 14:39:00

by Brian Foster

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> In preparation for computing recently evicted pages in cachestat,
> refactor workingset_refault and lru_gen_refault to expose a helper
> function that would test if an evicted page is recently evicted.
>
> Signed-off-by: Nhat Pham <[email protected]>
> ---

Hi Nhat,

I'm not terribly familiar with the workingset management code, but a few
thoughts now that I've stared at it a bit...

> include/linux/swap.h | 1 +
> mm/workingset.c | 129 ++++++++++++++++++++++++++++++-------------
> 2 files changed, 92 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a18cf4b7c724..dae6f6f955eb 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
> }
>
> /* linux/mm/workingset.c */
> +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
> void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
> void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
> void workingset_refault(struct folio *folio, void *shadow);
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 79585d55c45d..006482c4e0bd 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
> return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
> }
>
> +/*
> + * Test if the folio is recently evicted.
> + *
> + * As a side effect, also populates the references with
> + * values unpacked from the shadow of the evicted folio.
> + */
> +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> +{
> + struct mem_cgroup *eviction_memcg;
> + struct lruvec *lruvec;
> + struct lru_gen_struct *lrugen;
> + unsigned long min_seq;
> +

Extra whitespace looks a bit funny here.

> + int memcgid;
> + struct pglist_data *pgdat;
> + unsigned long token;
> +
> + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> + eviction_memcg = mem_cgroup_from_id(memcgid);
> +
> + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> + lrugen = &lruvec->lrugen;
> +
> + min_seq = READ_ONCE(lrugen->min_seq[file]);
> + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));

I think this might be more readable without the double negative.

Also it looks like this logic is pulled from lru_gen_refault(). Any
reason the caller isn't refactored to use this helper, similar to how
workingset_refault() is modified? It seems like a potential landmine to
duplicate the logic here for cachestat purposes and somewhere else for
actual workingset management.

> +}
> +
> static void lru_gen_refault(struct folio *folio, void *shadow)
> {
> int hist, tier, refs;
> @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
> return NULL;
> }
>
> +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> +{
> + return true;
> +}

I guess this is a no-op for !MGLRU but given the context (i.e. special
treatment for "recent" refaults), perhaps false is a more sane default?

> +
> static void lru_gen_refault(struct folio *folio, void *shadow)
> {
> }
> @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
> folio_test_workingset(folio));
> }
>
> -/**
> - * workingset_refault - Evaluate the refault of a previously evicted folio.
> - * @folio: The freshly allocated replacement folio.
> - * @shadow: Shadow entry of the evicted folio.
> +/*
> + * Test if the folio is recently evicted by checking if
> + * refault distance of shadow exceeds workingset size.
> *
> - * Calculates and evaluates the refault distance of the previously
> - * evicted folio in the context of the node and the memcg whose memory
> - * pressure caused the eviction.
> + * As a side effect, populate workingset with the value
> + * unpacked from shadow.
> */
> -void workingset_refault(struct folio *folio, void *shadow)
> +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
> {
> - bool file = folio_is_file_lru(folio);
> struct mem_cgroup *eviction_memcg;
> struct lruvec *eviction_lruvec;
> unsigned long refault_distance;
> unsigned long workingset_size;
> - struct pglist_data *pgdat;
> - struct mem_cgroup *memcg;
> - unsigned long eviction;
> - struct lruvec *lruvec;
> unsigned long refault;
> - bool workingset;
> +
> int memcgid;
> - long nr;
> + struct pglist_data *pgdat;
> + unsigned long eviction;
>
> - if (lru_gen_enabled()) {
> - lru_gen_refault(folio, shadow);
> - return;
> - }
> + if (lru_gen_enabled())
> + return lru_gen_test_recent(shadow, file, workingset);

Hmm.. so this function is only called by workingset_refault() when
lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
which as noted above duplicates some of the recency logic.

I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
can just call workingset_test_recent(). That seems reasonable, but makes
me wonder...

>
> - unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> + unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
> eviction <<= bucket_order;
>
> - rcu_read_lock();
> /*
> * Look up the memcg associated with the stored ID. It might
> * have been deleted since the folio's eviction.
> @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
> */
> eviction_memcg = mem_cgroup_from_id(memcgid);
> if (!mem_cgroup_disabled() && !eviction_memcg)
> - goto out;
> + return false;
> +
> eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> refault = atomic_long_read(&eviction_lruvec->nonresident_age);
>
> @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
> */
> refault_distance = (refault - eviction) & EVICTION_MASK;
>
> - /*
> - * The activation decision for this folio is made at the level
> - * where the eviction occurred, as that is where the LRU order
> - * during folio reclaim is being determined.
> - *
> - * However, the cgroup that will own the folio is the one that
> - * is actually experiencing the refault event.
> - */
> - nr = folio_nr_pages(folio);
> - memcg = folio_memcg(folio);
> - pgdat = folio_pgdat(folio);
> - lruvec = mem_cgroup_lruvec(memcg, pgdat);
> -
> - mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> -
> mem_cgroup_flush_stats_delayed();
> /*
> * Compare the distance to the existing workingset size. We
> @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
> NR_INACTIVE_ANON);
> }
> }
> - if (refault_distance > workingset_size)
> +
> + return refault_distance <= workingset_size;
> +}
> +
> +/**
> + * workingset_refault - Evaluate the refault of a previously evicted folio.
> + * @folio: The freshly allocated replacement folio.
> + * @shadow: Shadow entry of the evicted folio.
> + *
> + * Calculates and evaluates the refault distance of the previously
> + * evicted folio in the context of the node and the memcg whose memory
> + * pressure caused the eviction.
> + */
> +void workingset_refault(struct folio *folio, void *shadow)
> +{
> + bool file = folio_is_file_lru(folio);
> + struct pglist_data *pgdat;
> + struct mem_cgroup *memcg;
> + struct lruvec *lruvec;
> + bool workingset;
> + long nr;
> +
> + if (lru_gen_enabled()) {
> + lru_gen_refault(folio, shadow);
> + return;
> + }

... if perhaps this should call workingset_test_recent() a bit earlier,
since it also covers the lru_gen_*() case..? That may or may not be
cleaner. It _seems like_ it might produce a bit more consistent logic,
but just a thought and I could easily be missing details.

> +
> + rcu_read_lock();
> +
> + nr = folio_nr_pages(folio);
> + memcg = folio_memcg(folio);
> + pgdat = folio_pgdat(folio);
> + lruvec = mem_cgroup_lruvec(memcg, pgdat);
> +
> + if (!workingset_test_recent(shadow, file, &workingset)) {
> + /*
> + * The activation decision for this folio is made at the level
> + * where the eviction occurred, as that is where the LRU order
> + * during folio reclaim is being determined.
> + *
> + * However, the cgroup that will own the folio is the one that
> + * is actually experiencing the refault event.
> + */

IIUC, this comment is explaining the difference between using the
eviction lru (based on the shadow entry) to calculate recency vs. the
lru for the current folio to process the refault. If so, perhaps it
should go right above the workingset_test_recent() call? (Then the if
braces could go away as well..).

> goto out;
> + }
>
> folio_set_active(folio);
> workingset_age_nonresident(lruvec, nr);
> @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
> mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
> }
> out:
> + mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

Why not just leave this up earlier in the function (i.e. before the
recency check) as it was originally?

Brian

> rcu_read_unlock();
> }
>
> --
> 2.30.2
>

2023-01-20 16:57:35

by Johannes Weiner

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > + int memcgid;
> > + struct pglist_data *pgdat;
> > + unsigned long token;
> > +
> > + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > + eviction_memcg = mem_cgroup_from_id(memcgid);
> > +
> > + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > + lrugen = &lruvec->lrugen;
> > +
> > + min_seq = READ_ONCE(lrugen->min_seq[file]);
> > + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
>
> I think this might be more readable without the double negative.
>
> Also it looks like this logic is pulled from lru_gen_refault(). Any
> reason the caller isn't refactored to use this helper, similar to how
> workingset_refault() is modified? It seems like a potential landmine to
> duplicate the logic here for cachestat purposes and somewhere else for
> actual workingset management.

The initial version was refactored. Yu explicitly requested it be
duplicated [1] to cut down on some boiler plate.

I have to agree with Brian on this one, though. The factored version
is better for maintenance than duplicating the core logic here. Even
if it ends up a bit more boiler plate - it's harder to screw that up,
and easier to catch at compile time, than the duplicates diverging.

[1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/

2023-01-20 17:50:20

by Nhat Pham

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Fri, Jan 20, 2023 at 6:33 AM Brian Foster <[email protected]> wrote:
>
> On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > In preparation for computing recently evicted pages in cachestat,
> > refactor workingset_refault and lru_gen_refault to expose a helper
> > function that would test if an evicted page is recently evicted.
> >
> > Signed-off-by: Nhat Pham <[email protected]>
> > ---
>
> Hi Nhat,
>
> I'm not terribly familiar with the workingset management code, but a few
> thoughts now that I've stared at it a bit...
>
> > include/linux/swap.h | 1 +
> > mm/workingset.c | 129 ++++++++++++++++++++++++++++++-------------
> > 2 files changed, 92 insertions(+), 38 deletions(-)
> >
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index a18cf4b7c724..dae6f6f955eb 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
> > }
> >
> > /* linux/mm/workingset.c */
> > +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
> > void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
> > void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
> > void workingset_refault(struct folio *folio, void *shadow);
> > diff --git a/mm/workingset.c b/mm/workingset.c
> > index 79585d55c45d..006482c4e0bd 100644
> > --- a/mm/workingset.c
> > +++ b/mm/workingset.c
> > @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
> > return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
> > }
> >
> > +/*
> > + * Test if the folio is recently evicted.
> > + *
> > + * As a side effect, also populates the references with
> > + * values unpacked from the shadow of the evicted folio.
> > + */
> > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > +{
> > + struct mem_cgroup *eviction_memcg;
> > + struct lruvec *lruvec;
> > + struct lru_gen_struct *lrugen;
> > + unsigned long min_seq;
> > +
>
> Extra whitespace looks a bit funny here.
>
> > + int memcgid;
> > + struct pglist_data *pgdat;
> > + unsigned long token;
> > +
> > + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > + eviction_memcg = mem_cgroup_from_id(memcgid);
> > +
> > + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > + lrugen = &lruvec->lrugen;
> > +
> > + min_seq = READ_ONCE(lrugen->min_seq[file]);
> > + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
>
> I think this might be more readable without the double negative.

Hmm indeed. I was just making sure that I did not mess up Yu's
original logic here (by just wrapping it in parentheses and
negating the whole thing), but if I understand it correctly it's just
an equality check. I'll fix it in the next version to make it cleaner.
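
Something like this, i.e. the same expression with the double negation
dropped:

	return (token >> LRU_REFS_WIDTH) == (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH));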

>
> Also it looks like this logic is pulled from lru_gen_refault(). Any
> reason the caller isn't refactored to use this helper, similar to how
> workingset_refault() is modified? It seems like a potential landmine to
> duplicate the logic here for cachestat purposes and somewhere else for
> actual workingset management.

In V2, it is actually refactored analogously as well - but we had a discussion
about it here:

https://lkml.org/lkml/2022/12/5/1321

>
> > +}
> > +
> > static void lru_gen_refault(struct folio *folio, void *shadow)
> > {
> > int hist, tier, refs;
> > @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
> > return NULL;
> > }
> >
> > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > +{
> > + return true;
> > +}
>
> I guess this is a no-op for !MGLRU but given the context (i.e. special
> treatment for "recent" refaults), perhaps false is a more sane default?

Hmm, fair point. Let me fix that in the next version.

>
> > +
> > static void lru_gen_refault(struct folio *folio, void *shadow)
> > {
> > }
> > @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
> > folio_test_workingset(folio));
> > }
> >
> > -/**
> > - * workingset_refault - Evaluate the refault of a previously evicted folio.
> > - * @folio: The freshly allocated replacement folio.
> > - * @shadow: Shadow entry of the evicted folio.
> > +/*
> > + * Test if the folio is recently evicted by checking if
> > + * refault distance of shadow exceeds workingset size.
> > *
> > - * Calculates and evaluates the refault distance of the previously
> > - * evicted folio in the context of the node and the memcg whose memory
> > - * pressure caused the eviction.
> > + * As a side effect, populate workingset with the value
> > + * unpacked from shadow.
> > */
> > -void workingset_refault(struct folio *folio, void *shadow)
> > +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
> > {
> > - bool file = folio_is_file_lru(folio);
> > struct mem_cgroup *eviction_memcg;
> > struct lruvec *eviction_lruvec;
> > unsigned long refault_distance;
> > unsigned long workingset_size;
> > - struct pglist_data *pgdat;
> > - struct mem_cgroup *memcg;
> > - unsigned long eviction;
> > - struct lruvec *lruvec;
> > unsigned long refault;
> > - bool workingset;
> > +
> > int memcgid;
> > - long nr;
> > + struct pglist_data *pgdat;
> > + unsigned long eviction;
> >
> > - if (lru_gen_enabled()) {
> > - lru_gen_refault(folio, shadow);
> > - return;
> > - }
> > + if (lru_gen_enabled())
> > + return lru_gen_test_recent(shadow, file, workingset);
>
> Hmm.. so this function is only called by workingset_refault() when
> lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
> which as noted above duplicates some of the recency logic.
>
> I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
> can just call workingset_test_recent(). That seems reasonable, but makes
> me wonder...

You're right. It's a bit clunky...

>
> >
> > - unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> > + unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
> > eviction <<= bucket_order;
> >
> > - rcu_read_lock();
> > /*
> > * Look up the memcg associated with the stored ID. It might
> > * have been deleted since the folio's eviction.
> > @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
> > */
> > eviction_memcg = mem_cgroup_from_id(memcgid);
> > if (!mem_cgroup_disabled() && !eviction_memcg)
> > - goto out;
> > + return false;
> > +
> > eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > refault = atomic_long_read(&eviction_lruvec->nonresident_age);
> >
> > @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
> > */
> > refault_distance = (refault - eviction) & EVICTION_MASK;
> >
> > - /*
> > - * The activation decision for this folio is made at the level
> > - * where the eviction occurred, as that is where the LRU order
> > - * during folio reclaim is being determined.
> > - *
> > - * However, the cgroup that will own the folio is the one that
> > - * is actually experiencing the refault event.
> > - */
> > - nr = folio_nr_pages(folio);
> > - memcg = folio_memcg(folio);
> > - pgdat = folio_pgdat(folio);
> > - lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > -
> > - mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> > -
> > mem_cgroup_flush_stats_delayed();
> > /*
> > * Compare the distance to the existing workingset size. We
> > @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
> > NR_INACTIVE_ANON);
> > }
> > }
> > - if (refault_distance > workingset_size)
> > +
> > + return refault_distance <= workingset_size;
> > +}
> > +
> > +/**
> > + * workingset_refault - Evaluate the refault of a previously evicted folio.
> > + * @folio: The freshly allocated replacement folio.
> > + * @shadow: Shadow entry of the evicted folio.
> > + *
> > + * Calculates and evaluates the refault distance of the previously
> > + * evicted folio in the context of the node and the memcg whose memory
> > + * pressure caused the eviction.
> > + */
> > +void workingset_refault(struct folio *folio, void *shadow)
> > +{
> > + bool file = folio_is_file_lru(folio);
> > + struct pglist_data *pgdat;
> > + struct mem_cgroup *memcg;
> > + struct lruvec *lruvec;
> > + bool workingset;
> > + long nr;
> > +
> > + if (lru_gen_enabled()) {
> > + lru_gen_refault(folio, shadow);
> > + return;
> > + }
>
> ... if perhaps this should call workingset_test_recent() a bit earlier,
> since it also covers the lru_gen_*() case..? That may or may not be
> cleaner. It _seems like_ it might produce a bit more consistent logic,
> but just a thought and I could easily be missing details.

Hmm you mean before/in place of the lru_gen_refault call?
workingset_test_recent only covers lru_gen_test_recent,
not the rest of the logic of lru_gen_refault I believe.


>
> > +
> > + rcu_read_lock();
> > +
> > + nr = folio_nr_pages(folio);
> > + memcg = folio_memcg(folio);
> > + pgdat = folio_pgdat(folio);
> > + lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > +
> > + if (!workingset_test_recent(shadow, file, &workingset)) {
> > + /*
> > + * The activation decision for this folio is made at the level
> > + * where the eviction occurred, as that is where the LRU order
> > + * during folio reclaim is being determined.
> > + *
> > + * However, the cgroup that will own the folio is the one that
> > + * is actually experiencing the refault event.
> > + */
>
> IIUC, this comment is explaining the difference between using the
> eviction lru (based on the shadow entry) to calculate recency vs. the
> lru for the current folio to process the refault. If so, perhaps it
> should go right above the workingset_test_recent() call? (Then the if
> braces could go away as well..).

You're right! I think it should go above `nr = folio_nr_pages(folio);` call.
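
Roughly like this (a sketch of the reshuffle, with the if braces dropped as
you suggest):

	/*
	 * The activation decision for this folio is made at the level
	 * where the eviction occurred, as that is where the LRU order
	 * during folio reclaim is being determined.
	 *
	 * However, the cgroup that will own the folio is the one that
	 * is actually experiencing the refault event.
	 */
	nr = folio_nr_pages(folio);
	memcg = folio_memcg(folio);
	pgdat = folio_pgdat(folio);
	lruvec = mem_cgroup_lruvec(memcg, pgdat);

	if (!workingset_test_recent(shadow, file, &workingset))
		goto out;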

>
> > goto out;
> > + }
> >
> > folio_set_active(folio);
> > workingset_age_nonresident(lruvec, nr);
> > @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
> > mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
> > }
> > out:
> > + mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
>
> Why not just leave this up earlier in the function (i.e. before the
> recency check) as it was originally?

Let me double check, but I think this is a relic from the old (and incorrect)
version of workingset code.

Originally, mod_lruvec_state uses the lruvec computed from a variable
(pgdat) that was unpacked from the shadow. So this mod_lruvec_state
has to go after the unpack_shadow call (which has been moved inside
of workingset_test_recent).

This is actually wrong - we actually want the pgdat from the folio. It
has been fixed in a separate patch:

https://lore.kernel.org/all/[email protected]/T/#u

But I didn't update it here. Let me stare at it a bit more to make sure,
and then fix it in the next version. It should not change the behavior,
but it should be cleaner.
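
In other words, roughly (illustrative, modulo the exact variable names):

	/* before that fix: pgdat as unpacked from the shadow entry */
	lruvec = mem_cgroup_lruvec(memcg, pgdat);

	/* after it: the node/memcg of the folio that is actually refaulting */
	lruvec = mem_cgroup_lruvec(folio_memcg(folio), folio_pgdat(folio));
	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);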

>
> Brian
>
> > rcu_read_unlock();
> > }
> >
> > --
> > 2.30.2
> >
>

2023-01-20 19:19:32

by Brian Foster

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Fri, Jan 20, 2023 at 09:29:46AM -0800, Nhat Pham wrote:
> On Fri, Jan 20, 2023 at 6:33 AM Brian Foster <[email protected]> wrote:
> >
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > In preparation for computing recently evicted pages in cachestat,
> > > refactor workingset_refault and lru_gen_refault to expose a helper
> > > function that would test if an evicted page is recently evicted.
> > >
> > > Signed-off-by: Nhat Pham <[email protected]>
> > > ---
> >
> > Hi Nhat,
> >
> > I'm not terribly familiar with the workingset management code, but a few
> > thoughts now that I've stared at it a bit...
> >
> > > include/linux/swap.h | 1 +
> > > mm/workingset.c | 129 ++++++++++++++++++++++++++++++-------------
> > > 2 files changed, 92 insertions(+), 38 deletions(-)
> > >
> > > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > > index a18cf4b7c724..dae6f6f955eb 100644
> > > --- a/include/linux/swap.h
> > > +++ b/include/linux/swap.h
> > > @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
> > > }
> > >
> > > /* linux/mm/workingset.c */
> > > +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
> > > void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
> > > void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
> > > void workingset_refault(struct folio *folio, void *shadow);
> > > diff --git a/mm/workingset.c b/mm/workingset.c
> > > index 79585d55c45d..006482c4e0bd 100644
> > > --- a/mm/workingset.c
> > > +++ b/mm/workingset.c
> > > @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
> > > return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
> > > }
> > >
> > > +/*
> > > + * Test if the folio is recently evicted.
> > > + *
> > > + * As a side effect, also populates the references with
> > > + * values unpacked from the shadow of the evicted folio.
> > > + */
> > > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > > +{
> > > + struct mem_cgroup *eviction_memcg;
> > > + struct lruvec *lruvec;
> > > + struct lru_gen_struct *lrugen;
> > > + unsigned long min_seq;
> > > +
> >
> > Extra whitespace looks a bit funny here.
> >
> > > + int memcgid;
> > > + struct pglist_data *pgdat;
> > > + unsigned long token;
> > > +
> > > + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > + eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > + lrugen = &lruvec->lrugen;
> > > +
> > > + min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> >
> > I think this might be more readable without the double negative.
>
> Hmm indeed. I was just making sure that I did not mess up Yu's
> original logic here (by just wrapping it in parentheses and
> negating the whole thing), but if I understand it correctly it's just
> an equality check. I'll fix it in the next version to make it cleaner.
>
> >
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
>
> In V2, it is actually refactored analogously as well - but we had a discussion
> about it here:
>
> https://lkml.org/lkml/2022/12/5/1321
>

Yeah, sorry.. replied to Johannes.

> >
> > > +}
> > > +
> > > static void lru_gen_refault(struct folio *folio, void *shadow)
> > > {
> > > int hist, tier, refs;
> > > @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
> > > return NULL;
> > > }
> > >
> > > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > > +{
> > > + return true;
> > > +}
> >
> > I guess this is a no-op for !MGLRU but given the context (i.e. special
> > treatment for "recent" refaults), perhaps false is a more sane default?
>
> Hmm, fair point. Let me fix that in the next version.
>
> >
> > > +
> > > static void lru_gen_refault(struct folio *folio, void *shadow)
> > > {
> > > }
> > > @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
> > > folio_test_workingset(folio));
> > > }
> > >
> > > -/**
> > > - * workingset_refault - Evaluate the refault of a previously evicted folio.
> > > - * @folio: The freshly allocated replacement folio.
> > > - * @shadow: Shadow entry of the evicted folio.
> > > +/*
> > > + * Test if the folio is recently evicted by checking if
> > > + * refault distance of shadow exceeds workingset size.
> > > *
> > > - * Calculates and evaluates the refault distance of the previously
> > > - * evicted folio in the context of the node and the memcg whose memory
> > > - * pressure caused the eviction.
> > > + * As a side effect, populate workingset with the value
> > > + * unpacked from shadow.
> > > */
> > > -void workingset_refault(struct folio *folio, void *shadow)
> > > +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
> > > {
> > > - bool file = folio_is_file_lru(folio);
> > > struct mem_cgroup *eviction_memcg;
> > > struct lruvec *eviction_lruvec;
> > > unsigned long refault_distance;
> > > unsigned long workingset_size;
> > > - struct pglist_data *pgdat;
> > > - struct mem_cgroup *memcg;
> > > - unsigned long eviction;
> > > - struct lruvec *lruvec;
> > > unsigned long refault;
> > > - bool workingset;
> > > +
> > > int memcgid;
> > > - long nr;
> > > + struct pglist_data *pgdat;
> > > + unsigned long eviction;
> > >
> > > - if (lru_gen_enabled()) {
> > > - lru_gen_refault(folio, shadow);
> > > - return;
> > > - }
> > > + if (lru_gen_enabled())
> > > + return lru_gen_test_recent(shadow, file, workingset);
> >
> > Hmm.. so this function is only called by workingset_refault() when
> > lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
> > which as noted above duplicates some of the recency logic.
> >
> > I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
> > can just call workingset_test_recent(). That seems reasonable, but makes
> > me wonder...
>
> You're right. It's a bit clunky...
>
> >
> > >
> > > - unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> > > + unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
> > > eviction <<= bucket_order;
> > >
> > > - rcu_read_lock();
> > > /*
> > > * Look up the memcg associated with the stored ID. It might
> > > * have been deleted since the folio's eviction.
> > > @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
> > > */
> > > eviction_memcg = mem_cgroup_from_id(memcgid);
> > > if (!mem_cgroup_disabled() && !eviction_memcg)
> > > - goto out;
> > > + return false;
> > > +
> > > eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > refault = atomic_long_read(&eviction_lruvec->nonresident_age);
> > >
> > > @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
> > > */
> > > refault_distance = (refault - eviction) & EVICTION_MASK;
> > >
> > > - /*
> > > - * The activation decision for this folio is made at the level
> > > - * where the eviction occurred, as that is where the LRU order
> > > - * during folio reclaim is being determined.
> > > - *
> > > - * However, the cgroup that will own the folio is the one that
> > > - * is actually experiencing the refault event.
> > > - */
> > > - nr = folio_nr_pages(folio);
> > > - memcg = folio_memcg(folio);
> > > - pgdat = folio_pgdat(folio);
> > > - lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > -
> > > - mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> > > -
> > > mem_cgroup_flush_stats_delayed();
> > > /*
> > > * Compare the distance to the existing workingset size. We
> > > @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
> > > NR_INACTIVE_ANON);
> > > }
> > > }
> > > - if (refault_distance > workingset_size)
> > > +
> > > + return refault_distance <= workingset_size;
> > > +}
> > > +
> > > +/**
> > > + * workingset_refault - Evaluate the refault of a previously evicted folio.
> > > + * @folio: The freshly allocated replacement folio.
> > > + * @shadow: Shadow entry of the evicted folio.
> > > + *
> > > + * Calculates and evaluates the refault distance of the previously
> > > + * evicted folio in the context of the node and the memcg whose memory
> > > + * pressure caused the eviction.
> > > + */
> > > +void workingset_refault(struct folio *folio, void *shadow)
> > > +{
> > > + bool file = folio_is_file_lru(folio);
> > > + struct pglist_data *pgdat;
> > > + struct mem_cgroup *memcg;
> > > + struct lruvec *lruvec;
> > > + bool workingset;
> > > + long nr;
> > > +
> > > + if (lru_gen_enabled()) {
> > > + lru_gen_refault(folio, shadow);
> > > + return;
> > > + }
> >
> > ... if perhaps this should call workingset_test_recent() a bit earlier,
> > since it also covers the lru_gen_*() case..? That may or may not be
> > cleaner. It _seems like_ it might produce a bit more consistent logic,
> > but just a thought and I could easily be missing details.
>
> Hmm you mean before/in place of the lru_gen_refault call?
> workingset_test_recent only covers lru_gen_test_recent,
> not the rest of the logic of lru_gen_refault I believe.
>

When reading through the code I got the impression that if
workingset_test_recent() -> lru_gen_test_recent() returned false, then
lru_gen_refault() didn't really do anything. I.e., the token/min_seq
check fails and it skips out to the end of the function. That had me
wondering whether workingset_refault() could just skip out of either
mode if workingset_test_recent() returns false (since it calls
lru_gen_test_recent()), but that may not work. Specifically I'm not sure
about that mod_lruvec_state() call (discussed below) for the MGLRU
case. If that call is correct as is for !MGLRU, maybe it could be made
conditional based on lru_gen_enabled()..?

I guess what I was trying to get at here is that you've created this
nice workingset_test_recent() helper to cover both modes for
filemap_cachestat(), so it would be nice if it could be used in that way
in the core workingset code as well. :)

I suppose yet another option could be to skip the lru_gen_test_recent()
check in workingset_test_recent() and instead call it from
lru_gen_refault(), but then I guess you'd have to open code both checks
in the filemap_cachestat() call, which sounds kind of ugly..

>
> >
> > > +
> > > + rcu_read_lock();
> > > +
> > > + nr = folio_nr_pages(folio);
> > > + memcg = folio_memcg(folio);
> > > + pgdat = folio_pgdat(folio);
> > > + lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > +
> > > + if (!workingset_test_recent(shadow, file, &workingset)) {
> > > + /*
> > > + * The activation decision for this folio is made at the level
> > > + * where the eviction occurred, as that is where the LRU order
> > > + * during folio reclaim is being determined.
> > > + *
> > > + * However, the cgroup that will own the folio is the one that
> > > + * is actually experiencing the refault event.
> > > + */
> >
> > IIUC, this comment is explaining the difference between using the
> > eviction lru (based on the shadow entry) to calculate recency vs. the
> > lru for the current folio to process the refault. If so, perhaps it
> > should go right above the workingset_test_recent() call? (Then the if
> > braces could go away as well..).
>
> You're right! I think it should go above `nr = folio_nr_pages(folio);` call.
>

Sounds good.

> >
> > > goto out;
> > > + }
> > >
> > > folio_set_active(folio);
> > > workingset_age_nonresident(lruvec, nr);
> > > @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
> > > mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
> > > }
> > > out:
> > > + mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> >
> > Why not just leave this up earlier in the function (i.e. before the
> > recency check) as it was originally?
>
> Let me double check, but I think this is a relic from the old (and incorrect)
> version of workingset code.
>
> Originally, mod_lruvec_state uses the lruvec computed from a variable
> (pgdat) that was unpacked from the shadow. So this mod_lruvec_state
> has to go after the unpack_shadow call (which has been moved inside
> of workingset_test_recent).
>
> This is actually wrong - we actually want the pgdat from the folio. It
> has been fixed in a separate patch:
>
> https://lore.kernel.org/all/[email protected]/T/#u
>

Yep, I had applied this series on top of that one..

> But I didn't update it here. Let me stare at it a bit more to make sure,
> and then fix it in the next version. It should not change the behavior,
> but it should be cleaner.
>

Sounds good. FWIW it looked like the logic hadn't changed with this
series so I just assumed it was correct, just possibly moved around
unnecessarily. I'll try to make more sense of it in the next version.
Thanks.

Brian

> >
> > Brian
> >
> > > rcu_read_unlock();
> > > }
> > >
> > > --
> > > 2.30.2
> > >
> >
>

2023-01-20 20:08:16

by Brian Foster

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Fri, Jan 20, 2023 at 11:27:12AM -0500, Johannes Weiner wrote:
> On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > + int memcgid;
> > > + struct pglist_data *pgdat;
> > > + unsigned long token;
> > > +
> > > + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > + eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > + lrugen = &lruvec->lrugen;
> > > +
> > > + min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> >
> > I think this might be more readable without the double negative.
> >
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
>
> The initial version was refactored. Yu explicitly requested it be
> duplicated [1] to cut down on some boiler plate.
>

Ah, sorry for missing the previous discussion. TBH I wasn't terribly
comfortable reviewing this one until I had made enough passes at the
second patch..

> I have to agree with Brian on this one, though. The factored version
> is better for maintenance than duplicating the core logic here. Even
> if it ends up a bit more boiler plate - it's harder to screw that up,
> and easier to catch at compile time, than the duplicates diverging.
>

It seems more elegant to me, FWIW. Glad I'm not totally off the rails at
least. ;) But I'll defer to those who know the code better and the
author, so that's just my .02. I don't want to cause this to go around
in circles..

Brian

> [1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/
>

2023-01-21 04:05:59

by Yu Zhao

Subject: Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check

On Fri, Jan 20, 2023 at 9:26 AM Johannes Weiner <[email protected]> wrote:
>
> On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > + int memcgid;
> > > + struct pglist_data *pgdat;
> > > + unsigned long token;
> > > +
> > > + unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > + eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > + lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > + lrugen = &lruvec->lrugen;
> > > +
> > > + min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > + return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> >
> > I think this might be more readable without the double negative.
> >
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
>
> The initial version was refactored. Yu explicitly requested it be
> duplicated [1] to cut down on some boiler plate.
>
> I have to agree with Brian on this one, though. The factored version
> is better for maintenance than duplicating the core logic here. Even
> if it ends up a bit more boiler plate - it's harder to screw that up,
> and easier to catch at compile time, than the duplicates diverging.
>
> [1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/

No objections to either way. I'll take a look at the final version and
we are good as long as it works as intended.