2024-03-04 02:30:42

by Byungchul Park

Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Sorry for the noise. I should have applied v5's change in v4.

Changes from v4:
1. Make other scans start with may_cache_trim_mode = 1.

Changes from v3:
1. Update the test result in the commit message with v4.
2. Retry the whole priority loop with cache_trim_mode off again,
rather than forcing the mode off at the highest priority,
when the mode doesn't work. (feedback from Johannes Weiner)

Changes from v2:
1. Change the condition to stop cache_trim_mode.

From - Stop it if it's at high scan priorities, 0 or 1.
To - Stop it if it's at high scan priorities, 0 or 1, and
the mode didn't work in the previous turn.

(feedback from Huang Ying)

2. Change the test result in the commit message after testing
with the new logic.

Changes from v1:
1. Add a comment in the code describing why this change is
necessary, and rewrite the commit message with how to reproduce
the issue and what the result is, using vmstat. (feedback from
Andrew Morton and Yu Zhao)
2. Change the condition to avoid cache_trim_mode from
'sc->priority != 1' to 'sc->priority > 1' to reflect cases
where the priority goes to zero all the way. (feedback from
Yu Zhao)
--->8---
From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
From: Byungchul Park <[email protected]>
Date: Mon, 4 Mar 2024 11:24:40 +0900
Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

With cache_trim_mode on, the reclaim logic doesn't bother reclaiming
anon pages. However, the mode should be used more carefully because it
prevents anon pages from being reclaimed even when there is a huge
number of cold anon pages that should be reclaimed. Even worse, that
lets kswapd_failures reach MAX_RECLAIM_RETRIES, which stops kswapd
from functioning until direct reclaim eventually works and resumes
kswapd.

So kswapd needs to retry its scan priority loop with cache_trim_mode
off again if the mode doesn't work for reclaim.

The problematic behavior can be reproduced by:

CONFIG_NUMA_BALANCING enabled
sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
numa node0 (8GB local memory, 16 CPUs)
numa node1 (8GB slow tier memory, no CPUs)

Sequence:

1) echo 3 > /proc/sys/vm/drop_caches
2) To emulate a system whose local DRAM is full of cold memory, run
the following dummy program and never touch the region (a complete
sketch is given right after this sequence):

mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

3) Run any memory-intensive workload, e.g. XSBench.
4) Check if numa balancing is working, i.e. promotion/demotion.
5) Repeat 1) ~ 4) until numa balancing stops.
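For reference, a complete, compilable sketch of the dummy program in
step 2) could look like the following; the error handling and the
pause() call are assumptions added here for completeness and are not
part of the original snippet:

#define _GNU_SOURCE             /* for MAP_ANONYMOUS and MAP_POPULATE */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t len = 8UL * 1024 * 1024 * 1024;  /* 8 GiB of anon memory */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        pause();        /* keep the mapping alive and never touch it again */
        return 0;
}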

With this, you can see that promotion/demotion stop working because
kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.

The interesting vmstat deltas, before vs. after the change, are:

+-----------------------+------------+------------+
| interesting vmstat    |     before |      after |
+-----------------------+------------+------------+
| nr_inactive_anon      |     321935 |    1646193 |
| nr_active_anon        |    1780700 |     456388 |
| nr_inactive_file      |      30425 |      27836 |
| nr_active_file        |      14961 |       1217 |
| pgpromote_success     |        356 |    1310120 |
| pgpromote_candidate   |   21953245 |    1736872 |
| pgactivate            |    1844523 |    3292443 |
| pgdeactivate          |      50634 |    1526701 |
| pgfault               |   31100294 |    6715375 |
| pgdemote_kswapd       |      30856 |    1954199 |
| pgscan_kswapd         |    1861981 |    7100099 |
| pgscan_anon           |    1822930 |    7061135 |
| pgscan_file           |      39051 |      38964 |
| pgsteal_anon          |        386 |    1925214 |
| pgsteal_file          |      30470 |      28985 |
| pageoutrun            |         30 |        500 |
| numa_hint_faults      |   27418279 |    3090773 |
| numa_pages_migrated   |        356 |    1310120 |
+-----------------------+------------+------------+

Signed-off-by: Byungchul Park <[email protected]>
---
mm/vmscan.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..77948b0f8b5b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -108,6 +108,9 @@ struct scan_control {
/* Can folios be swapped as part of reclaim? */
unsigned int may_swap:1;

+ /* Can cache_trim_mode be turned on as part of reclaim? */
+ unsigned int may_cache_trim_mode:1;
+
/* Proactive reclaim invoked by userspace through memory.reclaim */
unsigned int proactive:1;

@@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
.may_unmap = 1,
+ .may_cache_trim_mode = 1,
};
struct reclaim_stat stat;
unsigned int nr_reclaimed;
@@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
.may_writepage = 1,
.may_unmap = 1,
.may_swap = 1,
+ .may_cache_trim_mode = 1,
.no_demotion = 1,
};

@@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
* anonymous pages.
*/
file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
- if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+ if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
+ sc->may_cache_trim_mode)
sc->cache_trim_mode = 1;
else
sc->cache_trim_mode = 0;
@@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
.may_writepage = true,
.may_unmap = true,
.may_swap = true,
+ .may_cache_trim_mode = 1,
.reclaim_idx = MAX_NR_ZONES - 1,
.gfp_mask = GFP_KERNEL,
};
@@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = 1,
+ .may_cache_trim_mode = 1,
};

/*
@@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
.may_unmap = 1,
.reclaim_idx = MAX_NR_ZONES - 1,
.may_swap = !noswap,
+ .may_cache_trim_mode = 1,
};

WARN_ON_ONCE(!current->reclaim_state);
@@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
+ .may_cache_trim_mode = 1,
.proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
};
/*
@@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
.gfp_mask = GFP_KERNEL,
.order = order,
.may_unmap = 1,
+ .may_cache_trim_mode = 1,
};

set_task_reclaim_state(current, &sc.reclaim_state);
@@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
sc.priority--;
} while (sc.priority >= 1);

- if (!sc.nr_reclaimed)
+ if (!sc.nr_reclaimed) {
+ if (sc.may_cache_trim_mode) {
+ sc.may_cache_trim_mode = 0;
+ goto restart;
+ }
+
pgdat->kswapd_failures++;
+ }

out:
clear_reclaim_active(pgdat, highest_zoneidx);
@@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
.may_writepage = 1,
.may_unmap = 1,
.may_swap = 1,
+ .may_cache_trim_mode = 1,
.hibernation_mode = 1,
};
struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
@@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
.may_swap = 1,
+ .may_cache_trim_mode = 1,
.reclaim_idx = gfp_zone(gfp_mask),
};
unsigned long pflags;
--
2.17.1



2024-03-04 02:55:19

by Huang, Ying

Subject: Re: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> Sorry for noise. I should've applied v5's change in v4.
>
> Changes from v4:
> 1. Make other scans start with may_cache_trim_mode = 1.
>
> Changes from v3:
> 1. Update the test result in the commit message with v4.
> 2. Retry the whole priority loop with cache_trim_mode off again,
> rather than forcing the mode off at the highest priority,
> when the mode doesn't work. (feedbacked by Johannes Weiner)
>
> Changes from v2:
> 1. Change the condition to stop cache_trim_mode.
>
> From - Stop it if it's at high scan priorities, 0 or 1.
> To - Stop it if it's at high scan priorities, 0 or 1, and
> the mode didn't work in the previous turn.
>
> (feedbacked by Huang Ying)
>
> 2. Change the test result in the commit message after testing
> with the new logic.
>
> Changes from v1:
> 1. Add a comment describing why this change is necessary in code
> and rewrite the commit message with how to reproduce and what
> the result is using vmstat. (feedbacked by Andrew Morton and
> Yu Zhao)
> 2. Change the condition to avoid cache_trim_mode from
> 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> where the priority goes to zero all the way. (feedbacked by
> Yu Zhao)
> --->8---
> From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
> From: Byungchul Park <[email protected]>
> Date: Mon, 4 Mar 2024 11:24:40 +0900
> Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>
> With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> pages. However, it should be more careful to use the mode because it's
> going to prevent anon pages from being reclaimed even if there are a
> huge number of anon pages that are cold and should be reclaimed. Even
> worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> stopping kswapd from functioning until direct reclaim eventually works
> to resume kswapd.
>
> So kswapd needs to retry its scan priority loop with cache_trim_mode
> off again if the mode doesn't work for reclaim.
>
> The problematic behavior can be reproduced by:
>
> CONFIG_NUMA_BALANCING enabled
> sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> numa node0 (8GB local memory, 16 CPUs)
> numa node1 (8GB slow tier memory, no CPUs)
>
> Sequence:
>
> 1) echo 3 > /proc/sys/vm/drop_caches
> 2) To emulate the system with full of cold memory in local DRAM, run
> the following dummy program and never touch the region:
>
> mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>
> 3) Run any memory intensive work e.g. XSBench.
> 4) Check if numa balancing is working e.i. promotion/demotion.
> 5) Iterate 1) ~ 4) until numa balancing stops.
>
> With this, you could see that promotion/demotion are not working because
> kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>
> Interesting vmstat delta's differences between before and after are like:
>
> +-----------------------+-------------------------------+
> | interesting vmstat | before | after |
> +-----------------------+-------------------------------+
> | nr_inactive_anon | 321935 | 1646193 |
> | nr_active_anon | 1780700 | 456388 |
> | nr_inactive_file | 30425 | 27836 |
> | nr_active_file | 14961 | 1217 |
> | pgpromote_success | 356 | 1310120 |
> | pgpromote_candidate | 21953245 | 1736872 |
> | pgactivate | 1844523 | 3292443 |
> | pgdeactivate | 50634 | 1526701 |
> | pgfault | 31100294 | 6715375 |
> | pgdemote_kswapd | 30856 | 1954199 |
> | pgscan_kswapd | 1861981 | 7100099 |
> | pgscan_anon | 1822930 | 7061135 |
> | pgscan_file | 39051 | 38964 |
> | pgsteal_anon | 386 | 1925214 |
> | pgsteal_file | 30470 | 28985 |
> | pageoutrun | 30 | 500 |
> | numa_hint_faults | 27418279 | 3090773 |
> | numa_pages_migrated | 356 | 1310120 |
> +-----------------------+-------------------------------+
>
> Signed-off-by: Byungchul Park <[email protected]>
> ---
> mm/vmscan.c | 23 +++++++++++++++++++++--
> 1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bba207f41b14..77948b0f8b5b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -108,6 +108,9 @@ struct scan_control {
> /* Can folios be swapped as part of reclaim? */
> unsigned int may_swap:1;
>
> + /* Can cache_trim_mode be turned on as part of reclaim? */
> + unsigned int may_cache_trim_mode:1;
> +

Although it's generally not good to use negative logic, I think it's
better to name the flag something like "no_cache_trim_mode" to make it
easier to initialize the flag to its default value ("0").
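As a toy user-space illustration of that point (not kernel code; the
struct and field names below are made up for the example), a
negative-logic flag lets the common case rely on zero-initialization:

#include <stdbool.h>
#include <stdio.h>

struct scan_control_pos { bool may_cache_trim_mode; }; /* must be set to 1 at every call site */
struct scan_control_neg { bool no_cache_trim_mode; };  /* 0 by default already means "allowed" */

int main(void)
{
        struct scan_control_pos a = { .may_cache_trim_mode = 1 }; /* extra initializer everywhere */
        struct scan_control_neg b = { 0 };                        /* default value is the common case */

        printf("trim allowed: %d %d\n", a.may_cache_trim_mode, !b.no_cache_trim_mode);
        return 0;
}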

> /* Proactive reclaim invoked by userspace through memory.reclaim */
> unsigned int proactive:1;
>
> @@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> struct scan_control sc = {
> .gfp_mask = GFP_KERNEL,
> .may_unmap = 1,
> + .may_cache_trim_mode = 1,
> };
> struct reclaim_stat stat;
> unsigned int nr_reclaimed;
> @@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
> .may_writepage = 1,
> .may_unmap = 1,
> .may_swap = 1,
> + .may_cache_trim_mode = 1,
> .no_demotion = 1,
> };
>
> @@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> * anonymous pages.
> */
> file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> + sc->may_cache_trim_mode)
> sc->cache_trim_mode = 1;
> else
> sc->cache_trim_mode = 0;
> @@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
> .may_writepage = true,
> .may_unmap = true,
> .may_swap = true,
> + .may_cache_trim_mode = 1,
> .reclaim_idx = MAX_NR_ZONES - 1,
> .gfp_mask = GFP_KERNEL,
> };
> @@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> .may_writepage = !laptop_mode,
> .may_unmap = 1,
> .may_swap = 1,
> + .may_cache_trim_mode = 1,
> };
>
> /*
> @@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
> .may_unmap = 1,
> .reclaim_idx = MAX_NR_ZONES - 1,
> .may_swap = !noswap,
> + .may_cache_trim_mode = 1,
> };
>
> WARN_ON_ONCE(!current->reclaim_state);
> @@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> .may_writepage = !laptop_mode,
> .may_unmap = 1,
> .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
> + .may_cache_trim_mode = 1,
> .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
> };
> /*
> @@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> .gfp_mask = GFP_KERNEL,
> .order = order,
> .may_unmap = 1,
> + .may_cache_trim_mode = 1,
> };
>
> set_task_reclaim_state(current, &sc.reclaim_state);
> @@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> sc.priority--;
> } while (sc.priority >= 1);
>
> - if (!sc.nr_reclaimed)
> + if (!sc.nr_reclaimed) {
> + if (sc.may_cache_trim_mode) {

sc.may_cache_trim_mode && cache_trim_mode ?

> + sc.may_cache_trim_mode = 0;
> + goto restart;
> + }
> +
> pgdat->kswapd_failures++;
> + }
>
> out:
> clear_reclaim_active(pgdat, highest_zoneidx);
> @@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
> .may_writepage = 1,
> .may_unmap = 1,
> .may_swap = 1,
> + .may_cache_trim_mode = 1,
> .hibernation_mode = 1,
> };
> struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> @@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> .may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
> .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
> .may_swap = 1,
> + .may_cache_trim_mode = 1,
> .reclaim_idx = gfp_zone(gfp_mask),
> };
> unsigned long pflags;

--
Best Regards,
Huang, Ying

2024-03-04 03:04:34

by Byungchul Park

Subject: Re: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Mon, Mar 04, 2024 at 10:53:11AM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > Sorry for noise. I should've applied v5's change in v4.
> >
> > Changes from v4:
> > 1. Make other scans start with may_cache_trim_mode = 1.
> >
> > Changes from v3:
> > 1. Update the test result in the commit message with v4.
> > 2. Retry the whole priority loop with cache_trim_mode off again,
> > rather than forcing the mode off at the highest priority,
> > when the mode doesn't work. (feedbacked by Johannes Weiner)
> >
> > Changes from v2:
> > 1. Change the condition to stop cache_trim_mode.
> >
> > From - Stop it if it's at high scan priorities, 0 or 1.
> > To - Stop it if it's at high scan priorities, 0 or 1, and
> > the mode didn't work in the previous turn.
> >
> > (feedbacked by Huang Ying)
> >
> > 2. Change the test result in the commit message after testing
> > with the new logic.
> >
> > Changes from v1:
> > 1. Add a comment describing why this change is necessary in code
> > and rewrite the commit message with how to reproduce and what
> > the result is using vmstat. (feedbacked by Andrew Morton and
> > Yu Zhao)
> > 2. Change the condition to avoid cache_trim_mode from
> > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> > where the priority goes to zero all the way. (feedbacked by
> > Yu Zhao)
> > --->8---
> > From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
> > From: Byungchul Park <[email protected]>
> > Date: Mon, 4 Mar 2024 11:24:40 +0900
> > Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> >
> > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> > pages. However, it should be more careful to use the mode because it's
> > going to prevent anon pages from being reclaimed even if there are a
> > huge number of anon pages that are cold and should be reclaimed. Even
> > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> > stopping kswapd from functioning until direct reclaim eventually works
> > to resume kswapd.
> >
> > So kswapd needs to retry its scan priority loop with cache_trim_mode
> > off again if the mode doesn't work for reclaim.
> >
> > The problematic behavior can be reproduced by:
> >
> > CONFIG_NUMA_BALANCING enabled
> > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > numa node0 (8GB local memory, 16 CPUs)
> > numa node1 (8GB slow tier memory, no CPUs)
> >
> > Sequence:
> >
> > 1) echo 3 > /proc/sys/vm/drop_caches
> > 2) To emulate the system with full of cold memory in local DRAM, run
> > the following dummy program and never touch the region:
> >
> > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> >
> > 3) Run any memory intensive work e.g. XSBench.
> > 4) Check if numa balancing is working e.i. promotion/demotion.
> > 5) Iterate 1) ~ 4) until numa balancing stops.
> >
> > With this, you could see that promotion/demotion are not working because
> > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
> >
> > Interesting vmstat delta's differences between before and after are like:
> >
> > +-----------------------+-------------------------------+
> > | interesting vmstat | before | after |
> > +-----------------------+-------------------------------+
> > | nr_inactive_anon | 321935 | 1646193 |
> > | nr_active_anon | 1780700 | 456388 |
> > | nr_inactive_file | 30425 | 27836 |
> > | nr_active_file | 14961 | 1217 |
> > | pgpromote_success | 356 | 1310120 |
> > | pgpromote_candidate | 21953245 | 1736872 |
> > | pgactivate | 1844523 | 3292443 |
> > | pgdeactivate | 50634 | 1526701 |
> > | pgfault | 31100294 | 6715375 |
> > | pgdemote_kswapd | 30856 | 1954199 |
> > | pgscan_kswapd | 1861981 | 7100099 |
> > | pgscan_anon | 1822930 | 7061135 |
> > | pgscan_file | 39051 | 38964 |
> > | pgsteal_anon | 386 | 1925214 |
> > | pgsteal_file | 30470 | 28985 |
> > | pageoutrun | 30 | 500 |
> > | numa_hint_faults | 27418279 | 3090773 |
> > | numa_pages_migrated | 356 | 1310120 |
> > +-----------------------+-------------------------------+
> >
> > Signed-off-by: Byungchul Park <[email protected]>
> > ---
> > mm/vmscan.c | 23 +++++++++++++++++++++--
> > 1 file changed, 21 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bba207f41b14..77948b0f8b5b 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -108,6 +108,9 @@ struct scan_control {
> > /* Can folios be swapped as part of reclaim? */
> > unsigned int may_swap:1;
> >
> > + /* Can cache_trim_mode be turned on as part of reclaim? */
> > + unsigned int may_cache_trim_mode:1;
> > +
>
> Although it's generally not good to use negative logic, I think that
> it's better to name the flag as something like "no_cache_trim_mode" to
> make it easier to initialize the flag to its default value ("0").

No preference for me. But don't you think it's better to use another
of the may_* flags in scan_control, as Johannes Weiner suggested?

> > /* Proactive reclaim invoked by userspace through memory.reclaim */
> > unsigned int proactive:1;
> >
> > @@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> > struct scan_control sc = {
> > .gfp_mask = GFP_KERNEL,
> > .may_unmap = 1,
> > + .may_cache_trim_mode = 1,
> > };
> > struct reclaim_stat stat;
> > unsigned int nr_reclaimed;
> > @@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
> > .may_writepage = 1,
> > .may_unmap = 1,
> > .may_swap = 1,
> > + .may_cache_trim_mode = 1,
> > .no_demotion = 1,
> > };
> >
> > @@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> > * anonymous pages.
> > */
> > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> > + sc->may_cache_trim_mode)
> > sc->cache_trim_mode = 1;
> > else
> > sc->cache_trim_mode = 0;
> > @@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
> > .may_writepage = true,
> > .may_unmap = true,
> > .may_swap = true,
> > + .may_cache_trim_mode = 1,
> > .reclaim_idx = MAX_NR_ZONES - 1,
> > .gfp_mask = GFP_KERNEL,
> > };
> > @@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> > .may_writepage = !laptop_mode,
> > .may_unmap = 1,
> > .may_swap = 1,
> > + .may_cache_trim_mode = 1,
> > };
> >
> > /*
> > @@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
> > .may_unmap = 1,
> > .reclaim_idx = MAX_NR_ZONES - 1,
> > .may_swap = !noswap,
> > + .may_cache_trim_mode = 1,
> > };
> >
> > WARN_ON_ONCE(!current->reclaim_state);
> > @@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> > .may_writepage = !laptop_mode,
> > .may_unmap = 1,
> > .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
> > + .may_cache_trim_mode = 1,
> > .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
> > };
> > /*
> > @@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > .gfp_mask = GFP_KERNEL,
> > .order = order,
> > .may_unmap = 1,
> > + .may_cache_trim_mode = 1,
> > };
> >
> > set_task_reclaim_state(current, &sc.reclaim_state);
> > @@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > sc.priority--;
> > } while (sc.priority >= 1);
> >
> > - if (!sc.nr_reclaimed)
> > + if (!sc.nr_reclaimed) {
> > + if (sc.may_cache_trim_mode) {
>
> sc.may_cache_trim_mode && cache_trim_mode ?

I don't think so. cache_trim_mode can switch on every
prepare_scan_control() call, like:

if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
sc->may_cache_trim_mode)
sc->cache_trim_mode = 1;
else
sc->cache_trim_mode = 0;

So referring to its last value is not a good idea.
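To illustrate the point, here is a toy user-space model (not the
kernel code; the values are arbitrary assumptions) showing that
cache_trim_mode is recomputed on every pass of the priority loop, so
the value left after the loop only reflects the last pass:

#include <stdbool.h>
#include <stdio.h>

struct scan_control { int priority; bool cache_trim_mode; };

static void prepare_scan_control(struct scan_control *sc, long inactive_file)
{
        /* mirrors only the "file >> sc->priority" clause of the kernel check */
        sc->cache_trim_mode = (inactive_file >> sc->priority) != 0;
}

int main(void)
{
        struct scan_control sc = { .priority = 12, .cache_trim_mode = false };
        long inactive_file = 100;       /* assumed: a small inactive file LRU */

        for (; sc.priority >= 1; sc.priority--) {
                prepare_scan_control(&sc, inactive_file);
                printf("priority=%2d cache_trim_mode=%d\n",
                       sc.priority, sc.cache_trim_mode);
        }
        /* whatever is left in sc.cache_trim_mode describes only priority 1 */
        return 0;
}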

Byungchul

> > + sc.may_cache_trim_mode = 0;
> > + goto restart;
> > + }
> > +
> > pgdat->kswapd_failures++;
> > + }
> >
> > out:
> > clear_reclaim_active(pgdat, highest_zoneidx);
> > @@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
> > .may_writepage = 1,
> > .may_unmap = 1,
> > .may_swap = 1,
> > + .may_cache_trim_mode = 1,
> > .hibernation_mode = 1,
> > };
> > struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> > @@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> > .may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
> > .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
> > .may_swap = 1,
> > + .may_cache_trim_mode = 1,
> > .reclaim_idx = gfp_zone(gfp_mask),
> > };
> > unsigned long pflags;
>
> --
> Best Regards,
> Huang, Ying

2024-03-04 03:31:15

by Huang, Ying

Subject: Re: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> On Mon, Mar 04, 2024 at 10:53:11AM +0800, Huang, Ying wrote:
>> Byungchul Park <[email protected]> writes:
>>
>> > Sorry for noise. I should've applied v5's change in v4.
>> >
>> > Changes from v4:
>> > 1. Make other scans start with may_cache_trim_mode = 1.
>> >
>> > Changes from v3:
>> > 1. Update the test result in the commit message with v4.
>> > 2. Retry the whole priority loop with cache_trim_mode off again,
>> > rather than forcing the mode off at the highest priority,
>> > when the mode doesn't work. (feedbacked by Johannes Weiner)
>> >
>> > Changes from v2:
>> > 1. Change the condition to stop cache_trim_mode.
>> >
>> > From - Stop it if it's at high scan priorities, 0 or 1.
>> > To - Stop it if it's at high scan priorities, 0 or 1, and
>> > the mode didn't work in the previous turn.
>> >
>> > (feedbacked by Huang Ying)
>> >
>> > 2. Change the test result in the commit message after testing
>> > with the new logic.
>> >
>> > Changes from v1:
>> > 1. Add a comment describing why this change is necessary in code
>> > and rewrite the commit message with how to reproduce and what
>> > the result is using vmstat. (feedbacked by Andrew Morton and
>> > Yu Zhao)
>> > 2. Change the condition to avoid cache_trim_mode from
>> > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
>> > where the priority goes to zero all the way. (feedbacked by
>> > Yu Zhao)
>> > --->8---
>> > From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
>> > From: Byungchul Park <[email protected]>
>> > Date: Mon, 4 Mar 2024 11:24:40 +0900
>> > Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>> >
>> > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
>> > pages. However, it should be more careful to use the mode because it's
>> > going to prevent anon pages from being reclaimed even if there are a
>> > huge number of anon pages that are cold and should be reclaimed. Even
>> > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
>> > stopping kswapd from functioning until direct reclaim eventually works
>> > to resume kswapd.
>> >
>> > So kswapd needs to retry its scan priority loop with cache_trim_mode
>> > off again if the mode doesn't work for reclaim.
>> >
>> > The problematic behavior can be reproduced by:
>> >
>> > CONFIG_NUMA_BALANCING enabled
>> > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> > numa node0 (8GB local memory, 16 CPUs)
>> > numa node1 (8GB slow tier memory, no CPUs)
>> >
>> > Sequence:
>> >
>> > 1) echo 3 > /proc/sys/vm/drop_caches
>> > 2) To emulate the system with full of cold memory in local DRAM, run
>> > the following dummy program and never touch the region:
>> >
>> > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> >
>> > 3) Run any memory intensive work e.g. XSBench.
>> > 4) Check if numa balancing is working e.i. promotion/demotion.
>> > 5) Iterate 1) ~ 4) until numa balancing stops.
>> >
>> > With this, you could see that promotion/demotion are not working because
>> > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>> >
>> > Interesting vmstat delta's differences between before and after are like:
>> >
>> > +-----------------------+-------------------------------+
>> > | interesting vmstat | before | after |
>> > +-----------------------+-------------------------------+
>> > | nr_inactive_anon | 321935 | 1646193 |
>> > | nr_active_anon | 1780700 | 456388 |
>> > | nr_inactive_file | 30425 | 27836 |
>> > | nr_active_file | 14961 | 1217 |
>> > | pgpromote_success | 356 | 1310120 |
>> > | pgpromote_candidate | 21953245 | 1736872 |
>> > | pgactivate | 1844523 | 3292443 |
>> > | pgdeactivate | 50634 | 1526701 |
>> > | pgfault | 31100294 | 6715375 |
>> > | pgdemote_kswapd | 30856 | 1954199 |
>> > | pgscan_kswapd | 1861981 | 7100099 |
>> > | pgscan_anon | 1822930 | 7061135 |
>> > | pgscan_file | 39051 | 38964 |
>> > | pgsteal_anon | 386 | 1925214 |
>> > | pgsteal_file | 30470 | 28985 |
>> > | pageoutrun | 30 | 500 |
>> > | numa_hint_faults | 27418279 | 3090773 |
>> > | numa_pages_migrated | 356 | 1310120 |
>> > +-----------------------+-------------------------------+
>> >
>> > Signed-off-by: Byungchul Park <[email protected]>
>> > ---
>> > mm/vmscan.c | 23 +++++++++++++++++++++--
>> > 1 file changed, 21 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> > index bba207f41b14..77948b0f8b5b 100644
>> > --- a/mm/vmscan.c
>> > +++ b/mm/vmscan.c
>> > @@ -108,6 +108,9 @@ struct scan_control {
>> > /* Can folios be swapped as part of reclaim? */
>> > unsigned int may_swap:1;
>> >
>> > + /* Can cache_trim_mode be turned on as part of reclaim? */
>> > + unsigned int may_cache_trim_mode:1;
>> > +
>>
>> Although it's generally not good to use negative logic, I think that
>> it's better to name the flag as something like "no_cache_trim_mode" to
>> make it easier to initialize the flag to its default value ("0").
>
> No preference to me. But don't think it's better to use another of may_*
> in scan_control as Johannes Weiner suggested?
>
>> > /* Proactive reclaim invoked by userspace through memory.reclaim */
>> > unsigned int proactive:1;
>> >
>> > @@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>> > struct scan_control sc = {
>> > .gfp_mask = GFP_KERNEL,
>> > .may_unmap = 1,
>> > + .may_cache_trim_mode = 1,
>> > };
>> > struct reclaim_stat stat;
>> > unsigned int nr_reclaimed;
>> > @@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
>> > .may_writepage = 1,
>> > .may_unmap = 1,
>> > .may_swap = 1,
>> > + .may_cache_trim_mode = 1,
>> > .no_demotion = 1,
>> > };
>> >
>> > @@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> > * anonymous pages.
>> > */
>> > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
>> > + sc->may_cache_trim_mode)
>> > sc->cache_trim_mode = 1;
>> > else
>> > sc->cache_trim_mode = 0;
>> > @@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
>> > .may_writepage = true,
>> > .may_unmap = true,
>> > .may_swap = true,
>> > + .may_cache_trim_mode = 1,
>> > .reclaim_idx = MAX_NR_ZONES - 1,
>> > .gfp_mask = GFP_KERNEL,
>> > };
>> > @@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>> > .may_writepage = !laptop_mode,
>> > .may_unmap = 1,
>> > .may_swap = 1,
>> > + .may_cache_trim_mode = 1,
>> > };
>> >
>> > /*
>> > @@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
>> > .may_unmap = 1,
>> > .reclaim_idx = MAX_NR_ZONES - 1,
>> > .may_swap = !noswap,
>> > + .may_cache_trim_mode = 1,
>> > };
>> >
>> > WARN_ON_ONCE(!current->reclaim_state);
>> > @@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>> > .may_writepage = !laptop_mode,
>> > .may_unmap = 1,
>> > .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
>> > + .may_cache_trim_mode = 1,
>> > .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
>> > };
>> > /*
>> > @@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> > .gfp_mask = GFP_KERNEL,
>> > .order = order,
>> > .may_unmap = 1,
>> > + .may_cache_trim_mode = 1,
>> > };
>> >
>> > set_task_reclaim_state(current, &sc.reclaim_state);
>> > @@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> > sc.priority--;
>> > } while (sc.priority >= 1);
>> >
>> > - if (!sc.nr_reclaimed)
>> > + if (!sc.nr_reclaimed) {
>> > + if (sc.may_cache_trim_mode) {
>>
>> sc.may_cache_trim_mode && cache_trim_mode ?
>
> I don't think so. cache_trim_mode has a chance to switch every
> prepare_scan_control() like:
>
> if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> sc->may_cache_trim_mode)
> sc->cache_trim_mode = 1;
> else
> sc->cache_trim_mode = 0;
>
> So referring to the last value is not a good idea.

We should only restart without cache_trim_mode if cache_trim_mode
caused the issue. If it wasn't enabled at the highest priority (lowest
value), disabling cache_trim_mode doesn't help.

And please take care of the other "break"s in the loop, for example
when kthread_should_stop(), etc.
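As a toy user-space model of the retry condition being asked for here
(assumptions only; the v6 patch later in the thread implements the
real thing in balance_pgdat()): retry with the mode forced off only
when the loop ran all the way down, reclaimed nothing, and the mode
was actually on during a failed pass:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        bool no_cache_trim_mode = false;
        bool cache_trim_mode_failed;
        long nr_reclaimed;
        int priority;

restart:
        cache_trim_mode_failed = false;
        nr_reclaimed = 0;
        for (priority = 12; priority >= 1; priority--) {
                /* recomputed each pass, as in prepare_scan_control() */
                bool cache_trim_mode = !no_cache_trim_mode;

                if (cache_trim_mode)
                        cache_trim_mode_failed = true;  /* pass reclaimed nothing with the mode on */
                else
                        nr_reclaimed += 100;            /* pretend reclaim works once the mode is off */
                /* an early "break" (e.g. on kthread_should_stop()) would leave
                 * priority >= 1 and must not trigger the retry below */
        }
        if (!nr_reclaimed && priority < 1 &&
            !no_cache_trim_mode && cache_trim_mode_failed) {
                no_cache_trim_mode = true;
                goto restart;
        }
        printf("reclaimed=%ld retried_without_trim_mode=%d\n",
               nr_reclaimed, no_cache_trim_mode);
        return 0;
}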

--
Best Regards,
Huang, Ying

> Byungchul
>
>> > + sc.may_cache_trim_mode = 0;
>> > + goto restart;
>> > + }
>> > +
>> > pgdat->kswapd_failures++;
>> > + }
>> >
>> > out:
>> > clear_reclaim_active(pgdat, highest_zoneidx);
>> > @@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
>> > .may_writepage = 1,
>> > .may_unmap = 1,
>> > .may_swap = 1,
>> > + .may_cache_trim_mode = 1,
>> > .hibernation_mode = 1,
>> > };
>> > struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
>> > @@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
>> > .may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
>> > .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
>> > .may_swap = 1,
>> > + .may_cache_trim_mode = 1,
>> > .reclaim_idx = gfp_zone(gfp_mask),
>> > };
>> > unsigned long pflags;
>>
>> --
>> Best Regards,
>> Huang, Ying

2024-03-04 03:36:43

by Byungchul Park

Subject: Re: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Mon, Mar 04, 2024 at 11:29:06AM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > On Mon, Mar 04, 2024 at 10:53:11AM +0800, Huang, Ying wrote:
> >> Byungchul Park <[email protected]> writes:
> >>
> >> > Sorry for noise. I should've applied v5's change in v4.
> >> >
> >> > Changes from v4:
> >> > 1. Make other scans start with may_cache_trim_mode = 1.
> >> >
> >> > Changes from v3:
> >> > 1. Update the test result in the commit message with v4.
> >> > 2. Retry the whole priority loop with cache_trim_mode off again,
> >> > rather than forcing the mode off at the highest priority,
> >> > when the mode doesn't work. (feedbacked by Johannes Weiner)
> >> >
> >> > Changes from v2:
> >> > 1. Change the condition to stop cache_trim_mode.
> >> >
> >> > From - Stop it if it's at high scan priorities, 0 or 1.
> >> > To - Stop it if it's at high scan priorities, 0 or 1, and
> >> > the mode didn't work in the previous turn.
> >> >
> >> > (feedbacked by Huang Ying)
> >> >
> >> > 2. Change the test result in the commit message after testing
> >> > with the new logic.
> >> >
> >> > Changes from v1:
> >> > 1. Add a comment describing why this change is necessary in code
> >> > and rewrite the commit message with how to reproduce and what
> >> > the result is using vmstat. (feedbacked by Andrew Morton and
> >> > Yu Zhao)
> >> > 2. Change the condition to avoid cache_trim_mode from
> >> > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> >> > where the priority goes to zero all the way. (feedbacked by
> >> > Yu Zhao)
> >> > --->8---
> >> > From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
> >> > From: Byungchul Park <[email protected]>
> >> > Date: Mon, 4 Mar 2024 11:24:40 +0900
> >> > Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> >> >
> >> > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> >> > pages. However, it should be more careful to use the mode because it's
> >> > going to prevent anon pages from being reclaimed even if there are a
> >> > huge number of anon pages that are cold and should be reclaimed. Even
> >> > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> >> > stopping kswapd from functioning until direct reclaim eventually works
> >> > to resume kswapd.
> >> >
> >> > So kswapd needs to retry its scan priority loop with cache_trim_mode
> >> > off again if the mode doesn't work for reclaim.
> >> >
> >> > The problematic behavior can be reproduced by:
> >> >
> >> > CONFIG_NUMA_BALANCING enabled
> >> > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> >> > numa node0 (8GB local memory, 16 CPUs)
> >> > numa node1 (8GB slow tier memory, no CPUs)
> >> >
> >> > Sequence:
> >> >
> >> > 1) echo 3 > /proc/sys/vm/drop_caches
> >> > 2) To emulate the system with full of cold memory in local DRAM, run
> >> > the following dummy program and never touch the region:
> >> >
> >> > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> >> > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> >> >
> >> > 3) Run any memory intensive work e.g. XSBench.
> >> > 4) Check if numa balancing is working e.i. promotion/demotion.
> >> > 5) Iterate 1) ~ 4) until numa balancing stops.
> >> >
> >> > With this, you could see that promotion/demotion are not working because
> >> > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
> >> >
> >> > Interesting vmstat delta's differences between before and after are like:
> >> >
> >> > +-----------------------+-------------------------------+
> >> > | interesting vmstat | before | after |
> >> > +-----------------------+-------------------------------+
> >> > | nr_inactive_anon | 321935 | 1646193 |
> >> > | nr_active_anon | 1780700 | 456388 |
> >> > | nr_inactive_file | 30425 | 27836 |
> >> > | nr_active_file | 14961 | 1217 |
> >> > | pgpromote_success | 356 | 1310120 |
> >> > | pgpromote_candidate | 21953245 | 1736872 |
> >> > | pgactivate | 1844523 | 3292443 |
> >> > | pgdeactivate | 50634 | 1526701 |
> >> > | pgfault | 31100294 | 6715375 |
> >> > | pgdemote_kswapd | 30856 | 1954199 |
> >> > | pgscan_kswapd | 1861981 | 7100099 |
> >> > | pgscan_anon | 1822930 | 7061135 |
> >> > | pgscan_file | 39051 | 38964 |
> >> > | pgsteal_anon | 386 | 1925214 |
> >> > | pgsteal_file | 30470 | 28985 |
> >> > | pageoutrun | 30 | 500 |
> >> > | numa_hint_faults | 27418279 | 3090773 |
> >> > | numa_pages_migrated | 356 | 1310120 |
> >> > +-----------------------+-------------------------------+
> >> >
> >> > Signed-off-by: Byungchul Park <[email protected]>
> >> > ---
> >> > mm/vmscan.c | 23 +++++++++++++++++++++--
> >> > 1 file changed, 21 insertions(+), 2 deletions(-)
> >> >
> >> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> > index bba207f41b14..77948b0f8b5b 100644
> >> > --- a/mm/vmscan.c
> >> > +++ b/mm/vmscan.c
> >> > @@ -108,6 +108,9 @@ struct scan_control {
> >> > /* Can folios be swapped as part of reclaim? */
> >> > unsigned int may_swap:1;
> >> >
> >> > + /* Can cache_trim_mode be turned on as part of reclaim? */
> >> > + unsigned int may_cache_trim_mode:1;
> >> > +
> >>
> >> Although it's generally not good to use negative logic, I think that
> >> it's better to name the flag as something like "no_cache_trim_mode" to
> >> make it easier to initialize the flag to its default value ("0").
> >
> > No preference to me. But don't think it's better to use another of may_*
> > in scan_control as Johannes Weiner suggested?
> >
> >> > /* Proactive reclaim invoked by userspace through memory.reclaim */
> >> > unsigned int proactive:1;
> >> >
> >> > @@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> >> > struct scan_control sc = {
> >> > .gfp_mask = GFP_KERNEL,
> >> > .may_unmap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > };
> >> > struct reclaim_stat stat;
> >> > unsigned int nr_reclaimed;
> >> > @@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
> >> > .may_writepage = 1,
> >> > .may_unmap = 1,
> >> > .may_swap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > .no_demotion = 1,
> >> > };
> >> >
> >> > @@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> >> > * anonymous pages.
> >> > */
> >> > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> >> > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> >> > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> >> > + sc->may_cache_trim_mode)
> >> > sc->cache_trim_mode = 1;
> >> > else
> >> > sc->cache_trim_mode = 0;
> >> > @@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
> >> > .may_writepage = true,
> >> > .may_unmap = true,
> >> > .may_swap = true,
> >> > + .may_cache_trim_mode = 1,
> >> > .reclaim_idx = MAX_NR_ZONES - 1,
> >> > .gfp_mask = GFP_KERNEL,
> >> > };
> >> > @@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> >> > .may_writepage = !laptop_mode,
> >> > .may_unmap = 1,
> >> > .may_swap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > };
> >> >
> >> > /*
> >> > @@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
> >> > .may_unmap = 1,
> >> > .reclaim_idx = MAX_NR_ZONES - 1,
> >> > .may_swap = !noswap,
> >> > + .may_cache_trim_mode = 1,
> >> > };
> >> >
> >> > WARN_ON_ONCE(!current->reclaim_state);
> >> > @@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> >> > .may_writepage = !laptop_mode,
> >> > .may_unmap = 1,
> >> > .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
> >> > + .may_cache_trim_mode = 1,
> >> > .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
> >> > };
> >> > /*
> >> > @@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> >> > .gfp_mask = GFP_KERNEL,
> >> > .order = order,
> >> > .may_unmap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > };
> >> >
> >> > set_task_reclaim_state(current, &sc.reclaim_state);
> >> > @@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> >> > sc.priority--;
> >> > } while (sc.priority >= 1);
> >> >
> >> > - if (!sc.nr_reclaimed)
> >> > + if (!sc.nr_reclaimed) {
> >> > + if (sc.may_cache_trim_mode) {
> >>
> >> sc.may_cache_trim_mode && cache_trim_mode ?
> >
> > I don't think so. cache_trim_mode has a chance to switch every
> > prepare_scan_control() like:
> >
> > if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> > sc->may_cache_trim_mode)
> > sc->cache_trim_mode = 1;
> > else
> > sc->cache_trim_mode = 0;
> >
> > So referring to the last value is not a good idea.
>
> We should only restart without cache_trim_mode if cache_trim_mode causes
> issue. If it isn't enabled with highest priority (lowest value), it
> doesn't help to disable cache_trim_mode.

Yes, right. Let me think about it more and apply that consideration.

> And, please take care of other "break" in the loop, for example, if
> kthread_should_stop(), etc.

I will. Thank you.

Byungchul

> --
> Best Regards,
> Huang, Ying
>
> > Byungchul
> >
> >> > + sc.may_cache_trim_mode = 0;
> >> > + goto restart;
> >> > + }
> >> > +
> >> > pgdat->kswapd_failures++;
> >> > + }
> >> >
> >> > out:
> >> > clear_reclaim_active(pgdat, highest_zoneidx);
> >> > @@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
> >> > .may_writepage = 1,
> >> > .may_unmap = 1,
> >> > .may_swap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > .hibernation_mode = 1,
> >> > };
> >> > struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> >> > @@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> >> > .may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
> >> > .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
> >> > .may_swap = 1,
> >> > + .may_cache_trim_mode = 1,
> >> > .reclaim_idx = gfp_zone(gfp_mask),
> >> > };
> >> > unsigned long pflags;
> >>
> >> --
> >> Best Regards,
> >> Huang, Ying

2024-03-04 08:22:46

by Byungchul Park

Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Changes from v5:
1. Make it retry kswapd's scan priority loop with
cache_trim_mode off *only if* the mode didn't work in the
previous loop. (feedback from Huang Ying)
2. Take into account 'break's from the priority loop when making
the decision whether to retry. (feedback from Huang Ying)
3. Update the test result in the commit message.

Changes from v4:
1. Make other scans start with may_cache_trim_mode = 1.

Changes from v3:
1. Update the test result in the commit message with v4.
2. Retry the whole priority loop with cache_trim_mode off again,
rather than forcing the mode off at the highest priority,
when the mode doesn't work. (feedback from Johannes Weiner)

Changes from v2:
1. Change the condition to stop cache_trim_mode.

From - Stop it if it's at high scan priorities, 0 or 1.
To - Stop it if it's at high scan priorities, 0 or 1, and
the mode didn't work in the previous turn.

(feedback from Huang Ying)

2. Change the test result in the commit message after testing
with the new logic.

Changes from v1:
1. Add a comment in the code describing why this change is
necessary, and rewrite the commit message with how to reproduce
the issue and what the result is, using vmstat. (feedback from
Andrew Morton and Yu Zhao)
2. Change the condition to avoid cache_trim_mode from
'sc->priority != 1' to 'sc->priority > 1' to reflect cases
where the priority goes to zero all the way. (feedback from
Yu Zhao)

--->8---
From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
From: Byungchul Park <[email protected]>
Date: Mon, 4 Mar 2024 15:27:37 +0900
Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

With cache_trim_mode on, the reclaim logic doesn't bother reclaiming
anon pages. However, the mode should be used more carefully because it
prevents anon pages from being reclaimed even when there is a huge
number of cold anon pages that should be reclaimed. Even worse, that
lets kswapd_failures reach MAX_RECLAIM_RETRIES, which stops kswapd
from functioning until direct reclaim eventually works and resumes
kswapd.

So kswapd needs to retry its scan priority loop with cache_trim_mode
off again if the mode doesn't work for reclaim.

The problematic behavior can be reproduced by:

CONFIG_NUMA_BALANCING enabled
sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
numa node0 (8GB local memory, 16 CPUs)
numa node1 (8GB slow tier memory, no CPUs)

Sequence:

1) echo 3 > /proc/sys/vm/drop_caches
2) To emulate a system whose local DRAM is full of cold memory, run
the following dummy program and never touch the region:

mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

3) Run any memory-intensive workload, e.g. XSBench.
4) Check if numa balancing is working, i.e. promotion/demotion.
5) Repeat 1) ~ 4) until numa balancing stops.

With this, you can see that promotion/demotion stop working because
kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.

The interesting vmstat deltas, before vs. after the change, are:

+-----------------------+------------+------------+
| interesting vmstat    |     before |      after |
+-----------------------+------------+------------+
| nr_inactive_anon      |     321935 |    1664772 |
| nr_active_anon        |    1780700 |     437834 |
| nr_inactive_file      |      30425 |      40882 |
| nr_active_file        |      14961 |       3012 |
| pgpromote_success     |        356 |    1293122 |
| pgpromote_candidate   |   21953245 |    1824148 |
| pgactivate            |    1844523 |    3311907 |
| pgdeactivate          |      50634 |    1554069 |
| pgfault               |   31100294 |    6518806 |
| pgdemote_kswapd       |      30856 |    2230821 |
| pgscan_kswapd         |    1861981 |    7667629 |
| pgscan_anon           |    1822930 |    7610583 |
| pgscan_file           |      39051 |      57046 |
| pgsteal_anon          |        386 |    2192033 |
| pgsteal_file          |      30470 |      38788 |
| pageoutrun            |         30 |        412 |
| numa_hint_faults      |   27418279 |    2875955 |
| numa_pages_migrated   |        356 |    1293122 |
+-----------------------+------------+------------+

Signed-off-by: Byungchul Park <[email protected]>
---
mm/vmscan.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..6fe45eca7766 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -108,6 +108,12 @@ struct scan_control {
/* Can folios be swapped as part of reclaim? */
unsigned int may_swap:1;

+ /* Not allow cache_trim_mode to be turned on as part of reclaim? */
+ unsigned int no_cache_trim_mode:1;
+
+ /* Has cache_trim_mode failed at least once? */
+ unsigned int cache_trim_mode_failed:1;
+
/* Proactive reclaim invoked by userspace through memory.reclaim */
unsigned int proactive:1;

@@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
* anonymous pages.
*/
file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
- if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+ if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
+ !sc->no_cache_trim_mode)
sc->cache_trim_mode = 1;
else
sc->cache_trim_mode = 0;
@@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
*/
if (reclaimable)
pgdat->kswapd_failures = 0;
+ else if (sc->cache_trim_mode)
+ sc->cache_trim_mode_failed = 1;
}

/*
@@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
sc.priority--;
} while (sc.priority >= 1);

+ /*
+ * Restart only if it went through the priority loop all the way,
+ * but cache_trim_mode didn't work.
+ */
+ if (!sc.nr_reclaimed && sc.priority < 1 &&
+ !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
+ sc.no_cache_trim_mode = 1;
+ goto restart;
+ }
+
if (!sc.nr_reclaimed)
pgdat->kswapd_failures++;

--
2.17.1


2024-03-05 01:56:25

by Huang, Ying

Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> Changes from v5:
> 1. Make it retry the kswapd's scan priority loop with
> cache_trim_mode off *only if* the mode didn't work in the
> previous loop. (feedbacked by Huang Ying)
> 2. Take into account 'break's from the priority loop when making
> the decision whether to retry. (feedbacked by Huang Ying)
> 3. Update the test result in the commit message.
>
> Changes from v4:
> 1. Make other scans start with may_cache_trim_mode = 1.
>
> Changes from v3:
> 1. Update the test result in the commit message with v4.
> 2. Retry the whole priority loop with cache_trim_mode off again,
> rather than forcing the mode off at the highest priority,
> when the mode doesn't work. (feedbacked by Johannes Weiner)
>
> Changes from v2:
> 1. Change the condition to stop cache_trim_mode.
>
> From - Stop it if it's at high scan priorities, 0 or 1.
> To - Stop it if it's at high scan priorities, 0 or 1, and
> the mode didn't work in the previous turn.
>
> (feedbacked by Huang Ying)
>
> 2. Change the test result in the commit message after testing
> with the new logic.
>
> Changes from v1:
> 1. Add a comment describing why this change is necessary in code
> and rewrite the commit message with how to reproduce and what
> the result is using vmstat. (feedbacked by Andrew Morton and
> Yu Zhao)
> 2. Change the condition to avoid cache_trim_mode from
> 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> where the priority goes to zero all the way. (feedbacked by
> Yu Zhao)
>
> --->8---
> From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
> From: Byungchul Park <[email protected]>
> Date: Mon, 4 Mar 2024 15:27:37 +0900
> Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>
> With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> pages. However, it should be more careful to use the mode because it's
> going to prevent anon pages from being reclaimed even if there are a
> huge number of anon pages that are cold and should be reclaimed. Even
> worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> stopping kswapd from functioning until direct reclaim eventually works
> to resume kswapd.
>
> So kswapd needs to retry its scan priority loop with cache_trim_mode
> off again if the mode doesn't work for reclaim.
>
> The problematic behavior can be reproduced by:
>
> CONFIG_NUMA_BALANCING enabled
> sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> numa node0 (8GB local memory, 16 CPUs)
> numa node1 (8GB slow tier memory, no CPUs)
>
> Sequence:
>
> 1) echo 3 > /proc/sys/vm/drop_caches
> 2) To emulate the system with full of cold memory in local DRAM, run
> the following dummy program and never touch the region:
>
> mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>
> 3) Run any memory intensive work e.g. XSBench.
> 4) Check if numa balancing is working e.i. promotion/demotion.
> 5) Iterate 1) ~ 4) until numa balancing stops.
>
> With this, you could see that promotion/demotion are not working because
> kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>
> Interesting vmstat delta's differences between before and after are like:
>
> +-----------------------+-------------------------------+
> | interesting vmstat | before | after |
> +-----------------------+-------------------------------+
> | nr_inactive_anon | 321935 | 1664772 |
> | nr_active_anon | 1780700 | 437834 |
> | nr_inactive_file | 30425 | 40882 |
> | nr_active_file | 14961 | 3012 |
> | pgpromote_success | 356 | 1293122 |
> | pgpromote_candidate | 21953245 | 1824148 |
> | pgactivate | 1844523 | 3311907 |
> | pgdeactivate | 50634 | 1554069 |
> | pgfault | 31100294 | 6518806 |
> | pgdemote_kswapd | 30856 | 2230821 |
> | pgscan_kswapd | 1861981 | 7667629 |
> | pgscan_anon | 1822930 | 7610583 |
> | pgscan_file | 39051 | 57046 |
> | pgsteal_anon | 386 | 2192033 |
> | pgsteal_file | 30470 | 38788 |
> | pageoutrun | 30 | 412 |
> | numa_hint_faults | 27418279 | 2875955 |
> | numa_pages_migrated | 356 | 1293122 |
> +-----------------------+-------------------------------+
>
> Signed-off-by: Byungchul Park <[email protected]>
> ---
> mm/vmscan.c | 21 ++++++++++++++++++++-
> 1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bba207f41b14..6fe45eca7766 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -108,6 +108,12 @@ struct scan_control {
> /* Can folios be swapped as part of reclaim? */
> unsigned int may_swap:1;
>
> + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> + unsigned int no_cache_trim_mode:1;
> +
> + /* Has cache_trim_mode failed at least once? */
> + unsigned int cache_trim_mode_failed:1;
> +
> /* Proactive reclaim invoked by userspace through memory.reclaim */
> unsigned int proactive:1;
>
> @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> * anonymous pages.
> */
> file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> + !sc->no_cache_trim_mode)
> sc->cache_trim_mode = 1;
> else
> sc->cache_trim_mode = 0;
> @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> */
> if (reclaimable)
> pgdat->kswapd_failures = 0;
> + else if (sc->cache_trim_mode)
> + sc->cache_trim_mode_failed = 1;
> }
>
> /*
> @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> sc.priority--;
> } while (sc.priority >= 1);
>
> + /*
> + * Restart only if it went through the priority loop all the way,
> + * but cache_trim_mode didn't work.
> + */
> + if (!sc.nr_reclaimed && sc.priority < 1 &&
> + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {

Can we just use sc.cache_trim_mode (instead of
sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
for priority == 1 and failed to reclaim, we will restart. If this
works, we can avoid adding another flag.

> + sc.no_cache_trim_mode = 1;
> + goto restart;
> + }
> +
> if (!sc.nr_reclaimed)
> pgdat->kswapd_failures++;

--
Best Regards,
Huang, Ying

2024-03-05 02:38:12

by Byungchul Park

Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > Changes from v5:
> > 1. Make it retry the kswapd's scan priority loop with
> > cache_trim_mode off *only if* the mode didn't work in the
> > previous loop. (feedback from Huang Ying)
> > 2. Take into account 'break's from the priority loop when making
> > the decision whether to retry. (feedback from Huang Ying)
> > 3. Update the test result in the commit message.
> >
> > Changes from v4:
> > 1. Make other scans start with may_cache_trim_mode = 1.
> >
> > Changes from v3:
> > 1. Update the test result in the commit message with v4.
> > 2. Retry the whole priority loop with cache_trim_mode off again,
> > rather than forcing the mode off at the highest priority,
> > when the mode doesn't work. (feedback from Johannes Weiner)
> >
> > Changes from v2:
> > 1. Change the condition to stop cache_trim_mode.
> >
> > From - Stop it if it's at high scan priorities, 0 or 1.
> > To - Stop it if it's at high scan priorities, 0 or 1, and
> > the mode didn't work in the previous turn.
> >
> > (feedback from Huang Ying)
> >
> > 2. Change the test result in the commit message after testing
> > with the new logic.
> >
> > Changes from v1:
> > 1. Add a comment describing why this change is necessary in code
> > and rewrite the commit message with how to reproduce and what
> > the result is using vmstat. (feedback from Andrew Morton and
> > Yu Zhao)
> > 2. Change the condition to avoid cache_trim_mode from
> > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> > where the priority goes to zero all the way. (feedback from
> > Yu Zhao)
> >
> > --->8---
> > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
> > From: Byungchul Park <[email protected]>
> > Date: Mon, 4 Mar 2024 15:27:37 +0900
> > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> >
> > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> > pages. However, the mode should be used more carefully because it
> > prevents anon pages from being reclaimed even if there is a huge
> > number of cold anon pages that should be reclaimed. Even worse, that
> > lets kswapd_failures reach MAX_RECLAIM_RETRIES and stops kswapd from
> > functioning until direct reclaim eventually works and resumes kswapd.
> >
> > So kswapd needs to retry its scan priority loop with cache_trim_mode
> > off again if the mode doesn't work for reclaim.
> >
> > The problematic behavior can be reproduced by:
> >
> > CONFIG_NUMA_BALANCING enabled
> > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > numa node0 (8GB local memory, 16 CPUs)
> > numa node1 (8GB slow tier memory, no CPUs)
> >
> > Sequence:
> >
> > 1) echo 3 > /proc/sys/vm/drop_caches
> > 2) To emulate a system full of cold memory in local DRAM, run
> > the following dummy program and never touch the region:
> >
> > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> >
> > 3) Run any memory-intensive work, e.g. XSBench.
> > 4) Check if numa balancing is working, i.e. promotion/demotion.
> > 5) Iterate 1) ~ 4) until numa balancing stops.
> >
> > With this, you can see that promotion/demotion stop working because
> > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
> >
> > Interesting differences in the vmstat deltas between before and after:
> >
> > +-----------------------+-------------------------------+
> > | interesting vmstat | before | after |
> > +-----------------------+-------------------------------+
> > | nr_inactive_anon | 321935 | 1664772 |
> > | nr_active_anon | 1780700 | 437834 |
> > | nr_inactive_file | 30425 | 40882 |
> > | nr_active_file | 14961 | 3012 |
> > | pgpromote_success | 356 | 1293122 |
> > | pgpromote_candidate | 21953245 | 1824148 |
> > | pgactivate | 1844523 | 3311907 |
> > | pgdeactivate | 50634 | 1554069 |
> > | pgfault | 31100294 | 6518806 |
> > | pgdemote_kswapd | 30856 | 2230821 |
> > | pgscan_kswapd | 1861981 | 7667629 |
> > | pgscan_anon | 1822930 | 7610583 |
> > | pgscan_file | 39051 | 57046 |
> > | pgsteal_anon | 386 | 2192033 |
> > | pgsteal_file | 30470 | 38788 |
> > | pageoutrun | 30 | 412 |
> > | numa_hint_faults | 27418279 | 2875955 |
> > | numa_pages_migrated | 356 | 1293122 |
> > +-----------------------+-------------------------------+
> >
> > Signed-off-by: Byungchul Park <[email protected]>
> > ---
> > mm/vmscan.c | 21 ++++++++++++++++++++-
> > 1 file changed, 20 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bba207f41b14..6fe45eca7766 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -108,6 +108,12 @@ struct scan_control {
> > /* Can folios be swapped as part of reclaim? */
> > unsigned int may_swap:1;
> >
> > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> > + unsigned int no_cache_trim_mode:1;
> > +
> > + /* Has cache_trim_mode failed at least once? */
> > + unsigned int cache_trim_mode_failed:1;
> > +
> > /* Proactive reclaim invoked by userspace through memory.reclaim */
> > unsigned int proactive:1;
> >
> > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> > * anonymous pages.
> > */
> > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> > + !sc->no_cache_trim_mode)
> > sc->cache_trim_mode = 1;
> > else
> > sc->cache_trim_mode = 0;
> > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> > */
> > if (reclaimable)
> > pgdat->kswapd_failures = 0;
> > + else if (sc->cache_trim_mode)
> > + sc->cache_trim_mode_failed = 1;
> > }
> >
> > /*
> > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > sc.priority--;
> > } while (sc.priority >= 1);
> >
> > + /*
> > + * Restart only if it went through the priority loop all the way,
> > + * but cache_trim_mode didn't work.
> > + */
> > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>
> Can we just use sc.cache_trim_mode (instead of
> sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled

As Johannes mentioned, within a priority scan, all the numa nodes are
scanned each with its own value of cache_trim_mode. So we cannot use
cache_trim_mode for that purpose.

Byungchul

> for priority == 1 and failed to reclaim, we will restart. If this
> works, we can avoid adding another flag.
>
> > + sc.no_cache_trim_mode = 1;
> > + goto restart;
> > + }
> > +
> > if (!sc.nr_reclaimed)
> > pgdat->kswapd_failures++;
>
> --
> Best Regards,
> Huang, Ying

2024-03-05 02:44:05

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> > Byungchul Park <[email protected]> writes:
> >
> > > [...]
> > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > > sc.priority--;
> > > } while (sc.priority >= 1);
> > >
> > > + /*
> > > + * Restart only if it went through the priority loop all the way,
> > > + * but cache_trim_mode didn't work.
> > > + */
> > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> >
> > Can we just use sc.cache_trim_mode (instead of
> > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>
> As Johannes mentioned, within a priority scan, all the numa nodes are
> scanned each with its own value of cache_trim_mode. So we cannot use
> cache_trim_mode for that purpose.

Ah, okay. Confining this to kswapd, that might make sense. I will apply
it if there's no objection to it. Thanks.

Byungchul
>
> Byungchul
>
> > for priority == 1 and failed to reclaim, we will restart. If this
> > works, we can avoid adding another flag.
> >
> > > + sc.no_cache_trim_mode = 1;
> > > + goto restart;
> > > + }
> > > +
> > > if (!sc.nr_reclaimed)
> > > pgdat->kswapd_failures++;
> >
> > --
> > Best Regards,
> > Huang, Ying

2024-03-05 02:47:29

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> Byungchul Park <[email protected]> writes:
>>
>> > [...]
>> > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> > sc.priority--;
>> > } while (sc.priority >= 1);
>> >
>> > + /*
>> > + * Restart only if it went through the priority loop all the way,
>> > + * but cache_trim_mode didn't work.
>> > + */
>> > + if (!sc.nr_reclaimed && sc.priority < 1 &&
>> > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>>
>> Can we just use sc.cache_trim_mode (instead of
>> sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>
> As Johannes mentioned, within a priority scan, all the numa nodes are
> scanned each with its own value of cache_trim_mode. So we cannot use
> cache_trim_mode for that purpose.

For direct reclaim, this is true. But balance_pgdat() works for one
node only.

--
Best Regards,
Huang, Ying

> Byungchul
>
>> for priority == 1 and failed to reclaim, we will restart. If this
>> works, we can avoid adding another flag.
>>
>> > + sc.no_cache_trim_mode = 1;
>> > + goto restart;
>> > + }
>> > +
>> > if (!sc.nr_reclaimed)
>> > pgdat->kswapd_failures++;
>>
>> --
>> Best Regards,
>> Huang, Ying

2024-03-05 04:09:53

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> > > Byungchul Park <[email protected]> writes:
> > >
> > > > [...]
> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > > > sc.priority--;
> > > > } while (sc.priority >= 1);
> > > >
> > > > + /*
> > > > + * Restart only if it went through the priority loop all the way,
> > > > + * but cache_trim_mode didn't work.
> > > > + */
> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> > >
> > > Can we just use sc.cache_trim_mode (instead of
> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> >
> > As Johannes mentioned, within a priority scan, all the numa nodes are
> > scanned each with its own value of cache_trim_mode. So we cannot use
> > cache_trim_mode for that purpose.
>
> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
> there's no objection to it. Thanks.

I didn't want to introduce two additional flags either, but the flags
make it do exactly what we want it to do. I'd like to keep this version
if possible, unless there are any other objections to it.

Byungchul

> Byungchul
> >
> > Byungchul
> >
> > > for priority == 1 and failed to reclaim, we will restart. If this
> > > works, we can avoid adding another flag.
> > >
> > > > + sc.no_cache_trim_mode = 1;
> > > > + goto restart;
> > > > + }
> > > > +
> > > > if (!sc.nr_reclaimed)
> > > > pgdat->kswapd_failures++;
> > >
> > > --
> > > Best Regards,
> > > Huang, Ying

2024-03-05 04:33:26

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Mon, Mar 04, 2024 at 05:21:18PM +0900, Byungchul Park wrote:
> [...]
> @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> sc.priority--;
> } while (sc.priority >= 1);
>
> + /*
> + * Restart only if it went through the priority loop all the way,
> + * but cache_trim_mode didn't work.
> + */
> + if (!sc.nr_reclaimed && sc.priority < 1 &&
> + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> + sc.no_cache_trim_mode = 1;
> + goto restart;
> + }
> +
> if (!sc.nr_reclaimed)
> pgdat->kswapd_failures++;

Or 's/cache_trim_mode_failed/balancing_cleverness_failed/' so that any
balancing cleverness can be suppressed when needed?

Even though I faced an issue with cache_trim_mode and I'm trying to
resolve it this time, I'm still not sure if kswapd's reclaim is okay with
other cleverness(?) including SCAN_FRACT at its highest priority.

Just grumbling. You can ignore the second paragraph :(

Byungchul

2024-03-05 06:20:45

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
>> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
>> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> > > Byungchul Park <[email protected]> writes:
>> > >
>> > > > [...]
>> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> > > > sc.priority--;
>> > > > } while (sc.priority >= 1);
>> > > >
>> > > > + /*
>> > > > + * Restart only if it went through the priority loop all the way,
>> > > > + * but cache_trim_mode didn't work.
>> > > > + */
>> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
>> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>> > >
>> > > Can we just use sc.cache_trim_mode (instead of
>> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>> >
>> > As Johannes mentioned, within a priority scan, all the numa nodes are
>> > scanned each with its own value of cache_trim_mode. So we cannot use
>> > cache_trim_mode for that purpose.
>>
>> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
>> there's no objection to it. Thanks.
>
> I didn't want to introduce two additional flags either, but it was
> possible to make it do exactly what we want it to do thanks to the flags.
> I'd like to keep this version if possible unless there are any other
> objections on it.

Sorry, I'm confused. Does "cache_trim_mode == 1" do the trick? If so,
why not use it? If not, why doesn't it?

--
Best Regards,
Huang, Ying

> Byungchul
>
>> Byungchul
>> >
>> > Byungchul
>> >
>> > > for priority == 1 and failed to reclaim, we will restart. If this
>> > > works, we can avoid adding another flag.
>> > >
>> > > > + sc.no_cache_trim_mode = 1;
>> > > > + goto restart;
>> > > > + }
>> > > > +
>> > > > if (!sc.nr_reclaimed)
>> > > > pgdat->kswapd_failures++;
>> > >
>> > > --
>> > > Best Regards,
>> > > Huang, Ying

2024-03-05 07:07:16

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> >> > > Byungchul Park <[email protected]> writes:
> >> > >
> >> > > > [...]
> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> >> > > > sc.priority--;
> >> > > > } while (sc.priority >= 1);
> >> > > >
> >> > > > + /*
> >> > > > + * Restart only if it went through the priority loop all the way,
> >> > > > + * but cache_trim_mode didn't work.
> >> > > > + */
> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> >> > >
> >> > > Can we just use sc.cache_trim_mode (instead of
> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> >> >
> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
> >> > scanned each with its own value of cache_trim_mode. So we cannot use
> >> > cache_trim_mode for that purpose.
> >>
> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
> >> there's no objection to it. Thanks.
> >
> > I didn't want to introduce two additional flags either, but it was
> > possible to make it do exactly what we want it to do thanks to the flags.
> > I'd like to keep this version if possible unless there are any other
> > objections on it.
>
> Sorry, I'm confused. Does "cache_trim_mode == 1" do the trick? If so,
> why not use it? If not, why doesn't it?

kswapd might happen to go through:

priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
priority 11 + cache_trim_mode on -> fail
priority 10 + cache_trim_mode on -> fail
priority 9 + cache_trim_mode on -> fail
priority 8 + cache_trim_mode on -> fail
priority 7 + cache_trim_mode on -> fail
priority 6 + cache_trim_mode on -> fail
priority 5 + cache_trim_mode on -> fail
priority 4 + cache_trim_mode on -> fail
priority 3 + cache_trim_mode on -> fail
priority 2 + cache_trim_mode on -> fail
priority 1 + cache_trim_mode off -> fail

I'd like to retry even in this case.

Am I missing something?

Byungchul
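
To make the sequence above concrete, here is a user-space toy -- not
kernel code -- that mimics how sc->cache_trim_mode is recomputed at every
priority while sc->cache_trim_mode_failed stays sticky. The priorities at
which the mode happens to be on are made up to match the sequence above;
only the two-flag idea comes from the patch.

	#include <stdbool.h>
	#include <stdio.h>

	#define DEF_PRIORITY 12

	int main(void)
	{
		bool cache_trim_mode = false;		/* recomputed each iteration */
		bool cache_trim_mode_failed = false;	/* sticky across iterations */

		for (int priority = DEF_PRIORITY; priority >= 1; priority--) {
			/* mode on at priorities 12..2, off at priority 1 (illustrative) */
			cache_trim_mode = priority > 1;

			/* every iteration fails to reclaim in this scenario */
			bool reclaimed = false;
			if (!reclaimed && cache_trim_mode)
				cache_trim_mode_failed = true;
		}

		/* prints 0 then 1: checking only the last-iteration value would skip the retry */
		printf("cache_trim_mode after loop = %d\n", cache_trim_mode);
		printf("cache_trim_mode_failed     = %d\n", cache_trim_mode_failed);
		return 0;
	}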

> --
> Best Regards,
> Huang, Ying
>
> > Byungchul
> >
> >> Byungchul
> >> >
> >> > Byungchul
> >> >
> >> > > for priority == 1 and failed to reclaim, we will restart. If this
> >> > > works, we can avoid adding another flag.
> >> > >
> >> > > > + sc.no_cache_trim_mode = 1;
> >> > > > + goto restart;
> >> > > > + }
> >> > > > +
> >> > > > if (!sc.nr_reclaimed)
> >> > > > pgdat->kswapd_failures++;
> >> > >
> >> > > --
> >> > > Best Regards,
> >> > > Huang, Ying

2024-03-05 07:14:11

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 03:04:48PM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
> >> Byungchul Park <[email protected]> writes:
> >>
> >> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> >> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> >> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> >> >> > > Byungchul Park <[email protected]> writes:
> >> >> > >
> >> >> > > > [...]
> >> >> > > > | pgpromote_success | 356 | 1293122 |
> >> >> > > > | pgpromote_candidate | 21953245 | 1824148 |
> >> >> > > > | pgactivate | 1844523 | 3311907 |
> >> >> > > > | pgdeactivate | 50634 | 1554069 |
> >> >> > > > | pgfault | 31100294 | 6518806 |
> >> >> > > > | pgdemote_kswapd | 30856 | 2230821 |
> >> >> > > > | pgscan_kswapd | 1861981 | 7667629 |
> >> >> > > > | pgscan_anon | 1822930 | 7610583 |
> >> >> > > > | pgscan_file | 39051 | 57046 |
> >> >> > > > | pgsteal_anon | 386 | 2192033 |
> >> >> > > > | pgsteal_file | 30470 | 38788 |
> >> >> > > > | pageoutrun | 30 | 412 |
> >> >> > > > | numa_hint_faults | 27418279 | 2875955 |
> >> >> > > > | numa_pages_migrated | 356 | 1293122 |
> >> >> > > > +-----------------------+-------------------------------+
> >> >> > > >
> >> >> > > > Signed-off-by: Byungchul Park <[email protected]>
> >> >> > > > ---
> >> >> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
> >> >> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
> >> >> > > >
> >> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> >> > > > index bba207f41b14..6fe45eca7766 100644
> >> >> > > > --- a/mm/vmscan.c
> >> >> > > > +++ b/mm/vmscan.c
> >> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
> >> >> > > > /* Can folios be swapped as part of reclaim? */
> >> >> > > > unsigned int may_swap:1;
> >> >> > > >
> >> >> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> >> >> > > > + unsigned int no_cache_trim_mode:1;
> >> >> > > > +
> >> >> > > > + /* Has cache_trim_mode failed at least once? */
> >> >> > > > + unsigned int cache_trim_mode_failed:1;
> >> >> > > > +
> >> >> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
> >> >> > > > unsigned int proactive:1;
> >> >> > > >
> >> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> >> >> > > > * anonymous pages.
> >> >> > > > */
> >> >> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> >> >> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> >> >> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> >> >> > > > + !sc->no_cache_trim_mode)
> >> >> > > > sc->cache_trim_mode = 1;
> >> >> > > > else
> >> >> > > > sc->cache_trim_mode = 0;
> >> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >> >> > > > */
> >> >> > > > if (reclaimable)
> >> >> > > > pgdat->kswapd_failures = 0;
> >> >> > > > + else if (sc->cache_trim_mode)
> >> >> > > > + sc->cache_trim_mode_failed = 1;
> >> >> > > > }
> >> >> > > >
> >> >> > > > /*
> >> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> >> >> > > > sc.priority--;
> >> >> > > > } while (sc.priority >= 1);
> >> >> > > >
> >> >> > > > + /*
> >> >> > > > + * Restart only if it went through the priority loop all the way,
> >> >> > > > + * but cache_trim_mode didn't work.
> >> >> > > > + */
> >> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> >> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> >> >> > >
> >> >> > > Can we just use sc.cache_trim_mode (instead of
> >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> >> >> >
> >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
> >> >> > scanned each with its own value of cache_trim_mode. So we cannot use
> >> >> > cache_trim_mode for that purpose.
> >> >>
> >> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
> >> >> there's no objection to it. Thanks.
> >> >
> >> > I didn't want to introduce two additional flags either, but it was
> >> > possible to make it do exactly what we want it to do thanks to the flags.
> >> > I'd like to keep this version if possible unless there are any other
> >> > objections on it.
> >>
> >> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick?
> >> If so, why not? If not, why?
> >
> > kswapd might happen to go through:
> >
> > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
> > priority 11 + cache_trim_mode on -> fail
> > priority 10 + cache_trim_mode on -> fail
> > priority 9 + cache_trim_mode on -> fail
> > priority 8 + cache_trim_mode on -> fail
> > priority 7 + cache_trim_mode on -> fail
> > priority 6 + cache_trim_mode on -> fail
> > priority 5 + cache_trim_mode on -> fail
> > priority 4 + cache_trim_mode on -> fail
> > priority 3 + cache_trim_mode on -> fail
> > priority 2 + cache_trim_mode on -> fail
> > priority 1 + cache_trim_mode off -> fail
> >
> > I'd like to retry even in this case.
>
> I don't think that we should retry in this case. If the following case
> fails,
>
> > priority 1 + cache_trim_mode off -> fail
>
> Why will we succeed after retrying?

At priority 1, anon pages are only partially scanned. However, there
might still be anon pages that have never been scanned but could be
reclaimed.
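
To illustrate what I mean by "partially scanned": the per-LRU scan
target scales with the scan priority, roughly like this (simplified
from get_scan_count(); the real code also applies the scan-balance and
memcg protection logic):

	unsigned long size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
	unsigned long scan = size >> sc->priority; /* priority 1 => ~half */

So even at priority 1 a single pass asks for only about half of each
list, which is why some cold, reclaimable anon pages may never get
looked at.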

Am I getting this wrong?

Byungchul

> --
> Best Regards,
> Huang, Ying
>
> > Am I missing something?
> >
> > Byungchul
> >
> >> --
> >> Best Regards,
> >> Huang, Ying
> >>
> >> > Byungchul
> >> >
> >> >> Byungchul
> >> >> >
> >> >> > Byungchul
> >> >> >
> >> >> > > for priority == 1 and failed to reclaim, we will restart. If this
> >> >> > > works, we can avoid to add another flag.
> >> >> > >
> >> >> > > > + sc.no_cache_trim_mode = 1;
> >> >> > > > + goto restart;
> >> >> > > > + }
> >> >> > > > +
> >> >> > > > if (!sc.nr_reclaimed)
> >> >> > > > pgdat->kswapd_failures++;
> >> >> > >
> >> >> > > --
> >> >> > > Best Regards,
> >> >> > > Huang, Ying

2024-03-05 07:16:46

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
>> Byungchul Park <[email protected]> writes:
>>
>> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
>> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
>> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> >> > > Byungchul Park <[email protected]> writes:
>> >> > >
>> >> > > > Changes from v5:
>> >> > > > 1. Make it retry the kswapd's scan priority loop with
>> >> > > > cache_trim_mode off *only if* the mode didn't work in the
>> >> > > > previous loop. (feedbacked by Huang Ying)
>> >> > > > 2. Take into account 'break's from the priority loop when making
>> >> > > > the decision whether to retry. (feedbacked by Huang Ying)
>> >> > > > 3. Update the test result in the commit message.
>> >> > > >
>> >> > > > Changes from v4:
>> >> > > > 1. Make other scans start with may_cache_trim_mode = 1.
>> >> > > >
>> >> > > > Changes from v3:
>> >> > > > 1. Update the test result in the commit message with v4.
>> >> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
>> >> > > > rather than forcing the mode off at the highest priority,
>> >> > > > when the mode doesn't work. (feedbacked by Johannes Weiner)
>> >> > > >
>> >> > > > Changes from v2:
>> >> > > > 1. Change the condition to stop cache_trim_mode.
>> >> > > >
>> >> > > > From - Stop it if it's at high scan priorities, 0 or 1.
>> >> > > > To - Stop it if it's at high scan priorities, 0 or 1, and
>> >> > > > the mode didn't work in the previous turn.
>> >> > > >
>> >> > > > (feedbacked by Huang Ying)
>> >> > > >
>> >> > > > 2. Change the test result in the commit message after testing
>> >> > > > with the new logic.
>> >> > > >
>> >> > > > Changes from v1:
>> >> > > > 1. Add a comment describing why this change is necessary in code
>> >> > > > and rewrite the commit message with how to reproduce and what
>> >> > > > the result is using vmstat. (feedbacked by Andrew Morton and
>> >> > > > Yu Zhao)
>> >> > > > 2. Change the condition to avoid cache_trim_mode from
>> >> > > > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
>> >> > > > where the priority goes to zero all the way. (feedbacked by
>> >> > > > Yu Zhao)
>> >> > > >
>> >> > > > --->8---
>> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
>> >> > > > From: Byungchul Park <[email protected]>
>> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
>> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>> >> > > >
>> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
>> >> > > > pages. However, it should be more careful to use the mode because it's
>> >> > > > going to prevent anon pages from being reclaimed even if there are a
>> >> > > > huge number of anon pages that are cold and should be reclaimed. Even
>> >> > > > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
>> >> > > > stopping kswapd from functioning until direct reclaim eventually works
>> >> > > > to resume kswapd.
>> >> > > >
>> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
>> >> > > > off again if the mode doesn't work for reclaim.
>> >> > > >
>> >> > > > The problematic behavior can be reproduced by:
>> >> > > >
>> >> > > > CONFIG_NUMA_BALANCING enabled
>> >> > > > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> >> > > > numa node0 (8GB local memory, 16 CPUs)
>> >> > > > numa node1 (8GB slow tier memory, no CPUs)
>> >> > > >
>> >> > > > Sequence:
>> >> > > >
>> >> > > > 1) echo 3 > /proc/sys/vm/drop_caches
>> >> > > > 2) To emulate the system with full of cold memory in local DRAM, run
>> >> > > > the following dummy program and never touch the region:
>> >> > > >
>> >> > > > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> >> > > > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> >> > > >
>> >> > > > 3) Run any memory intensive work e.g. XSBench.
>> >> > > > 4) Check if numa balancing is working e.i. promotion/demotion.
>> >> > > > 5) Iterate 1) ~ 4) until numa balancing stops.
>> >> > > >
>> >> > > > With this, you could see that promotion/demotion are not working because
>> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>> >> > > >
>> >> > > > Interesting vmstat delta's differences between before and after are like:
>> >> > > >
>> >> > > > +-----------------------+-------------------------------+
>> >> > > > | interesting vmstat | before | after |
>> >> > > > +-----------------------+-------------------------------+
>> >> > > > | nr_inactive_anon | 321935 | 1664772 |
>> >> > > > | nr_active_anon | 1780700 | 437834 |
>> >> > > > | nr_inactive_file | 30425 | 40882 |
>> >> > > > | nr_active_file | 14961 | 3012 |
>> >> > > > | pgpromote_success | 356 | 1293122 |
>> >> > > > | pgpromote_candidate | 21953245 | 1824148 |
>> >> > > > | pgactivate | 1844523 | 3311907 |
>> >> > > > | pgdeactivate | 50634 | 1554069 |
>> >> > > > | pgfault | 31100294 | 6518806 |
>> >> > > > | pgdemote_kswapd | 30856 | 2230821 |
>> >> > > > | pgscan_kswapd | 1861981 | 7667629 |
>> >> > > > | pgscan_anon | 1822930 | 7610583 |
>> >> > > > | pgscan_file | 39051 | 57046 |
>> >> > > > | pgsteal_anon | 386 | 2192033 |
>> >> > > > | pgsteal_file | 30470 | 38788 |
>> >> > > > | pageoutrun | 30 | 412 |
>> >> > > > | numa_hint_faults | 27418279 | 2875955 |
>> >> > > > | numa_pages_migrated | 356 | 1293122 |
>> >> > > > +-----------------------+-------------------------------+
>> >> > > >
>> >> > > > Signed-off-by: Byungchul Park <[email protected]>
>> >> > > > ---
>> >> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
>> >> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
>> >> > > >
>> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> >> > > > index bba207f41b14..6fe45eca7766 100644
>> >> > > > --- a/mm/vmscan.c
>> >> > > > +++ b/mm/vmscan.c
>> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
>> >> > > > /* Can folios be swapped as part of reclaim? */
>> >> > > > unsigned int may_swap:1;
>> >> > > >
>> >> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
>> >> > > > + unsigned int no_cache_trim_mode:1;
>> >> > > > +
>> >> > > > + /* Has cache_trim_mode failed at least once? */
>> >> > > > + unsigned int cache_trim_mode_failed:1;
>> >> > > > +
>> >> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
>> >> > > > unsigned int proactive:1;
>> >> > > >
>> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> >> > > > * anonymous pages.
>> >> > > > */
>> >> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> >> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> >> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
>> >> > > > + !sc->no_cache_trim_mode)
>> >> > > > sc->cache_trim_mode = 1;
>> >> > > > else
>> >> > > > sc->cache_trim_mode = 0;
>> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>> >> > > > */
>> >> > > > if (reclaimable)
>> >> > > > pgdat->kswapd_failures = 0;
>> >> > > > + else if (sc->cache_trim_mode)
>> >> > > > + sc->cache_trim_mode_failed = 1;
>> >> > > > }
>> >> > > >
>> >> > > > /*
>> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> >> > > > sc.priority--;
>> >> > > > } while (sc.priority >= 1);
>> >> > > >
>> >> > > > + /*
>> >> > > > + * Restart only if it went through the priority loop all the way,
>> >> > > > + * but cache_trim_mode didn't work.
>> >> > > > + */
>> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
>> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>> >> > >
>> >> > > Can we just use sc.cache_trim_mode (instead of
>> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>> >> >
>> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
>> >> > scanned each with its own value of cache_trim_mode. So we cannot use
>> >> > cache_trim_mode for that purpose.
>> >>
>> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
>> >> there's no objection to it. Thanks.
>> >
>> > I didn't want to introduce two additional flags either, but it was
>> > possible to make it do exactly what we want it to do thanks to the flags.
>> > I'd like to keep this version if possible unless there are any other
>> > objections on it.
>>
>> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick?
>> If so, why not? If not, why?
>
> kswapd might happen to go through:
>
> priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
> priority 11 + cache_trim_mode on -> fail
> priority 10 + cache_trim_mode on -> fail
> priority 9 + cache_trim_mode on -> fail
> priority 8 + cache_trim_mode on -> fail
> priority 7 + cache_trim_mode on -> fail
> priority 6 + cache_trim_mode on -> fail
> priority 5 + cache_trim_mode on -> fail
> priority 4 + cache_trim_mode on -> fail
> priority 3 + cache_trim_mode on -> fail
> priority 2 + cache_trim_mode on -> fail
> priority 1 + cache_trim_mode off -> fail
>
> I'd like to retry even in this case.

I don't think that we should retry in this case. If the following case
fails,

> priority 1 + cache_trim_mode off -> fail

Why would we succeed after retrying?

--
Best Regards,
Huang, Ying

> Am I missing something?
>
> Byungchul
>
>> --
>> Best Regards,
>> Huang, Ying
>>
>> > Byungchul
>> >
>> >> Byungchul
>> >> >
>> >> > Byungchul
>> >> >
>> >> > > for priority == 1 and failed to reclaim, we will restart. If this
>> >> > > works, we can avoid to add another flag.
>> >> > >
>> >> > > > + sc.no_cache_trim_mode = 1;
>> >> > > > + goto restart;
>> >> > > > + }
>> >> > > > +
>> >> > > > if (!sc.nr_reclaimed)
>> >> > > > pgdat->kswapd_failures++;
>> >> > >
>> >> > > --
>> >> > > Best Regards,
>> >> > > Huang, Ying

2024-03-05 07:43:59

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

Byungchul Park <[email protected]> writes:

> On Tue, Mar 05, 2024 at 03:04:48PM +0800, Huang, Ying wrote:
>> Byungchul Park <[email protected]> writes:
>>
>> > On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
>> >> Byungchul Park <[email protected]> writes:
>> >>
>> >> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
>> >> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
>> >> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
>> >> >> > > Byungchul Park <[email protected]> writes:
>> >> >> > >
>> >> >> > > > Changes from v5:
>> >> >> > > > 1. Make it retry the kswapd's scan priority loop with
>> >> >> > > > cache_trim_mode off *only if* the mode didn't work in the
>> >> >> > > > previous loop. (feedbacked by Huang Ying)
>> >> >> > > > 2. Take into account 'break's from the priority loop when making
>> >> >> > > > the decision whether to retry. (feedbacked by Huang Ying)
>> >> >> > > > 3. Update the test result in the commit message.
>> >> >> > > >
>> >> >> > > > Changes from v4:
>> >> >> > > > 1. Make other scans start with may_cache_trim_mode = 1.
>> >> >> > > >
>> >> >> > > > Changes from v3:
>> >> >> > > > 1. Update the test result in the commit message with v4.
>> >> >> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
>> >> >> > > > rather than forcing the mode off at the highest priority,
>> >> >> > > > when the mode doesn't work. (feedbacked by Johannes Weiner)
>> >> >> > > >
>> >> >> > > > Changes from v2:
>> >> >> > > > 1. Change the condition to stop cache_trim_mode.
>> >> >> > > >
>> >> >> > > > From - Stop it if it's at high scan priorities, 0 or 1.
>> >> >> > > > To - Stop it if it's at high scan priorities, 0 or 1, and
>> >> >> > > > the mode didn't work in the previous turn.
>> >> >> > > >
>> >> >> > > > (feedbacked by Huang Ying)
>> >> >> > > >
>> >> >> > > > 2. Change the test result in the commit message after testing
>> >> >> > > > with the new logic.
>> >> >> > > >
>> >> >> > > > Changes from v1:
>> >> >> > > > 1. Add a comment describing why this change is necessary in code
>> >> >> > > > and rewrite the commit message with how to reproduce and what
>> >> >> > > > the result is using vmstat. (feedbacked by Andrew Morton and
>> >> >> > > > Yu Zhao)
>> >> >> > > > 2. Change the condition to avoid cache_trim_mode from
>> >> >> > > > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
>> >> >> > > > where the priority goes to zero all the way. (feedbacked by
>> >> >> > > > Yu Zhao)
>> >> >> > > >
>> >> >> > > > --->8---
>> >> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
>> >> >> > > > From: Byungchul Park <[email protected]>
>> >> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
>> >> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
>> >> >> > > >
>> >> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
>> >> >> > > > pages. However, it should be more careful to use the mode because it's
>> >> >> > > > going to prevent anon pages from being reclaimed even if there are a
>> >> >> > > > huge number of anon pages that are cold and should be reclaimed. Even
>> >> >> > > > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
>> >> >> > > > stopping kswapd from functioning until direct reclaim eventually works
>> >> >> > > > to resume kswapd.
>> >> >> > > >
>> >> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
>> >> >> > > > off again if the mode doesn't work for reclaim.
>> >> >> > > >
>> >> >> > > > The problematic behavior can be reproduced by:
>> >> >> > > >
>> >> >> > > > CONFIG_NUMA_BALANCING enabled
>> >> >> > > > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
>> >> >> > > > numa node0 (8GB local memory, 16 CPUs)
>> >> >> > > > numa node1 (8GB slow tier memory, no CPUs)
>> >> >> > > >
>> >> >> > > > Sequence:
>> >> >> > > >
>> >> >> > > > 1) echo 3 > /proc/sys/vm/drop_caches
>> >> >> > > > 2) To emulate the system with full of cold memory in local DRAM, run
>> >> >> > > > the following dummy program and never touch the region:
>> >> >> > > >
>> >> >> > > > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
>> >> >> > > > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
>> >> >> > > >
>> >> >> > > > 3) Run any memory intensive work e.g. XSBench.
>> >> >> > > > 4) Check if numa balancing is working e.i. promotion/demotion.
>> >> >> > > > 5) Iterate 1) ~ 4) until numa balancing stops.
>> >> >> > > >
>> >> >> > > > With this, you could see that promotion/demotion are not working because
>> >> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
>> >> >> > > >
>> >> >> > > > Interesting vmstat delta's differences between before and after are like:
>> >> >> > > >
>> >> >> > > > +-----------------------+-------------------------------+
>> >> >> > > > | interesting vmstat | before | after |
>> >> >> > > > +-----------------------+-------------------------------+
>> >> >> > > > | nr_inactive_anon | 321935 | 1664772 |
>> >> >> > > > | nr_active_anon | 1780700 | 437834 |
>> >> >> > > > | nr_inactive_file | 30425 | 40882 |
>> >> >> > > > | nr_active_file | 14961 | 3012 |
>> >> >> > > > | pgpromote_success | 356 | 1293122 |
>> >> >> > > > | pgpromote_candidate | 21953245 | 1824148 |
>> >> >> > > > | pgactivate | 1844523 | 3311907 |
>> >> >> > > > | pgdeactivate | 50634 | 1554069 |
>> >> >> > > > | pgfault | 31100294 | 6518806 |
>> >> >> > > > | pgdemote_kswapd | 30856 | 2230821 |
>> >> >> > > > | pgscan_kswapd | 1861981 | 7667629 |
>> >> >> > > > | pgscan_anon | 1822930 | 7610583 |
>> >> >> > > > | pgscan_file | 39051 | 57046 |
>> >> >> > > > | pgsteal_anon | 386 | 2192033 |
>> >> >> > > > | pgsteal_file | 30470 | 38788 |
>> >> >> > > > | pageoutrun | 30 | 412 |
>> >> >> > > > | numa_hint_faults | 27418279 | 2875955 |
>> >> >> > > > | numa_pages_migrated | 356 | 1293122 |
>> >> >> > > > +-----------------------+-------------------------------+
>> >> >> > > >
>> >> >> > > > Signed-off-by: Byungchul Park <[email protected]>
>> >> >> > > > ---
>> >> >> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
>> >> >> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
>> >> >> > > >
>> >> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> >> >> > > > index bba207f41b14..6fe45eca7766 100644
>> >> >> > > > --- a/mm/vmscan.c
>> >> >> > > > +++ b/mm/vmscan.c
>> >> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
>> >> >> > > > /* Can folios be swapped as part of reclaim? */
>> >> >> > > > unsigned int may_swap:1;
>> >> >> > > >
>> >> >> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
>> >> >> > > > + unsigned int no_cache_trim_mode:1;
>> >> >> > > > +
>> >> >> > > > + /* Has cache_trim_mode failed at least once? */
>> >> >> > > > + unsigned int cache_trim_mode_failed:1;
>> >> >> > > > +
>> >> >> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
>> >> >> > > > unsigned int proactive:1;
>> >> >> > > >
>> >> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
>> >> >> > > > * anonymous pages.
>> >> >> > > > */
>> >> >> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
>> >> >> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
>> >> >> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
>> >> >> > > > + !sc->no_cache_trim_mode)
>> >> >> > > > sc->cache_trim_mode = 1;
>> >> >> > > > else
>> >> >> > > > sc->cache_trim_mode = 0;
>> >> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>> >> >> > > > */
>> >> >> > > > if (reclaimable)
>> >> >> > > > pgdat->kswapd_failures = 0;
>> >> >> > > > + else if (sc->cache_trim_mode)
>> >> >> > > > + sc->cache_trim_mode_failed = 1;
>> >> >> > > > }
>> >> >> > > >
>> >> >> > > > /*
>> >> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>> >> >> > > > sc.priority--;
>> >> >> > > > } while (sc.priority >= 1);
>> >> >> > > >
>> >> >> > > > + /*
>> >> >> > > > + * Restart only if it went through the priority loop all the way,
>> >> >> > > > + * but cache_trim_mode didn't work.
>> >> >> > > > + */
>> >> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
>> >> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
>> >> >> > >
>> >> >> > > Can we just use sc.cache_trim_mode (instead of
>> >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
>> >> >> >
>> >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
>> >> >> > scanned each with its own value of cache_trim_mode. So we cannot use
>> >> >> > cache_trim_mode for that purpose.
>> >> >>
>> >> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
>> >> >> there's no objection to it. Thanks.
>> >> >
>> >> > I didn't want to introduce two additional flags either, but it was
>> >> > possible to make it do exactly what we want it to do thanks to the flags.
>> >> > I'd like to keep this version if possible unless there are any other
>> >> > objections on it.
>> >>
>> >> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick?
>> >> If so, why not? If not, why?
>> >
>> > kswapd might happen to go through:
>> >
>> > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
>> > priority 11 + cache_trim_mode on -> fail
>> > priority 10 + cache_trim_mode on -> fail
>> > priority 9 + cache_trim_mode on -> fail
>> > priority 8 + cache_trim_mode on -> fail
>> > priority 7 + cache_trim_mode on -> fail
>> > priority 6 + cache_trim_mode on -> fail
>> > priority 5 + cache_trim_mode on -> fail
>> > priority 4 + cache_trim_mode on -> fail
>> > priority 3 + cache_trim_mode on -> fail
>> > priority 2 + cache_trim_mode on -> fail
>> > priority 1 + cache_trim_mode off -> fail
>> >
>> > I'd like to retry even in this case.
>>
>> I don't think that we should retry in this case. If the following case
>> fails,
>>
>> > priority 1 + cache_trim_mode off -> fail
>>
>> Why will we succeed after retrying?
>
> At priority 1, anon pages will be partially scanned. However, there
> might be anon pages that have never been scanned but can be reclaimed.
>
> Do I get it wrong?

Yes. In theory, that's possible. But do you think it will be a
practical issue, i.e. that pgdat->kswapd_failures will actually reach
its max value?
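
For context, hitting that threshold is what parks kswapd entirely; the
existing gating looks roughly like this (simplified, with
MAX_RECLAIM_RETRIES being 16):

	/* in wakeup_kswapd(): don't bother waking a hopeless kswapd */
	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
		return;

	/* in prepare_kswapd_sleep(): let kswapd sleep instead of looping */
	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
		return true;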

--
Best Regards,
Huang, Ying

> Byungchul
>
>> --
>> Best Regards,
>> Huang, Ying
>>
>> > Am I missing something?
>> >
>> > Byungchul
>> >
>> >> --
>> >> Best Regards,
>> >> Huang, Ying
>> >>
>> >> > Byungchul
>> >> >
>> >> >> Byungchul
>> >> >> >
>> >> >> > Byungchul
>> >> >> >
>> >> >> > > for priority == 1 and failed to reclaim, we will restart. If this
>> >> >> > > works, we can avoid to add another flag.
>> >> >> > >
>> >> >> > > > + sc.no_cache_trim_mode = 1;
>> >> >> > > > + goto restart;
>> >> >> > > > + }
>> >> >> > > > +
>> >> >> > > > if (!sc.nr_reclaimed)
>> >> >> > > > pgdat->kswapd_failures++;
>> >> >> > >
>> >> >> > > --
>> >> >> > > Best Regards,
>> >> >> > > Huang, Ying

2024-03-05 07:57:14

by Byungchul Park

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 03:35:35PM +0800, Huang, Ying wrote:
> Byungchul Park <[email protected]> writes:
>
> > On Tue, Mar 05, 2024 at 03:04:48PM +0800, Huang, Ying wrote:
> >> Byungchul Park <[email protected]> writes:
> >>
> >> > On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
> >> >> Byungchul Park <[email protected]> writes:
> >> >>
> >> >> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> >> >> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> >> >> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> >> >> >> > > Byungchul Park <[email protected]> writes:
> >> >> >> > >
> >> >> >> > > > Changes from v5:
> >> >> >> > > > 1. Make it retry the kswapd's scan priority loop with
> >> >> >> > > > cache_trim_mode off *only if* the mode didn't work in the
> >> >> >> > > > previous loop. (feedbacked by Huang Ying)
> >> >> >> > > > 2. Take into account 'break's from the priority loop when making
> >> >> >> > > > the decision whether to retry. (feedbacked by Huang Ying)
> >> >> >> > > > 3. Update the test result in the commit message.
> >> >> >> > > >
> >> >> >> > > > Changes from v4:
> >> >> >> > > > 1. Make other scans start with may_cache_trim_mode = 1.
> >> >> >> > > >
> >> >> >> > > > Changes from v3:
> >> >> >> > > > 1. Update the test result in the commit message with v4.
> >> >> >> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
> >> >> >> > > > rather than forcing the mode off at the highest priority,
> >> >> >> > > > when the mode doesn't work. (feedbacked by Johannes Weiner)
> >> >> >> > > >
> >> >> >> > > > Changes from v2:
> >> >> >> > > > 1. Change the condition to stop cache_trim_mode.
> >> >> >> > > >
> >> >> >> > > > From - Stop it if it's at high scan priorities, 0 or 1.
> >> >> >> > > > To - Stop it if it's at high scan priorities, 0 or 1, and
> >> >> >> > > > the mode didn't work in the previous turn.
> >> >> >> > > >
> >> >> >> > > > (feedbacked by Huang Ying)
> >> >> >> > > >
> >> >> >> > > > 2. Change the test result in the commit message after testing
> >> >> >> > > > with the new logic.
> >> >> >> > > >
> >> >> >> > > > Changes from v1:
> >> >> >> > > > 1. Add a comment describing why this change is necessary in code
> >> >> >> > > > and rewrite the commit message with how to reproduce and what
> >> >> >> > > > the result is using vmstat. (feedbacked by Andrew Morton and
> >> >> >> > > > Yu Zhao)
> >> >> >> > > > 2. Change the condition to avoid cache_trim_mode from
> >> >> >> > > > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> >> >> >> > > > where the priority goes to zero all the way. (feedbacked by
> >> >> >> > > > Yu Zhao)
> >> >> >> > > >
> >> >> >> > > > --->8---
> >> >> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
> >> >> >> > > > From: Byungchul Park <[email protected]>
> >> >> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
> >> >> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> >> >> >> > > >
> >> >> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> >> >> >> > > > pages. However, it should be more careful to use the mode because it's
> >> >> >> > > > going to prevent anon pages from being reclaimed even if there are a
> >> >> >> > > > huge number of anon pages that are cold and should be reclaimed. Even
> >> >> >> > > > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> >> >> >> > > > stopping kswapd from functioning until direct reclaim eventually works
> >> >> >> > > > to resume kswapd.
> >> >> >> > > >
> >> >> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
> >> >> >> > > > off again if the mode doesn't work for reclaim.
> >> >> >> > > >
> >> >> >> > > > The problematic behavior can be reproduced by:
> >> >> >> > > >
> >> >> >> > > > CONFIG_NUMA_BALANCING enabled
> >> >> >> > > > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> >> >> >> > > > numa node0 (8GB local memory, 16 CPUs)
> >> >> >> > > > numa node1 (8GB slow tier memory, no CPUs)
> >> >> >> > > >
> >> >> >> > > > Sequence:
> >> >> >> > > >
> >> >> >> > > > 1) echo 3 > /proc/sys/vm/drop_caches
> >> >> >> > > > 2) To emulate the system with full of cold memory in local DRAM, run
> >> >> >> > > > the following dummy program and never touch the region:
> >> >> >> > > >
> >> >> >> > > > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> >> >> >> > > > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> >> >> >> > > >
> >> >> >> > > > 3) Run any memory intensive work e.g. XSBench.
> >> >> >> > > > 4) Check if numa balancing is working e.i. promotion/demotion.
> >> >> >> > > > 5) Iterate 1) ~ 4) until numa balancing stops.
> >> >> >> > > >
> >> >> >> > > > With this, you could see that promotion/demotion are not working because
> >> >> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
> >> >> >> > > >
> >> >> >> > > > Interesting vmstat delta's differences between before and after are like:
> >> >> >> > > >
> >> >> >> > > > +-----------------------+-------------------------------+
> >> >> >> > > > | interesting vmstat | before | after |
> >> >> >> > > > +-----------------------+-------------------------------+
> >> >> >> > > > | nr_inactive_anon | 321935 | 1664772 |
> >> >> >> > > > | nr_active_anon | 1780700 | 437834 |
> >> >> >> > > > | nr_inactive_file | 30425 | 40882 |
> >> >> >> > > > | nr_active_file | 14961 | 3012 |
> >> >> >> > > > | pgpromote_success | 356 | 1293122 |
> >> >> >> > > > | pgpromote_candidate | 21953245 | 1824148 |
> >> >> >> > > > | pgactivate | 1844523 | 3311907 |
> >> >> >> > > > | pgdeactivate | 50634 | 1554069 |
> >> >> >> > > > | pgfault | 31100294 | 6518806 |
> >> >> >> > > > | pgdemote_kswapd | 30856 | 2230821 |
> >> >> >> > > > | pgscan_kswapd | 1861981 | 7667629 |
> >> >> >> > > > | pgscan_anon | 1822930 | 7610583 |
> >> >> >> > > > | pgscan_file | 39051 | 57046 |
> >> >> >> > > > | pgsteal_anon | 386 | 2192033 |
> >> >> >> > > > | pgsteal_file | 30470 | 38788 |
> >> >> >> > > > | pageoutrun | 30 | 412 |
> >> >> >> > > > | numa_hint_faults | 27418279 | 2875955 |
> >> >> >> > > > | numa_pages_migrated | 356 | 1293122 |
> >> >> >> > > > +-----------------------+-------------------------------+
> >> >> >> > > >
> >> >> >> > > > Signed-off-by: Byungchul Park <[email protected]>
> >> >> >> > > > ---
> >> >> >> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
> >> >> >> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
> >> >> >> > > >
> >> >> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> >> >> > > > index bba207f41b14..6fe45eca7766 100644
> >> >> >> > > > --- a/mm/vmscan.c
> >> >> >> > > > +++ b/mm/vmscan.c
> >> >> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
> >> >> >> > > > /* Can folios be swapped as part of reclaim? */
> >> >> >> > > > unsigned int may_swap:1;
> >> >> >> > > >
> >> >> >> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> >> >> >> > > > + unsigned int no_cache_trim_mode:1;
> >> >> >> > > > +
> >> >> >> > > > + /* Has cache_trim_mode failed at least once? */
> >> >> >> > > > + unsigned int cache_trim_mode_failed:1;
> >> >> >> > > > +
> >> >> >> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
> >> >> >> > > > unsigned int proactive:1;
> >> >> >> > > >
> >> >> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> >> >> >> > > > * anonymous pages.
> >> >> >> > > > */
> >> >> >> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> >> >> >> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> >> >> >> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> >> >> >> > > > + !sc->no_cache_trim_mode)
> >> >> >> > > > sc->cache_trim_mode = 1;
> >> >> >> > > > else
> >> >> >> > > > sc->cache_trim_mode = 0;
> >> >> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >> >> >> > > > */
> >> >> >> > > > if (reclaimable)
> >> >> >> > > > pgdat->kswapd_failures = 0;
> >> >> >> > > > + else if (sc->cache_trim_mode)
> >> >> >> > > > + sc->cache_trim_mode_failed = 1;
> >> >> >> > > > }
> >> >> >> > > >
> >> >> >> > > > /*
> >> >> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> >> >> >> > > > sc.priority--;
> >> >> >> > > > } while (sc.priority >= 1);
> >> >> >> > > >
> >> >> >> > > > + /*
> >> >> >> > > > + * Restart only if it went through the priority loop all the way,
> >> >> >> > > > + * but cache_trim_mode didn't work.
> >> >> >> > > > + */
> >> >> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> >> >> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> >> >> >> > >
> >> >> >> > > Can we just use sc.cache_trim_mode (instead of
> >> >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> >> >> >> >
> >> >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
> >> >> >> > scanned each with its own value of cache_trim_mode. So we cannot use
> >> >> >> > cache_trim_mode for that purpose.
> >> >> >>
> >> >> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
> >> >> >> there's no objection to it. Thanks.
> >> >> >
> >> >> > I didn't want to introduce two additional flags either, but it was
> >> >> > possible to make it do exactly what we want it to do thanks to the flags.
> >> >> > I'd like to keep this version if possible unless there are any other
> >> >> > objections on it.
> >> >>
> >> >> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick?
> >> >> If so, why not? If not, why?
> >> >
> >> > kswapd might happen to go through:
> >> >
> >> > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
> >> > priority 11 + cache_trim_mode on -> fail
> >> > priority 10 + cache_trim_mode on -> fail
> >> > priority 9 + cache_trim_mode on -> fail
> >> > priority 8 + cache_trim_mode on -> fail
> >> > priority 7 + cache_trim_mode on -> fail
> >> > priority 6 + cache_trim_mode on -> fail
> >> > priority 5 + cache_trim_mode on -> fail
> >> > priority 4 + cache_trim_mode on -> fail
> >> > priority 3 + cache_trim_mode on -> fail
> >> > priority 2 + cache_trim_mode on -> fail
> >> > priority 1 + cache_trim_mode off -> fail
> >> >
> >> > I'd like to retry even in this case.
> >>
> >> I don't think that we should retry in this case. If the following case
> >> fails,
> >>
> >> > priority 1 + cache_trim_mode off -> fail
> >>
> >> Why will we succeed after retrying?
> >
> > At priority 1, anon pages will be partially scanned. However, there
> > might be anon pages that have never been scanned but can be reclaimed.
> >
> > Do I get it wrong?
>
> Yes. In theory, that's possible. But do you think that will be some
> practical issue? So that, pgdat->kswapd_failures will reach max value?

v6 is based on what Johannes suggested. I thought it was the more
correct way to fix the issue.

Yeah, I also think checking cache_trim_mode only at kswapd's highest
priority would work for this issue.
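
Roughly, that alternative would look like the following in
prepare_scan_control() (a hypothetical sketch only, condensed from the
earlier v2/v3 discussion, not actual tested code):

	/*
	 * Hypothetical: keep the sticky cache_trim_mode_failed bit, but
	 * instead of restarting the whole priority loop, refuse to turn
	 * cache_trim_mode on at the highest priorities (1 and 0) once it
	 * has already failed.
	 */
	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
	    (sc->priority > 1 || !sc->cache_trim_mode_failed))
		sc->cache_trim_mode = 1;
	else
		sc->cache_trim_mode = 0;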

I'd like to hear Johannes's opinion, or others'.

Byungchul

> --
> Best Regards,
> Huang, Ying
>
> > Byungchul
> >
> >> --
> >> Best Regards,
> >> Huang, Ying
> >>
> >> > Am I missing something?
> >> >
> >> > Byungchul
> >> >
> >> >> --
> >> >> Best Regards,
> >> >> Huang, Ying
> >> >>
> >> >> > Byungchul
> >> >> >
> >> >> >> Byungchul
> >> >> >> >
> >> >> >> > Byungchul
> >> >> >> >
> >> >> >> > > for priority == 1 and failed to reclaim, we will restart. If this
> >> >> >> > > works, we can avoid to add another flag.
> >> >> >> > >
> >> >> >> > > > + sc.no_cache_trim_mode = 1;
> >> >> >> > > > + goto restart;
> >> >> >> > > > + }
> >> >> >> > > > +
> >> >> >> > > > if (!sc.nr_reclaimed)
> >> >> >> > > > pgdat->kswapd_failures++;
> >> >> >> > >
> >> >> >> > > --
> >> >> >> > > Best Regards,
> >> >> >> > > Huang, Ying

2024-03-05 12:51:10

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

On Tue, Mar 05, 2024 at 04:55:38PM +0900, Byungchul Park wrote:
> On Tue, Mar 05, 2024 at 03:35:35PM +0800, Huang, Ying wrote:
> > Byungchul Park <[email protected]> writes:
> >
> > > On Tue, Mar 05, 2024 at 03:04:48PM +0800, Huang, Ying wrote:
> > >> Byungchul Park <[email protected]> writes:
> > >>
> > >> > On Tue, Mar 05, 2024 at 02:18:33PM +0800, Huang, Ying wrote:
> > >> >> Byungchul Park <[email protected]> writes:
> > >> >>
> > >> >> > On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> > >> >> >> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> > >> >> >> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> > >> >> >> > > Byungchul Park <[email protected]> writes:
> > >> >> >> > >
> > >> >> >> > > > Changes from v5:
> > >> >> >> > > > 1. Make it retry the kswapd's scan priority loop with
> > >> >> >> > > > cache_trim_mode off *only if* the mode didn't work in the
> > >> >> >> > > > previous loop. (feedbacked by Huang Ying)
> > >> >> >> > > > 2. Take into account 'break's from the priority loop when making
> > >> >> >> > > > the decision whether to retry. (feedbacked by Huang Ying)
> > >> >> >> > > > 3. Update the test result in the commit message.
> > >> >> >> > > >
> > >> >> >> > > > Changes from v4:
> > >> >> >> > > > 1. Make other scans start with may_cache_trim_mode = 1.
> > >> >> >> > > >
> > >> >> >> > > > Changes from v3:
> > >> >> >> > > > 1. Update the test result in the commit message with v4.
> > >> >> >> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
> > >> >> >> > > > rather than forcing the mode off at the highest priority,
> > >> >> >> > > > when the mode doesn't work. (feedbacked by Johannes Weiner)
> > >> >> >> > > >
> > >> >> >> > > > Changes from v2:
> > >> >> >> > > > 1. Change the condition to stop cache_trim_mode.
> > >> >> >> > > >
> > >> >> >> > > > From - Stop it if it's at high scan priorities, 0 or 1.
> > >> >> >> > > > To - Stop it if it's at high scan priorities, 0 or 1, and
> > >> >> >> > > > the mode didn't work in the previous turn.
> > >> >> >> > > >
> > >> >> >> > > > (feedbacked by Huang Ying)
> > >> >> >> > > >
> > >> >> >> > > > 2. Change the test result in the commit message after testing
> > >> >> >> > > > with the new logic.
> > >> >> >> > > >
> > >> >> >> > > > Changes from v1:
> > >> >> >> > > > 1. Add a comment describing why this change is necessary in code
> > >> >> >> > > > and rewrite the commit message with how to reproduce and what
> > >> >> >> > > > the result is using vmstat. (feedbacked by Andrew Morton and
> > >> >> >> > > > Yu Zhao)
> > >> >> >> > > > 2. Change the condition to avoid cache_trim_mode from
> > >> >> >> > > > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> > >> >> >> > > > where the priority goes to zero all the way. (feedbacked by
> > >> >> >> > > > Yu Zhao)
> > >> >> >> > > >
> > >> >> >> > > > --->8---
> > >> >> >> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
> > >> >> >> > > > From: Byungchul Park <[email protected]>
> > >> >> >> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
> > >> >> >> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> > >> >> >> > > >
> > >> >> >> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> > >> >> >> > > > pages. However, it should be more careful to use the mode because it's
> > >> >> >> > > > going to prevent anon pages from being reclaimed even if there are a
> > >> >> >> > > > huge number of anon pages that are cold and should be reclaimed. Even
> > >> >> >> > > > worse, that leads kswapd_failures to reach MAX_RECLAIM_RETRIES and
> > >> >> >> > > > stopping kswapd from functioning until direct reclaim eventually works
> > >> >> >> > > > to resume kswapd.
> > >> >> >> > > >
> > >> >> >> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
> > >> >> >> > > > off again if the mode doesn't work for reclaim.
> > >> >> >> > > >
> > >> >> >> > > > The problematic behavior can be reproduced by:
> > >> >> >> > > >
> > >> >> >> > > > CONFIG_NUMA_BALANCING enabled
> > >> >> >> > > > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > >> >> >> > > > numa node0 (8GB local memory, 16 CPUs)
> > >> >> >> > > > numa node1 (8GB slow tier memory, no CPUs)
> > >> >> >> > > >
> > >> >> >> > > > Sequence:
> > >> >> >> > > >
> > >> >> >> > > > 1) echo 3 > /proc/sys/vm/drop_caches
> > >> >> >> > > > 2) To emulate the system with full of cold memory in local DRAM, run
> > >> >> >> > > > the following dummy program and never touch the region:
> > >> >> >> > > >
> > >> >> >> > > > mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> > >> >> >> > > > MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> > >> >> >> > > >
> > >> >> >> > > > 3) Run any memory intensive work e.g. XSBench.
> > >> >> >> > > > 4) Check if numa balancing is working e.i. promotion/demotion.
> > >> >> >> > > > 5) Iterate 1) ~ 4) until numa balancing stops.
> > >> >> >> > > >
> > >> >> >> > > > With this, you could see that promotion/demotion are not working because
> > >> >> >> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
> > >> >> >> > > >
> > >> >> >> > > > Interesting vmstat delta's differences between before and after are like:
> > >> >> >> > > >
> > >> >> >> > > > +-----------------------+-------------------------------+
> > >> >> >> > > > | interesting vmstat | before | after |
> > >> >> >> > > > +-----------------------+-------------------------------+
> > >> >> >> > > > | nr_inactive_anon | 321935 | 1664772 |
> > >> >> >> > > > | nr_active_anon | 1780700 | 437834 |
> > >> >> >> > > > | nr_inactive_file | 30425 | 40882 |
> > >> >> >> > > > | nr_active_file | 14961 | 3012 |
> > >> >> >> > > > | pgpromote_success | 356 | 1293122 |
> > >> >> >> > > > | pgpromote_candidate | 21953245 | 1824148 |
> > >> >> >> > > > | pgactivate | 1844523 | 3311907 |
> > >> >> >> > > > | pgdeactivate | 50634 | 1554069 |
> > >> >> >> > > > | pgfault | 31100294 | 6518806 |
> > >> >> >> > > > | pgdemote_kswapd | 30856 | 2230821 |
> > >> >> >> > > > | pgscan_kswapd | 1861981 | 7667629 |
> > >> >> >> > > > | pgscan_anon | 1822930 | 7610583 |
> > >> >> >> > > > | pgscan_file | 39051 | 57046 |
> > >> >> >> > > > | pgsteal_anon | 386 | 2192033 |
> > >> >> >> > > > | pgsteal_file | 30470 | 38788 |
> > >> >> >> > > > | pageoutrun | 30 | 412 |
> > >> >> >> > > > | numa_hint_faults | 27418279 | 2875955 |
> > >> >> >> > > > | numa_pages_migrated | 356 | 1293122 |
> > >> >> >> > > > +-----------------------+-------------------------------+
> > >> >> >> > > >
> > >> >> >> > > > Signed-off-by: Byungchul Park <[email protected]>
> > >> >> >> > > > ---
> > >> >> >> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
> > >> >> >> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
> > >> >> >> > > >
> > >> >> >> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > >> >> >> > > > index bba207f41b14..6fe45eca7766 100644
> > >> >> >> > > > --- a/mm/vmscan.c
> > >> >> >> > > > +++ b/mm/vmscan.c
> > >> >> >> > > > @@ -108,6 +108,12 @@ struct scan_control {
> > >> >> >> > > > /* Can folios be swapped as part of reclaim? */
> > >> >> >> > > > unsigned int may_swap:1;
> > >> >> >> > > >
> > >> >> >> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> > >> >> >> > > > + unsigned int no_cache_trim_mode:1;
> > >> >> >> > > > +
> > >> >> >> > > > + /* Has cache_trim_mode failed at least once? */
> > >> >> >> > > > + unsigned int cache_trim_mode_failed:1;
> > >> >> >> > > > +
> > >> >> >> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
> > >> >> >> > > > unsigned int proactive:1;
> > >> >> >> > > >
> > >> >> >> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> > >> >> >> > > > * anonymous pages.
> > >> >> >> > > > */
> > >> >> >> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> > >> >> >> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> > >> >> >> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> > >> >> >> > > > + !sc->no_cache_trim_mode)
> > >> >> >> > > > sc->cache_trim_mode = 1;
> > >> >> >> > > > else
> > >> >> >> > > > sc->cache_trim_mode = 0;
> > >> >> >> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> > >> >> >> > > > */
> > >> >> >> > > > if (reclaimable)
> > >> >> >> > > > pgdat->kswapd_failures = 0;
> > >> >> >> > > > + else if (sc->cache_trim_mode)
> > >> >> >> > > > + sc->cache_trim_mode_failed = 1;
> > >> >> >> > > > }
> > >> >> >> > > >
> > >> >> >> > > > /*
> > >> >> >> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > >> >> >> > > > sc.priority--;
> > >> >> >> > > > } while (sc.priority >= 1);
> > >> >> >> > > >
> > >> >> >> > > > + /*
> > >> >> >> > > > + * Restart only if it went through the priority loop all the way,
> > >> >> >> > > > + * but cache_trim_mode didn't work.
> > >> >> >> > > > + */
> > >> >> >> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> > >> >> >> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> > >> >> >> > >
> > >> >> >> > > Can we just use sc.cache_trim_mode (instead of
> > >> >> >> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> > >> >> >> >
> > >> >> >> > As Johannes mentioned, within a priority scan, all the numa nodes are
> > >> >> >> > scanned each with its own value of cache_trim_mode. So we cannot use
> > >> >> >> > cache_trim_mode for that purpose.
> > >> >> >>
> > >> >> >> Ah, okay. Confining to kswapd, that might make sense. I will apply it if
> > >> >> >> there's no objection to it. Thanks.
> > >> >> >
> > >> >> > I didn't want to introduce two additional flags either, but it was
> > >> >> > possible to make it do exactly what we want it to do thanks to the flags.
> > >> >> > I'd like to keep this version if possible unless there are any other
> > >> >> > objections on it.
> > >> >>
> > >> >> Sorry, I'm confused. Whether does "cache_trim_mode == 1" do the trick?
> > >> >> If so, why not? If not, why?
> > >> >
> > >> > kswapd might happen to go through:
> > >> >
> > >> > priority 12(== DEF_PRIORITY) + cache_trim_mode on -> fail
> > >> > priority 11 + cache_trim_mode on -> fail
> > >> > priority 10 + cache_trim_mode on -> fail
> > >> > priority 9 + cache_trim_mode on -> fail
> > >> > priority 8 + cache_trim_mode on -> fail
> > >> > priority 7 + cache_trim_mode on -> fail
> > >> > priority 6 + cache_trim_mode on -> fail
> > >> > priority 5 + cache_trim_mode on -> fail
> > >> > priority 4 + cache_trim_mode on -> fail
> > >> > priority 3 + cache_trim_mode on -> fail
> > >> > priority 2 + cache_trim_mode on -> fail
> > >> > priority 1 + cache_trim_mode off -> fail
> > >> >
> > >> > I'd like to retry even in this case.
> > >>
> > >> I don't think that we should retry in this case. If the following case
> > >> fails,
> > >>
> > >> > priority 1 + cache_trim_mode off -> fail
> > >>
> > >> Why will we succeed after retrying?
> > >
> > > At priority 1, anon pages will be partially scanned. However, there
> > > might be anon pages that have never been scanned but can be reclaimed.
> > >
> > > Do I get it wrong?
> >
> > Yes. In theory, that's possible. But do you think that will be some
> > practical issue? So that, pgdat->kswapd_failures will reach max value?
>
> v6 is based on what Johannes suggested. I thought it's more right way to
> fix the issue.
>
> Yeah, I also think checking cache_trim_mode only at the kswapd's highest
> priorty would manage to work for the issue.
>
> I'd like to listen to Johannes's opinion or others.

I slightly prefer the extra flag because it more closely matches the
pattern we have for skipped_deactivate and memcg_low_skipped. IMO
repeating the pattern is better than inventing something new, and this
patch as-is is simple enough.
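
For reference, the existing retry pattern I mean sits at the end of
do_try_to_free_pages() and looks roughly like this (condensed,
surrounding code elided):

	/* Restart with forcible deactivation if the estimates were off */
	if (sc->skipped_deactivate) {
		sc->priority = initial_priority;
		sc->force_deactivate = 1;
		sc->skipped_deactivate = 0;
		goto retry;
	}

	/* Untapped cgroup reserves?  Don't OOM, retry. */
	if (sc->memcg_low_skipped) {
		sc->priority = initial_priority;
		sc->memcg_low_reclaim = 1;
		sc->memcg_low_skipped = 0;
		goto retry;
	}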

There is also the thought that we might expand this to direct reclaim
in the future, right? I know your immediate problem at hand is with
kswapd but at least in theory this should apply to direct reclaim as
well, unless I'm missing something. So keeping it generic and easily
reusable seems like a good idea.

Just my 2c.