Changes from v1:
1. Add a comment in the code describing why this change is necessary,
   and rewrite the commit message to explain how to reproduce the
   issue and what the result looks like in vmstat. (based on feedback
   from Andrew Morton and Yu Zhao)
2. Change the condition that avoids cache_trim_mode from
   'sc->priority != 1' to 'sc->priority > 1' to cover cases where the
   priority goes all the way down to zero. (based on feedback from
   Yu Zhao)
--->8---
From 07e0baab368160e50b6ca35d95745168aa60e217 Mon Sep 17 00:00:00 2001
From: Byungchul Park <[email protected]>
Date: Thu, 22 Feb 2024 14:50:17 +0900
Subject: [PATCH v2] mm, vmscan: don't turn on cache_trim_mode at high scan priorities

With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
pages. However, we should be more careful about turning the mode on,
because it prevents anon pages from being reclaimed even when there is
a huge number of cold anon pages that should be reclaimed. Even worse,
that can make kswapd_failures reach MAX_RECLAIM_RETRIES and stop kswapd
until direct reclaim eventually works and resumes it. So this is more
of a bug fix than a performance improvement.
The problematic behavior can be reproduced with:

CONFIG_NUMA_BALANCING enabled
sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING

numa node0 (8GB local memory, 16 CPUs)
numa node1 (8GB slow tier memory, no CPUs)

Sequence:

1) echo 3 > /proc/sys/vm/drop_caches
2) To emulate a system whose local DRAM is entirely filled with cold
   memory, run the following dummy program and never touch the region
   (a standalone sketch follows the sequence):

   mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

3) Run any memory intensive workload, e.g. XSBench.
4) Check whether numa balancing is working, i.e. promotion/demotion.
5) Iterate 1) ~ 4) until kswapd stops.
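
For reference, the dummy program in step 2) can be a tiny standalone
binary like the following sketch (the 8GB size is assumed to match
node0's local memory in this setup):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t sz = 8UL * 1024 * 1024 * 1024;   /* 8GB, node0's local memory */
          void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          /* Keep the mapping alive but never touch it again. */
          pause();
          return 0;
  }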

With this, you can eventually see that promotion/demotion stop working
because kswapd has stopped due to ->kswapd_failures >=
MAX_RECLAIM_RETRIES.

Interesting differences in the vmstat deltas, between before and after
the patch, are:
-nr_inactive_anon 321935
-nr_active_anon 1780700
-nr_inactive_file 30425
-nr_active_file 14961
-pgpromote_success 356
-pgpromote_candidate 21953245
-pgactivate 1844523
-pgdeactivate 50634
-pgfault 31100294
-pgdemote_kswapd 30856
-pgscan_kswapd 1861981
-pgscan_anon 1822930
-pgscan_file 39051
-pgsteal_anon 386
-pgsteal_file 30470
-pageoutrun 30
-numa_hint_faults 27418279
-numa_pages_migrated 356
+nr_inactive_anon 1662306
+nr_active_anon 440303
+nr_inactive_file 27669
+nr_active_file 1654
+pgpromote_success 1314102
+pgpromote_candidate 1892525
+pgactivate 3284457
+pgdeactivate 1527504
+pgfault 6847775
+pgdemote_kswapd 2142047
+pgscan_kswapd 7496588
+pgscan_anon 7462488
+pgscan_file 34100
+pgsteal_anon 2115661
+pgsteal_file 26386
+pageoutrun 378
+numa_hint_faults 3220891
+numa_pages_migrated 1314102
where -: before this patch, +: after this patch
Signed-off-by: Byungchul Park <[email protected]>
---
mm/vmscan.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..6eda59fce5ee 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2266,9 +2266,17 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
* If we have plenty of inactive file pages that aren't
* thrashing, try to reclaim those first before touching
* anonymous pages.
+ *
+ * However, keeping 'sc->cache_trim_mode == 1' all through the
+ * scan priorities might lead to reclaim failure. If that repeats
+ * MAX_RECLAIM_RETRIES times, kswapd gets stopped even if there
+ * are still plenty of anon pages to reclaim, which is not
+ * desirable. So do not use cache_trim_mode when reclaim is not
+ * making progress, i.e. at high scan priority.
*/
file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
- if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+ if (sc->priority > 1 && file >> sc->priority &&
+ !(sc->may_deactivate & DEACTIVATE_FILE))
sc->cache_trim_mode = 1;
else
sc->cache_trim_mode = 0;
--
2.17.1
Byungchul Park <[email protected]> writes:
> [...]
> file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> + if (sc->priority > 1 && file >> sc->priority &&
> + !(sc->may_deactivate & DEACTIVATE_FILE))
> sc->cache_trim_mode = 1;
> else
> sc->cache_trim_mode = 0;
In get_scan_count(), there's the following code:
/*
* Do not apply any pressure balancing cleverness when the
* system is close to OOM, scan both anon and file equally
* (unless the swappiness setting disagrees with swapping).
*/
if (!sc->priority && swappiness) {
scan_balance = SCAN_EQUAL;
goto out;
}
So, is swappiness 0 on your system? Please check it. If it's not 0,
please check why this doesn't help.
--
Best Regards,
Huang, Ying
On Thu, Feb 22, 2024 at 04:37:16PM +0800, Huang, Ying wrote:
> [...]
>
> In get_scan_count(), there's the following code:
>
> /*
> * Do not apply any pressure balancing cleverness when the
> * system is close to OOM, scan both anon and file equally
> * (unless the swappiness setting disagrees with swapping).
> */
> if (!sc->priority && swappiness) {
> scan_balance = SCAN_EQUAL;
> goto out;
> }
>
> So, is swappiness 0 on your system? Please check it. If it's not 0,
> please check why this doesn't help.

Nice information! Then the change should be:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..91f9bab86e92 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2357,7 +2357,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
* system is close to OOM, scan both anon and file equally
* (unless the swappiness setting disagrees with swapping).
*/
- if (!sc->priority && swappiness) {
+ if (sc->priority <= 1 && swappiness) {
scan_balance = SCAN_EQUAL;
goto out;
}
Worth noting that the priority goes from DEF_PRIORITY down to 1 in
balance_pgdat() of kswapd. I will switch the fix to this if it looks
more reasonable.
Byungchul
On Thu, Feb 22, 2024 at 06:20:42PM +0900, Byungchul Park wrote:
> On Thu, Feb 22, 2024 at 04:37:16PM +0800, Huang, Ying wrote:
> > [...]
>
> Nice information! Then the change should be:
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bba207f41b14..91f9bab86e92 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2357,7 +2357,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> * system is close to OOM, scan both anon and file equally
> * (unless the swappiness setting disagrees with swapping).
> */
> - if (!sc->priority && swappiness) {
> + if (sc->priority <= 1 && swappiness) {
> scan_balance = SCAN_EQUAL;
> goto out;
> }
Or:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..c54371a398b1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6896,7 +6896,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
if (raise_priority || !nr_reclaimed)
sc.priority--;
- } while (sc.priority >= 1);
+ } while (sc.priority >= 0);
if (!sc.nr_reclaimed)
pgdat->kswapd_failures++;
---
Byungchul
On Thu, Feb 22, 2024 at 06:49:00PM +0900, Byungchul Park wrote:
> [...]
>
> Or:
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bba207f41b14..c54371a398b1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6896,7 +6896,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>
> if (raise_priority || !nr_reclaimed)
> sc.priority--;
> - } while (sc.priority >= 1);
> + } while (sc.priority >= 0);
>
> if (!sc.nr_reclaimed)
> pgdat->kswapd_failures++;
+cc Mel Gorman
I just found this was intended. See commit 9aa41348a8d11 ("mm: vmscan:
do not allow kswapd to scan at maximum priority"). Mel Gorman didn't
want to make kswapd too aggressive. However, does it make sense to stop
kswapd even if there are plenty of cold anon pages to reclaim, and make
the system wait for direct reclaim?
Thoughts?
Byungchul
Byungchul Park <[email protected]> writes:
>> [...]
>>
>> Or:
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index bba207f41b14..c54371a398b1 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -6896,7 +6896,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
>>
>> if (raise_priority || !nr_reclaimed)
>> sc.priority--;
>> - } while (sc.priority >= 1);
>> + } while (sc.priority >= 0);
>>
>> if (!sc.nr_reclaimed)
>> pgdat->kswapd_failures++;
>
> +cc Mel Gorman
>
> I just found this was intended. See commit 9aa41348a8d11 ("mm: vmscan:
> do not allow kswapd to scan at maximum priority"). Mel Gorman didn't
> want to make kswapd too aggressive. However, does it make sense to stop
> kswapd even if there are plenty of cold anon pages to reclaim, and make
> the system wait for direct reclaim?

Maybe we can play with cache_trim_mode. For example, if sc.nr_reclaimed
== 0 and sc.cache_trim_mode == true, force disabling cache_trim_mode in
the next round?
--
Best Regards,
Huang, Ying
On Fri, Feb 23, 2024 at 09:03:39AM +0800, Huang, Ying wrote:
> > [...]
> >
> > +cc Mel Gorman
> >
> > I just found this was intended. See commit 9aa41348a8d11 ("mm: vmscan:
> > do not allow kswapd to scan at maximum priority"). Mel Gorman didn't
> > want to make kswapd too aggressive. However, does it make sense to stop
> > kswapd even if there are plenty of cold anon pages to reclaim, and make
> > the system wait for direct reclaim?
>
> Maybe we can play with cache_trim_mode. For example, if sc.nr_reclaimed
> == 0 and sc.cache_trim_mode == true, force disabling cache_trim_mode in
> the next round?
Looks reasonable to me. I will try.
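
Just to make the direction concrete, a rough and untested sketch of
what it could look like (the 'no_cache_trim' bit below is made up for
illustration, it is not existing kernel code):

diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ ... @@ struct scan_control {
+	/* Sketch: set once cache_trim_mode reclaimed nothing in a round. */
+	unsigned int no_cache_trim:1;
@@ ... @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+	/* Don't re-enable cache_trim_mode once it has failed to reclaim. */
+	if (!sc->no_cache_trim && file >> sc->priority &&
+	    !(sc->may_deactivate & DEACTIVATE_FILE))
 		sc->cache_trim_mode = 1;
 	else
 		sc->cache_trim_mode = 0;
@@ ... @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
+		/*
+		 * No progress in a round with cache_trim_mode on: turn it
+		 * off for the following rounds so anon pages become
+		 * eligible for reclaim again.
+		 */
+		if (sc.cache_trim_mode && !nr_reclaimed)
+			sc.no_cache_trim = 1;
 		if (raise_priority || !nr_reclaimed)
 			sc.priority--;
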
Byungchul