2022-03-29 01:19:06

by Wei Yang

Subject: [Patch v2 1/2] mm/vmscan: reclaim only affects managed_zones

As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
reclaim from zones with pages managed by the buddy allocator"), reclaim
only affects managed_zones.

Let's adjust the code and comment accordingly.

Signed-off-by: Wei Yang <[email protected]>
Reviewed-by: Miaohe Lin <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/vmscan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
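
For reference, the distinction this patch relies on, lightly simplified
from include/linux/mmzone.h (kernel-doc and unrelated helpers omitted,
so treat it as an illustrative sketch rather than the verbatim source):

	static inline unsigned long zone_managed_pages(struct zone *zone)
	{
		return (unsigned long)atomic_long_read(&zone->managed_pages);
	}

	/* A zone is "populated" if it has any present pages at all... */
	static inline bool populated_zone(struct zone *zone)
	{
		return zone->present_pages;
	}

	/*
	 * ...but it is "managed" only if some of those pages are handed
	 * to the buddy allocator.  A populated zone whose memory is
	 * entirely reserved has nothing for reclaim to work on.
	 */
	static inline bool managed_zone(struct zone *zone)
	{
		return zone_managed_pages(zone);
	}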

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1f2d79e8c43c..4385b59ef599 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1040,7 +1040,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
for (i = 0; i < MAX_NR_ZONES; i++) {
struct zone *zone = pgdat->node_zones + i;

- if (!populated_zone(zone))
+ if (!managed_zone(zone))
continue;

reclaimable += zone_reclaimable_pages(zone);
@@ -3909,7 +3909,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
}

/*
- * If a node has no populated zone within highest_zoneidx, it does not
+ * If a node has no managed zone within highest_zoneidx, it does not
* need balancing by definition. This can happen if a zone-restricted
* allocation tries to wake a remote kswapd.
*/
--
2.33.1


2022-03-29 01:19:08

by Wei Yang

Subject: [Patch v2 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone

wakeup_kswapd() only wakes up kswapd when the zone is managed.

Two callers of wakeup_kswapd() work from the node perspective:

* wake_all_kswapds
* numamigrate_isolate_page

If we pick a !managed zone, the wakeup is silently skipped, which is not
what we expect.

This patch makes sure we pick a managed zone for wakeup_kswapd(). It also
uses managed_zone() in migrate_balanced_pgdat() to get the proper zone.

Signed-off-by: Wei Yang <[email protected]>
Cc: Miaohe Lin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "Huang, Ying" <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Oscar Salvador <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

---
v2: adjust the usage in migrate_balanced_pgdat()

---
mm/migrate.c | 6 +++---
mm/page_alloc.c | 2 ++
2 files changed, 5 insertions(+), 3 deletions(-)
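
For readers, the caller-side pattern this patch enforces in
numamigrate_isolate_page() looks roughly like the sketch below.  The
helper name wake_kswapd_on_node() and the explicit "z < 0" guard are
illustrative additions for this note, not part of the patch itself:

	/*
	 * Scan the node's zones from highest to lowest and only hand a
	 * zone with buddy-managed pages to wakeup_kswapd(), which bails
	 * out early on a !managed zone anyway.
	 */
	static void wake_kswapd_on_node(pg_data_t *pgdat, unsigned int order)
	{
		int z;

		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
			if (managed_zone(pgdat->node_zones + z))
				break;
		}
		if (z < 0)
			return;	/* no managed zone on this node */

		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
	}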

diff --git a/mm/migrate.c b/mm/migrate.c
index 3d60823afd2d..5adc55b5347c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1971,7 +1971,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
#ifdef CONFIG_NUMA_BALANCING
/*
* Returns true if this is a safe migration target node for misplaced NUMA
- * pages. Currently it only checks the watermarks which crude
+ * pages. Currently it only checks the watermarks which is crude.
*/
static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
unsigned long nr_migrate_pages)
@@ -1981,7 +1981,7 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
for (z = pgdat->nr_zones - 1; z >= 0; z--) {
struct zone *zone = pgdat->node_zones + z;

- if (!populated_zone(zone))
+ if (!managed_zone(zone))
continue;

/* Avoid waking kswapd by allocating pages_to_migrate pages. */
@@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
return 0;
for (z = pgdat->nr_zones - 1; z >= 0; z--) {
- if (populated_zone(pgdat->node_zones + z))
+ if (managed_zone(pgdat->node_zones + z))
break;
}
wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c0c4ef94ba0..6656c2d06e01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,

for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
ac->nodemask) {
+ if (!managed_zone(zone))
+ continue;
if (last_pgdat != zone->zone_pgdat)
wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
last_pgdat = zone->zone_pgdat;
--
2.33.1

2022-03-29 01:43:55

by Huang, Ying

Subject: Re: [Patch v2 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone

Wei Yang <[email protected]> writes:

> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> Two callers of wakeup_kswapd() work from the node perspective:
>
> * wake_all_kswapds
> * numamigrate_isolate_page
>
> If we pick a !managed zone, the wakeup is silently skipped, which is not
> what we expect.
>
> This patch makes sure we pick a managed zone for wakeup_kswapd(). It also
> uses managed_zone() in migrate_balanced_pgdat() to get the proper zone.
>
> Signed-off-by: Wei Yang <[email protected]>
> Cc: Miaohe Lin <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: "Huang, Ying" <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>

LGTM, Thanks!

Reviewed-by: "Huang, Ying" <[email protected]>

>
> ---
> v2: adjust the usage in migrate_balanced_pgdat()
>
> ---
> mm/migrate.c | 6 +++---
> mm/page_alloc.c | 2 ++
> 2 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 3d60823afd2d..5adc55b5347c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1971,7 +1971,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
> #ifdef CONFIG_NUMA_BALANCING
> /*
> * Returns true if this is a safe migration target node for misplaced NUMA
> - * pages. Currently it only checks the watermarks which crude
> + * pages. Currently it only checks the watermarks which is crude.
> */
> static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
> unsigned long nr_migrate_pages)
> @@ -1981,7 +1981,7 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
> for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> struct zone *zone = pgdat->node_zones + z;
>
> - if (!populated_zone(zone))
> + if (!managed_zone(zone))
> continue;
>
> /* Avoid waking kswapd by allocating pages_to_migrate pages. */
> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> return 0;
> for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> - if (populated_zone(pgdat->node_zones + z))
> + if (managed_zone(pgdat->node_zones + z))
> break;
> }
> wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4c0c4ef94ba0..6656c2d06e01 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>
> for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
> ac->nodemask) {
> + if (!managed_zone(zone))
> + continue;
> if (last_pgdat != zone->zone_pgdat)
> wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
> last_pgdat = zone->zone_pgdat;

2022-03-30 12:35:29

by David Hildenbrand

Subject: Re: [Patch v2 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone

On 29.03.22 03:09, Wei Yang wrote:
> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> Two callers of wakeup_kswapd() work from the node perspective:
>
> * wake_all_kswapds
> * numamigrate_isolate_page
>
> If we pick a !managed zone, the wakeup is silently skipped, which is not
> what we expect.
>
> This patch makes sure we pick a managed zone for wakeup_kswapd(). It also
> uses managed_zone() in migrate_balanced_pgdat() to get the proper zone.
>
> Signed-off-by: Wei Yang <[email protected]>
> Cc: Miaohe Lin <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: "Huang, Ying" <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>

^ I'm not so sure about that SOB; actually, Andrew should be the one to
add it. But maybe there is a good reason for it that I'm not aware of.


Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb

2022-03-31 02:49:00

by Wei Yang

Subject: Re: [Patch v2 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone

On Wed, Mar 30, 2022 at 09:39:42AM +0200, David Hildenbrand wrote:
>On 29.03.22 03:09, Wei Yang wrote:
>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>
>> Two callers of wakeup_kswapd() work from the node perspective:
>>
>> * wake_all_kswapds
>> * numamigrate_isolate_page
>>
>> If we pick a !managed zone, the wakeup is silently skipped, which is not
>> what we expect.
>>
>> This patch makes sure we pick a managed zone for wakeup_kswapd(). It also
>> uses managed_zone() in migrate_balanced_pgdat() to get the proper zone.
>>
>> Signed-off-by: Wei Yang <[email protected]>
>> Cc: Miaohe Lin <[email protected]>
>> Cc: David Hildenbrand <[email protected]>
>> Cc: "Huang, Ying" <[email protected]>
>> Cc: Mel Gorman <[email protected]>
>> Cc: Oscar Salvador <[email protected]>
>> Signed-off-by: Andrew Morton <[email protected]>
>
>^ I'm not so sure about that SOB, actually Andrew should add that. But
>maybe there is good reason for it that I'm not aware of.
>

I see Andrew has added this for v1.

Maybe I should remove it, since v2 has some minor adjustments compared to v1. :-)

>
>Reviewed-by: David Hildenbrand <[email protected]>
>
>--
>Thanks,
>
>David / dhildenb

--
Wei Yang
Help you, Help me