According to the commit messages of "mm: vmscan: fix endless loop in kswapd balancing"
and "mm: vmscan: decide whether to compact the pgdat based on reclaim progress", a minor
change is needed in the following snippet:
/*
* If any zone is currently balanced then kswapd will
* not call compaction as it is expected that the
* necessary pages are already available.
*/
if (pgdat_needs_compaction &&
zone_watermark_ok(zone, order,
low_wmark_pages(zone),
*classzone_idx, 0))
pgdat_needs_compaction = false;
zone_watermark_ok() should be replaced with zone_balanced() in the above snippet, so that
the test used here is the same one kswapd uses elsewhere when deciding whether a zone is
balanced.
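For reference, zone_balanced() reads roughly as follows (a paraphrase, so the exact
form of the compaction check may differ from the current tree):

static bool zone_balanced(struct zone *zone, int order,
			  unsigned long balance_gap, int classzone_idx)
{
	/* Balanced means the high watermark plus the balance gap is met... */
	if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone) +
				    balance_gap, classzone_idx, 0))
		return false;

	/* ...and, for order > 0, compaction must not be ruled out entirely. */
	if (IS_ENABLED(CONFIG_COMPACTION) && order &&
	    !compaction_suitable(zone, order))
		return false;

	return true;
}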
Signed-off-by: Chen Yucong <[email protected]>
---
mm/vmscan.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a8ffe4e..e1004ad 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3157,9 +3157,8 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
* necessary pages are already available.
*/
if (pgdat_needs_compaction &&
- zone_watermark_ok(zone, order,
- low_wmark_pages(zone),
- *classzone_idx, 0))
+ zone_balanced(zone, order, 0,
+ *classzone_idx))
pgdat_needs_compaction = false;
}
--
1.7.10.4
On Sun, Jun 22, 2014 at 04:51:00PM +0800, Chen Yucong wrote:
> According to the commit messages of "mm: vmscan: fix endless loop in kswapd balancing"
> and "mm: vmscan: decide whether to compact the pgdat based on reclaim progress", a minor
> change is needed in the following snippet:
>
> /*
> * If any zone is currently balanced then kswapd will
> * not call compaction as it is expected that the
> * necessary pages are already available.
> */
> if (pgdat_needs_compaction &&
> zone_watermark_ok(zone, order,
> low_wmark_pages(zone),
> *classzone_idx, 0))
> pgdat_needs_compaction = false;
>
> zone_watermark_ok() should be replaced with zone_balanced() in the above snippet, so that
> the test used here is the same one kswapd uses elsewhere when deciding whether a zone is
> balanced.
>
What bug does this fix?
The intent here is to prevent kswapd compacting a node if an allocation
request within that node would succeed against the low watermark.
Your change alters that to check against the high watermark + balance gap
without explaining why kswapd should compact until the high watermark is
reached.
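
Roughly, the test changes from

	/* skip compaction if an allocation of this order would already
	 * succeed against the low watermark */
	zone_watermark_ok(zone, order, low_wmark_pages(zone),
			  *classzone_idx, 0)

to something like

	/* skip compaction only once the zone meets kswapd's balance
	 * target: the high watermark, plus a compaction_suitable()
	 * check for order > 0 */
	zone_balanced(zone, order, 0, *classzone_idx)

so the bar for skipping compaction goes up. That needs a justification,
not just a consistency argument.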
--
Mel Gorman
SUSE Labs