I've observed that fast isolation often isolates more free pages than
cc->nr_migratepages needs, and the excess free pages are then released
back to the buddy system, so the work spent isolating them is wasted.
Skip fast freepage isolation once enough free pages have been isolated
to save some CPU cycles.
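For anyone curious about the shape of the saving, here is a minimal
userspace sketch of the early-exit pattern (all names and numbers are
hypothetical stand-ins, assuming each per-order freelist scan isolates a
fixed number of pages; the real logic lives in fast_isolate_freepages()
on struct compact_control):

#include <stdio.h>

/* Hypothetical stand-ins for the struct compact_control counters. */
struct cc_counters {
	unsigned int nr_freepages;	/* free pages isolated as migration targets */
	unsigned int nr_migratepages;	/* pages queued for migration */
};

static unsigned int scan_one_freelist(void)
{
	return 4;	/* pretend each per-order scan isolates 4 pages */
}

int main(void)
{
	struct cc_counters cc = { 0, 9 };
	int order, scans = 0;

	/* Walk the orders highest first, as the fast path does. */
	for (order = 10; order >= 0; order--) {
		cc.nr_freepages += scan_one_freelist();
		scans++;

		/* The patch's early exit: stop once supply meets demand. */
		if (cc.nr_freepages >= cc.nr_migratepages)
			break;
	}

	printf("isolated %u freepages for %u migratepages in %d scans\n",
	       cc.nr_freepages, cc.nr_migratepages, scans);
	return 0;
}

With the break, the sketch stops after 3 scans (12 pages isolated for 9
migratepages); without it, all eleven orders would be scanned and the 35
excess pages handed back to the buddy allocator.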
Signed-off-by: Baolin Wang <[email protected]>
---
mm/compaction.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index eccec84dae82..3ade4c095ed2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1550,6 +1550,10 @@ static void fast_isolate_freepages(struct compact_control *cc)
spin_unlock_irqrestore(&cc->zone->lock, flags);
+ /* Skip fast search if enough freepages isolated */
+ if (cc->nr_freepages >= cc->nr_migratepages)
+ break;
+
/*
* Smaller scan on next order so the total scan is related
* to freelist_scan_limit.
--
2.27.0
On 5/25/23 14:54, Baolin Wang wrote:
> I've observed that fast isolation often isolates more free pages than
> cc->nr_migratepages needs, and the excess free pages are then released
> back to the buddy system, so the work spent isolating them is wasted.
> Skip fast freepage isolation once enough free pages have been isolated
> to save some CPU cycles.
>
> Signed-off-by: Baolin Wang <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>