2022-02-15 15:33:56

by Mel Gorman

Subject: [PATCH 0/5] Follow-up on high-order PCP caching

Commit 44042b449872 ("mm/page_alloc: allow high-order pages to be
stored on the per-cpu lists") was primarily aimed at reducing the cost
of SLUB cache refills of high-order pages in two ways. Firstly, zone
lock acquisitions were reduced and secondly, there were fewer buddy list
modifications. This is a follow-up series fixing some issues that became
apparent after merging.

Patch 1 is a functional fix. The bug it corrects is harmless but inefficient.

Patches 2-4 reduce the overhead of bulk freeing of PCP pages. While
the overhead is small, it's cumulative and noticeable when truncating
large files. The changelog for patch 4 includes results of a microbenchmark
that deletes large sparse files with data in page cache. Sparse files
were used to eliminate filesystem overhead.

Patch 5 addresses issues with high-order PCP pages being stored on PCP
lists for too long. Pages freed on one CPU may not be reused quickly
and in some cases this can increase cache miss rates. Details are
included in the changelog.

mm/page_alloc.c | 128 ++++++++++++++++++++++++------------------------
1 file changed, 64 insertions(+), 64 deletions(-)

--
2.31.1


2022-02-15 15:37:24

by Mel Gorman

Subject: [PATCH 3/5] mm/page_alloc: Simplify how many pages are selected per pcp list during bulk free

free_pcppages_bulk() selects pages to free by round-robining between
lists. Originally this was to evenly shrink pages by migratetype
but uneven freeing is inevitable due to high-order pages. Simplify list
selection by starting with a list that definitely has pages on it in
free_unref_page_commit(); for drains, it does not matter where freeing
starts as all pages are removed.
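
To make the starting point concrete, a rough sketch (not part of the patch
itself) of how the callers end up using the new pindex argument:

    /* free_unref_page_commit(): start on the list the page was just added to */
    free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp, pindex);

    /* drain paths: the starting list is irrelevant as everything is freed */
    free_pcppages_bulk(zone, pcp->count, pcp, 0);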

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 34 +++++++++++-----------------------
1 file changed, 11 insertions(+), 23 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5110fdeb115..5e8c7cbe7a41 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1447,13 +1447,11 @@ static inline void prefetch_buddy(struct page *page, unsigned int order)
* count is the number of pages to free.
*/
static void free_pcppages_bulk(struct zone *zone, int count,
- struct per_cpu_pages *pcp)
+ struct per_cpu_pages *pcp,
+ int pindex)
{
- int pindex = 0;
int min_pindex = 0;
int max_pindex = NR_PCP_LISTS - 1;
- int batch_free = 0;
- int nr_freed = 0;
unsigned int order;
int prefetch_nr = READ_ONCE(pcp->batch);
bool isolated_pageblocks;
@@ -1467,16 +1465,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
count = min(pcp->count, count);
while (count > 0) {
struct list_head *list;
+ int nr_pages;

- /*
- * Remove pages from lists in a round-robin fashion. A
- * batch_free count is maintained that is incremented when an
- * empty list is encountered. This is so more pages are freed
- * off fuller lists instead of spinning excessively around empty
- * lists
- */
+ /* Remove pages from lists in a round-robin fashion. */
do {
- batch_free++;
if (++pindex == NR_PCP_LISTS)
pindex = 0;
list = &pcp->lists[pindex];
@@ -1489,18 +1481,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
min_pindex++;
} while (1);

- /* This is the only non-empty list. Free them all. */
- if (batch_free >= max_pindex - min_pindex)
- batch_free = count;
-
order = pindex_to_order(pindex);
+ nr_pages = 1 << order;
BUILD_BUG_ON(MAX_ORDER >= (1<<NR_PCP_ORDER_WIDTH));
do {
page = list_last_entry(list, struct page, lru);
/* must delete to avoid corrupting pcp list */
list_del(&page->lru);
- nr_freed += 1 << order;
- count -= 1 << order;
+ count -= nr_pages;
+ pcp->count -= nr_pages;

if (bulkfree_pcp_prepare(page))
continue;
@@ -1524,9 +1513,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
prefetch_buddy(page, order);
prefetch_nr--;
}
- } while (count > 0 && --batch_free && !list_empty(list));
+ } while (count > 0 && !list_empty(list));
}
- pcp->count -= nr_freed;

/*
* local_lock_irq held so equivalent to spin_lock_irqsave for
@@ -3095,7 +3083,7 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
batch = READ_ONCE(pcp->batch);
to_drain = min(pcp->count, batch);
if (to_drain > 0)
- free_pcppages_bulk(zone, to_drain, pcp);
+ free_pcppages_bulk(zone, to_drain, pcp, 0);
local_unlock_irqrestore(&pagesets.lock, flags);
}
#endif
@@ -3116,7 +3104,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)

pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
if (pcp->count)
- free_pcppages_bulk(zone, pcp->count, pcp);
+ free_pcppages_bulk(zone, pcp->count, pcp, 0);

local_unlock_irqrestore(&pagesets.lock, flags);
}
@@ -3397,7 +3385,7 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
if (pcp->count >= high) {
int batch = READ_ONCE(pcp->batch);

- free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
+ free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp, pindex);
}
}

--
2.31.1

2022-02-15 15:54:21

by Mel Gorman

Subject: [PATCH 4/5] mm/page_alloc: Free pages in a single pass during bulk free

free_pcppages_bulk() has taken two passes through the pcp lists since
commit 0a5f4e5b4562 ("mm/free_pcppages_bulk: do not hold lock when picking
pages to free") due to deferring the cost of selecting PCP lists until
the zone lock is held. Now that list selection is simpler, the main
cost during selection is bulkfree_pcp_prepare() which in the normal case
is a simple check and prefetching. As the list manipulations have a
cost of their own, go back to freeing pages in a single pass.
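
As an outline only (the diff below is authoritative), the structure moves
from collecting pages onto a local list under the pcp lock and then walking
that list under the zone lock, to taking the zone lock up front and freeing
pages directly as they are selected:

    spin_lock(&zone->lock);
    isolated_pageblocks = has_isolate_pageblock(zone);
    while (count > 0) {
        /* pick the next non-empty pcp list as in the previous patch */
        /* then hand pages straight back to the buddy allocator */
        __free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
    }
    spin_unlock(&zone->lock);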

The series up to this point was evaluated using a trunc microbenchmark
that truncates sparse files stored in page cache (mmtests config
config-io-trunc). Sparse files were used to limit filesystem interaction.

The results versus a revert of storing high-order pages in the PCP lists is

1-socket Skylake
5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
vanilla mm-reverthighpcp-v1r1 mm-highpcpopt-v1
Min elapsed 540.00 ( 0.00%) 530.00 ( 1.85%) 530.00 ( 1.85%)
Amean elapsed 543.00 ( 0.00%) 530.00 * 2.39%* 530.00 * 2.39%*
Stddev elapsed 4.83 ( 0.00%) 0.00 ( 100.00%) 0.00 ( 100.00%)
CoeffVar elapsed 0.89 ( 0.00%) 0.00 ( 100.00%) 0.00 ( 100.00%)
Max elapsed 550.00 ( 0.00%) 530.00 ( 3.64%) 530.00 ( 3.64%)
BAmean-50 elapsed 540.00 ( 0.00%) 530.00 ( 1.85%) 530.00 ( 1.85%)
BAmean-95 elapsed 542.22 ( 0.00%) 530.00 ( 2.25%) 530.00 ( 2.25%)
BAmean-99 elapsed 542.22 ( 0.00%) 530.00 ( 2.25%) 530.00 ( 2.25%)

2-socket CascadeLake
5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
vanilla mm-reverthighpcp-v1 mm-highpcpopt-v1
Min elapsed 510.00 ( 0.00%) 500.00 ( 1.96%) 500.00 ( 1.96%)
Amean elapsed 529.00 ( 0.00%) 521.00 ( 1.51%) 516.00 * 2.46%*
Stddev elapsed 16.63 ( 0.00%) 12.87 ( 22.64%) 9.66 ( 41.92%)
CoeffVar elapsed 3.14 ( 0.00%) 2.47 ( 21.46%) 1.87 ( 40.45%)
Max elapsed 550.00 ( 0.00%) 540.00 ( 1.82%) 530.00 ( 3.64%)
BAmean-50 elapsed 516.00 ( 0.00%) 512.00 ( 0.78%) 510.00 ( 1.16%)
BAmean-95 elapsed 526.67 ( 0.00%) 518.89 ( 1.48%) 514.44 ( 2.32%)
BAmean-99 elapsed 526.67 ( 0.00%) 518.89 ( 1.48%) 514.44 ( 2.32%)

The original motivation for the two-pass approach was will-it-scale
page_fault1 using $nr_cpu processes.

2-socket CascadeLake (40 cores, 80 CPUs HT enabled)
5.17.0-rc3 5.17.0-rc3
vanilla mm-highpcpopt-v1r4
Hmean page_fault1-processes-2 2694662.26 ( 0.00%) 2696801.07 ( 0.08%)
Hmean page_fault1-processes-5 6425819.34 ( 0.00%) 6426573.21 ( 0.01%)
Hmean page_fault1-processes-8 9642169.10 ( 0.00%) 9647444.94 ( 0.05%)
Hmean page_fault1-processes-12 12167502.10 ( 0.00%) 12073323.10 * -0.77%*
Hmean page_fault1-processes-21 15636859.03 ( 0.00%) 15587449.50 * -0.32%*
Hmean page_fault1-processes-30 25157348.61 ( 0.00%) 25111707.15 * -0.18%*
Hmean page_fault1-processes-48 27694013.85 ( 0.00%) 27728568.63 ( 0.12%)
Hmean page_fault1-processes-79 25928742.64 ( 0.00%) 25920933.41 ( -0.03%) <---
Hmean page_fault1-processes-110 25730869.75 ( 0.00%) 25695727.57 * -0.14%*
Hmean page_fault1-processes-141 25626992.42 ( 0.00%) 25675346.68 * 0.19%*
Hmean page_fault1-processes-172 25611651.35 ( 0.00%) 25650940.14 * 0.15%*
Hmean page_fault1-processes-203 25577298.75 ( 0.00%) 25584848.65 ( 0.03%)
Hmean page_fault1-processes-234 25580686.07 ( 0.00%) 25601794.52 * 0.08%*
Hmean page_fault1-processes-265 25570215.47 ( 0.00%) 25553191.25 ( -0.07%)
Hmean page_fault1-processes-296 25549488.62 ( 0.00%) 25530311.58 ( -0.08%)
Hmean page_fault1-processes-320 25555149.05 ( 0.00%) 25585059.83 ( 0.12%)

The differences are mostly within the noise and the difference close to
$nr_cpus is negligible.

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 57 +++++++++++++++++++------------------------------
1 file changed, 22 insertions(+), 35 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5e8c7cbe7a41..6881175b27df 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1455,14 +1455,21 @@ static void free_pcppages_bulk(struct zone *zone, int count,
unsigned int order;
int prefetch_nr = READ_ONCE(pcp->batch);
bool isolated_pageblocks;
- struct page *page, *tmp;
- LIST_HEAD(head);
+ struct page *page;

/*
* Ensure proper count is passed which otherwise would stuck in the
* below while (list_empty(list)) loop.
*/
count = min(pcp->count, count);
+
+ /*
+ * local_lock_irq held so equivalent to spin_lock_irqsave for
+ * both PREEMPT_RT and non-PREEMPT_RT configurations.
+ */
+ spin_lock(&zone->lock);
+ isolated_pageblocks = has_isolate_pageblock(zone);
+
while (count > 0) {
struct list_head *list;
int nr_pages;
@@ -1485,7 +1492,11 @@ static void free_pcppages_bulk(struct zone *zone, int count,
nr_pages = 1 << order;
BUILD_BUG_ON(MAX_ORDER >= (1<<NR_PCP_ORDER_WIDTH));
do {
+ int mt;
+
page = list_last_entry(list, struct page, lru);
+ mt = get_pcppage_migratetype(page);
+
/* must delete to avoid corrupting pcp list */
list_del(&page->lru);
count -= nr_pages;
@@ -1494,12 +1505,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
if (bulkfree_pcp_prepare(page))
continue;

- /* Encode order with the migratetype */
- page->index <<= NR_PCP_ORDER_WIDTH;
- page->index |= order;
-
- list_add_tail(&page->lru, &head);
-
/*
* We are going to put the page back to the global
* pool, prefetch its buddy to speed up later access
@@ -1513,36 +1518,18 @@ static void free_pcppages_bulk(struct zone *zone, int count,
prefetch_buddy(page, order);
prefetch_nr--;
}
- } while (count > 0 && !list_empty(list));
- }
-
- /*
- * local_lock_irq held so equivalent to spin_lock_irqsave for
- * both PREEMPT_RT and non-PREEMPT_RT configurations.
- */
- spin_lock(&zone->lock);
- isolated_pageblocks = has_isolate_pageblock(zone);
-
- /*
- * Use safe version since after __free_one_page(),
- * page->lru.next will not point to original list.
- */
- list_for_each_entry_safe(page, tmp, &head, lru) {
- int mt = get_pcppage_migratetype(page);

- /* mt has been encoded with the order (see above) */
- order = mt & NR_PCP_ORDER_MASK;
- mt >>= NR_PCP_ORDER_WIDTH;
+ /* MIGRATE_ISOLATE page should not go to pcplists */
+ VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
+ /* Pageblock could have been isolated meanwhile */
+ if (unlikely(isolated_pageblocks))
+ mt = get_pageblock_migratetype(page);

- /* MIGRATE_ISOLATE page should not go to pcplists */
- VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
- /* Pageblock could have been isolated meanwhile */
- if (unlikely(isolated_pageblocks))
- mt = get_pageblock_migratetype(page);
-
- __free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
- trace_mm_page_pcpu_drain(page, order, mt);
+ __free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
+ trace_mm_page_pcpu_drain(page, order, mt);
+ } while (count > 0 && !list_empty(list));
}
+
spin_unlock(&zone->lock);
}

--
2.31.1

2022-02-15 17:37:33

by Mel Gorman

Subject: [PATCH 5/5] mm/page_alloc: Limit number of high-order pages on PCP during bulk free

When a PCP is mostly used for frees, high-order pages can sit on PCP
lists for some time. This is problematic when the allocation pattern is all
allocations from one CPU and all frees from another, resulting in colder
pages being used. When bulk freeing pages, limit the number of high-order
pages that are stored on the PCP lists.
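
In outline (a sketch for clarity; the diff below is the actual change),
free_unref_page_commit() detects when a PCP is predominantly freeing and
flushes the whole PCP instead of trimming it back to the high watermark:

    /* mostly freeing (non-zero free_factor) and a high order that is not THP */
    free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);

    high = nr_pcp_high(pcp, zone, free_high);    /* returns 0 when free_high */
    if (pcp->count >= high) {
        int batch = READ_ONCE(pcp->batch);

        /* nr_pcp_free() returns pcp->count when free_high, freeing everything */
        free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high), pcp, pindex);
    }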

Netperf running on localhost exhibits this pattern and while it does
not matter for some machines, it does matter for others with smaller
caches where cache misses cause problems due to reduced page reuse.
Pages freed directly to the buddy list may be reused quickly while still
cache hot, whereas pages stored on the PCP lists may be cache cold by the
time free_pcppages_bulk() is called.

Using perf kmem:mm_page_alloc, the 5 most used page frames were

5.17-rc3
13041 pfn=0x111a30
13081 pfn=0x5814d0
13097 pfn=0x108258
13121 pfn=0x689598
13128 pfn=0x5814d8

5.17-revert-highpcp
192009 pfn=0x54c140
195426 pfn=0x1081d0
200908 pfn=0x61c808
243515 pfn=0xa9dc20
402523 pfn=0x222bb8

5.17-full-series
142693 pfn=0x346208
162227 pfn=0x13bf08
166413 pfn=0x2711e0
166950 pfn=0x2702f8

The spread is wider with the series applied as there is still a delay
before pages freed to one PCP are released, trading fast reuse against
reduced zone lock acquisition.

From the machine used to gather the traces, the headline performance
was equivalent.

netperf-tcp
5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
vanilla mm-reverthighpcp-v1r1 mm-highpcplimit-v1r12
Hmean 64 839.93 ( 0.00%) 840.77 ( 0.10%) 835.34 * -0.55%*
Hmean 128 1614.22 ( 0.00%) 1622.07 * 0.49%* 1604.18 * -0.62%*
Hmean 256 2952.00 ( 0.00%) 2953.19 ( 0.04%) 2959.46 ( 0.25%)
Hmean 1024 10291.67 ( 0.00%) 10239.17 ( -0.51%) 10287.05 ( -0.04%)
Hmean 2048 17335.08 ( 0.00%) 17399.97 ( 0.37%) 17125.73 * -1.21%*
Hmean 3312 22628.15 ( 0.00%) 22471.97 ( -0.69%) 22414.24 * -0.95%*
Hmean 4096 25009.50 ( 0.00%) 24752.83 * -1.03%* 24620.03 * -1.56%*
Hmean 8192 32745.01 ( 0.00%) 31682.63 * -3.24%* 32475.31 ( -0.82%)
Hmean 16384 39759.59 ( 0.00%) 36805.78 * -7.43%* 39291.42 ( -1.18%)

From a 1-socket skylake machine with a small CPU cache that suffers
more if cache misses are too high

netperf-tcp
5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
vanilla mm-reverthighpcp-v1 mm-highpcplimit-v1
Min 64 935.38 ( 0.00%) 939.40 ( 0.43%) 940.11 ( 0.51%)
Min 128 1831.69 ( 0.00%) 1856.15 ( 1.34%) 1849.30 ( 0.96%)
Min 256 3560.61 ( 0.00%) 3659.25 ( 2.77%) 3654.12 ( 2.63%)
Min 1024 13165.24 ( 0.00%) 13444.74 ( 2.12%) 13281.71 ( 0.88%)
Min 2048 22706.44 ( 0.00%) 23219.67 ( 2.26%) 23027.31 ( 1.41%)
Min 3312 30960.26 ( 0.00%) 31985.01 ( 3.31%) 31484.40 ( 1.69%)
Min 4096 35149.03 ( 0.00%) 35997.44 ( 2.41%) 35891.92 ( 2.11%)
Min 8192 48064.73 ( 0.00%) 49574.05 ( 3.14%) 48928.89 ( 1.80%)
Min 16384 58017.25 ( 0.00%) 60352.93 ( 4.03%) 60691.14 ( 4.61%)
Hmean 64 938.95 ( 0.00%) 941.50 * 0.27%* 940.47 ( 0.16%)
Hmean 128 1843.10 ( 0.00%) 1857.58 * 0.79%* 1855.83 * 0.69%*
Hmean 256 3573.07 ( 0.00%) 3667.45 * 2.64%* 3662.08 * 2.49%*
Hmean 1024 13206.52 ( 0.00%) 13487.80 * 2.13%* 13351.11 * 1.09%*
Hmean 2048 22870.23 ( 0.00%) 23337.96 * 2.05%* 23149.68 * 1.22%*
Hmean 3312 31001.99 ( 0.00%) 32206.50 * 3.89%* 31849.40 * 2.73%*
Hmean 4096 35364.59 ( 0.00%) 36490.96 * 3.19%* 36112.91 * 2.12%*
Hmean 8192 48497.71 ( 0.00%) 49954.05 * 3.00%* 49384.50 * 1.83%*
Hmean 16384 58410.86 ( 0.00%) 60839.80 * 4.16%* 61362.12 * 5.05%*

Note that this was a machine that did not benefit from caching high-order
pages and performance is almost restored with the series applied. It's not
fully restored as cache misses are still higher. This is a trade-off
between optimising for a workload that does all allocs on one CPU and frees
on another, and more general workloads that need high-order pages for SLUB
and benefit from avoiding zone->lock for every SLUB refill/drain.

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6881175b27df..cfb3cbad152c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3314,10 +3314,15 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
return true;
}

-static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
+static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
+ bool free_high)
{
int min_nr_free, max_nr_free;

+ /* Free everything if batch freeing high-order pages. */
+ if (unlikely(free_high))
+ return pcp->count;
+
/* Check for PCP disabled or boot pageset */
if (unlikely(high < batch))
return 1;
@@ -3338,11 +3343,12 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
return batch;
}

-static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
+static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
+ bool free_high)
{
int high = READ_ONCE(pcp->high);

- if (unlikely(!high))
+ if (unlikely(!high || free_high))
return 0;

if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
@@ -3362,17 +3368,27 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
struct per_cpu_pages *pcp;
int high;
int pindex;
+ bool free_high;

__count_vm_event(PGFREE);
pcp = this_cpu_ptr(zone->per_cpu_pageset);
pindex = order_to_pindex(migratetype, order);
list_add(&page->lru, &pcp->lists[pindex]);
pcp->count += 1 << order;
- high = nr_pcp_high(pcp, zone);
+
+ /*
+ * As high-order pages other than THP's stored on PCP can contribute
+ * to fragmentation, limit the number stored when PCP is heavily
+ * freeing without allocation. The remainder after bulk freeing
+ * stops will be drained from vmstat refresh context.
+ */
+ free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);
+
+ high = nr_pcp_high(pcp, zone, free_high);
if (pcp->count >= high) {
int batch = READ_ONCE(pcp->batch);

- free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp, pindex);
+ free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high), pcp, pindex);
}
}

--
2.31.1

2022-02-15 18:33:45

by Mel Gorman

Subject: [PATCH 1/5] mm/page_alloc: Fetch the correct pcp buddy during bulk free

free_pcppages_bulk() prefetches buddies about to be freed but the
order must also be passed in as PCP lists store multiple orders.
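
For context, the buddy pfn depends on the order: at the time of this series,
__find_buddy_pfn() simply flips the pfn bit corresponding to the order, so
a hard-coded order of 0 prefetches the wrong buddy for any high-order page.
A sketch of the helper (see mm/internal.h):

    static inline unsigned long __find_buddy_pfn(unsigned long page_pfn,
                                                 unsigned int order)
    {
        return page_pfn ^ (1 << order);
    }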

Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589febc6d31..08de32cfd9bb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1432,10 +1432,10 @@ static bool bulkfree_pcp_prepare(struct page *page)
}
#endif /* CONFIG_DEBUG_VM */

-static inline void prefetch_buddy(struct page *page)
+static inline void prefetch_buddy(struct page *page, unsigned int order)
{
unsigned long pfn = page_to_pfn(page);
- unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
+ unsigned long buddy_pfn = __find_buddy_pfn(pfn, order);
struct page *buddy = page + (buddy_pfn - pfn);

prefetch(buddy);
@@ -1512,7 +1512,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
* prefetch buddy for the first pcp->batch nr of pages.
*/
if (prefetch_nr) {
- prefetch_buddy(page);
+ prefetch_buddy(page, order);
prefetch_nr--;
}
} while (count > 0 && --batch_free && !list_empty(list));
--
2.31.1

2022-02-15 21:50:25

by Mel Gorman

Subject: [PATCH 2/5] mm/page_alloc: Track range of active PCP lists during bulk free

free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
this was dealing only with migratetypes but storing high-order pages
means that there can be many more empty lists that are uselessly
checked. Track the minimum and maximum active pindex to reduce the
search space.
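
As a sketch of the idea (the diff below is the actual change), an empty list
found at either end of the active range shrinks [min_pindex, max_pindex] so
it is not scanned again on later iterations:

    /* advance round-robin, narrowing the range past empty boundary lists */
    do {
        if (++pindex == NR_PCP_LISTS)
            pindex = 0;
        list = &pcp->lists[pindex];
        if (!list_empty(list))
            break;

        if (pindex == max_pindex)
            max_pindex--;
        if (pindex == min_pindex)
            min_pindex++;
    } while (1);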

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 08de32cfd9bb..c5110fdeb115 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
struct per_cpu_pages *pcp)
{
int pindex = 0;
+ int min_pindex = 0;
+ int max_pindex = NR_PCP_LISTS - 1;
int batch_free = 0;
int nr_freed = 0;
unsigned int order;
@@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
if (++pindex == NR_PCP_LISTS)
pindex = 0;
list = &pcp->lists[pindex];
- } while (list_empty(list));
+ if (!list_empty(list))
+ break;
+
+ if (pindex == max_pindex)
+ max_pindex--;
+ if (pindex == min_pindex)
+ min_pindex++;
+ } while (1);

/* This is the only non-empty list. Free them all. */
- if (batch_free == NR_PCP_LISTS)
+ if (batch_free >= max_pindex - min_pindex)
batch_free = count;

order = pindex_to_order(pindex);
--
2.31.1

2022-02-16 11:33:07

by Vlastimil Babka

Subject: Re: [PATCH 1/5] mm/page_alloc: Fetch the correct pcp buddy during bulk free

On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() prefetches buddies about to be freed but the
> order must also be passed in as PCP lists store multiple orders.
>
> Fixes: 44042b449872 ("mm/page_alloc: allow high-order pages to be stored on the per-cpu lists")
> Signed-off-by: Mel Gorman <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> mm/page_alloc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..08de32cfd9bb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1432,10 +1432,10 @@ static bool bulkfree_pcp_prepare(struct page *page)
> }
> #endif /* CONFIG_DEBUG_VM */
>
> -static inline void prefetch_buddy(struct page *page)
> +static inline void prefetch_buddy(struct page *page, unsigned int order)
> {
> unsigned long pfn = page_to_pfn(page);
> - unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
> + unsigned long buddy_pfn = __find_buddy_pfn(pfn, order);
> struct page *buddy = page + (buddy_pfn - pfn);
>
> prefetch(buddy);
> @@ -1512,7 +1512,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> * prefetch buddy for the first pcp->batch nr of pages.
> */
> if (prefetch_nr) {
> - prefetch_buddy(page);
> + prefetch_buddy(page, order);
> prefetch_nr--;
> }
> } while (count > 0 && --batch_free && !list_empty(list));

2022-02-16 12:23:09

by Vlastimil Babka

Subject: Re: [PATCH 2/5] mm/page_alloc: Track range of active PCP lists during bulk free

On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
> this was dealing only with migratetypes but storing high-order pages
> means that there can be many more empty lists that are uselessly
> checked. Track the minimum and maximum active pindex to reduce the
> search space.
>
> Signed-off-by: Mel Gorman <[email protected]>
> ---
> mm/page_alloc.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 08de32cfd9bb..c5110fdeb115 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> struct per_cpu_pages *pcp)
> {
> int pindex = 0;
> + int min_pindex = 0;
> + int max_pindex = NR_PCP_LISTS - 1;
> int batch_free = 0;
> int nr_freed = 0;
> unsigned int order;
> @@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> if (++pindex == NR_PCP_LISTS)

Hmm, so in the very first iteration at this point pindex is already 1. This
looks odd even before the patch, as order 0 MIGRATE_UNMOVABLE list is only
processed after all the higher orders?

> pindex = 0;

Also shouldn't this wrap-around check also use min_pindex/max_pindex instead
of NR_PCP_LISTS and 0?

> list = &pcp->lists[pindex];
> - } while (list_empty(list));
> + if (!list_empty(list))
> + break;
> +
> + if (pindex == max_pindex)
> + max_pindex--;
> + if (pindex == min_pindex)

So with pindex 1 and min_pindex == 0 this will not trigger until
(eventually) the first pindex wrap around, which seems suboptimal. But I can
see the later patches change things substantially anyway so it may be moot...

> + min_pindex++;
> + } while (1);
>
> /* This is the only non-empty list. Free them all. */
> - if (batch_free == NR_PCP_LISTS)
> + if (batch_free >= max_pindex - min_pindex)
> batch_free = count;
>
> order = pindex_to_order(pindex);

2022-02-16 12:52:17

by Vlastimil Babka

Subject: Re: [PATCH 3/5] mm/page_alloc: Simplify how many pages are selected per pcp list during bulk free

On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() selects pages to free by round-robining between
> lists. Originally this was to evenly shrink pages by migratetype
> but uneven freeing is inevitable due to high-order pages. Simplify list
> selection by starting with a list that definitely has pages on it in
> free_unref_page_commit(); for drains, it does not matter where freeing
> starts as all pages are removed.
>
> Signed-off-by: Mel Gorman <[email protected]>

Now pindex is passed instead of initialized to 0, but still incremented
first before doing anything with it, which AFAICS is wrong. But that
predates this patch, which itself seems ok, so:

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> mm/page_alloc.c | 34 +++++++++++-----------------------
> 1 file changed, 11 insertions(+), 23 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c5110fdeb115..5e8c7cbe7a41 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1447,13 +1447,11 @@ static inline void prefetch_buddy(struct page *page, unsigned int order)
> * count is the number of pages to free.
> */
> static void free_pcppages_bulk(struct zone *zone, int count,
> - struct per_cpu_pages *pcp)
> + struct per_cpu_pages *pcp,
> + int pindex)
> {
> - int pindex = 0;
> int min_pindex = 0;
> int max_pindex = NR_PCP_LISTS - 1;
> - int batch_free = 0;
> - int nr_freed = 0;
> unsigned int order;
> int prefetch_nr = READ_ONCE(pcp->batch);
> bool isolated_pageblocks;
> @@ -1467,16 +1465,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> count = min(pcp->count, count);
> while (count > 0) {
> struct list_head *list;
> + int nr_pages;
>
> - /*
> - * Remove pages from lists in a round-robin fashion. A
> - * batch_free count is maintained that is incremented when an
> - * empty list is encountered. This is so more pages are freed
> - * off fuller lists instead of spinning excessively around empty
> - * lists
> - */
> + /* Remove pages from lists in a round-robin fashion. */
> do {
> - batch_free++;
> if (++pindex == NR_PCP_LISTS)
> pindex = 0;
> list = &pcp->lists[pindex];
> @@ -1489,18 +1481,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> min_pindex++;
> } while (1);
>
> - /* This is the only non-empty list. Free them all. */
> - if (batch_free >= max_pindex - min_pindex)
> - batch_free = count;
> -
> order = pindex_to_order(pindex);
> + nr_pages = 1 << order;
> BUILD_BUG_ON(MAX_ORDER >= (1<<NR_PCP_ORDER_WIDTH));
> do {
> page = list_last_entry(list, struct page, lru);
> /* must delete to avoid corrupting pcp list */
> list_del(&page->lru);
> - nr_freed += 1 << order;
> - count -= 1 << order;
> + count -= nr_pages;
> + pcp->count -= nr_pages;
>
> if (bulkfree_pcp_prepare(page))
> continue;
> @@ -1524,9 +1513,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> prefetch_buddy(page, order);
> prefetch_nr--;
> }
> - } while (count > 0 && --batch_free && !list_empty(list));
> + } while (count > 0 && !list_empty(list));
> }
> - pcp->count -= nr_freed;
>
> /*
> * local_lock_irq held so equivalent to spin_lock_irqsave for
> @@ -3095,7 +3083,7 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
> batch = READ_ONCE(pcp->batch);
> to_drain = min(pcp->count, batch);
> if (to_drain > 0)
> - free_pcppages_bulk(zone, to_drain, pcp);
> + free_pcppages_bulk(zone, to_drain, pcp, 0);
> local_unlock_irqrestore(&pagesets.lock, flags);
> }
> #endif
> @@ -3116,7 +3104,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>
> pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> if (pcp->count)
> - free_pcppages_bulk(zone, pcp->count, pcp);
> + free_pcppages_bulk(zone, pcp->count, pcp, 0);
>
> local_unlock_irqrestore(&pagesets.lock, flags);
> }
> @@ -3397,7 +3385,7 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
> if (pcp->count >= high) {
> int batch = READ_ONCE(pcp->batch);
>
> - free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp);
> + free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp, pindex);
> }
> }
>

2022-02-16 13:19:41

by Mel Gorman

Subject: Re: [PATCH 2/5] mm/page_alloc: Track range of active PCP lists during bulk free

On Wed, Feb 16, 2022 at 01:02:01PM +0100, Vlastimil Babka wrote:
> On 2/15/22 15:51, Mel Gorman wrote:
> > free_pcppages_bulk() frees pages in a round-robin fashion. Originally,
> > this was dealing only with migratetypes but storing high-order pages
> > means that there can be many more empty lists that are uselessly
> > checked. Track the minimum and maximum active pindex to reduce the
> > search space.
> >
> > Signed-off-by: Mel Gorman <[email protected]>
> > ---
> > mm/page_alloc.c | 13 +++++++++++--
> > 1 file changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 08de32cfd9bb..c5110fdeb115 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1450,6 +1450,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> > struct per_cpu_pages *pcp)
> > {
> > int pindex = 0;
> > + int min_pindex = 0;
> > + int max_pindex = NR_PCP_LISTS - 1;
> > int batch_free = 0;
> > int nr_freed = 0;
> > unsigned int order;
> > @@ -1478,10 +1480,17 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> > if (++pindex == NR_PCP_LISTS)
>
> Hmm, so in the very first iteration at this point pindex is already 1. This
> looks odd even before the patch, as order 0 MIGRATE_UNMOVABLE list is only
> processed after all the higher orders?
>

Yes, and this was the behaviour both before and after. I don't recall why. It
might have been to preserve UNMOVABLE pages but after the series is
finished, the reasoning is weak. I'll add a specific check.

> > pindex = 0;
>
> Also shouldn't this wrap-around check also use min_pindex/max_pindex instead
> of NR_PCP_LISTS and 0?
>

Yes, it should and it's a rebasing error from an earlier prototype that
I missed. I'll fix it.

> > list = &pcp->lists[pindex];
> > - } while (list_empty(list));
> > + if (!list_empty(list))
> > + break;
> > +
> > + if (pindex == max_pindex)
> > + max_pindex--;
> > + if (pindex == min_pindex)
>
> So with pindex 1 and min_pindex == 0 this will not trigger until
> (eventually) the first pindex wrap around, which seems suboptimal. But I can
> see the later patches change things substantially anyway so it may be moot...
>

It could potentially be more optimal, but at the cost of complexity that
I wanted to avoid in this path as much as possible. Initialising
min_pindex to pindex could result in an infinite loop if the lower lists
need to be cleared.

--
Mel Gorman
SUSE Labs

2022-02-16 15:12:55

by Vlastimil Babka

Subject: Re: [PATCH 4/5] mm/page_alloc: Free pages in a single pass during bulk free

On 2/15/22 15:51, Mel Gorman wrote:
> free_pcppages_bulk() has taken two passes through the pcp lists since
> commit 0a5f4e5b4562 ("mm/free_pcppages_bulk: do not hold lock when picking
> pages to free") due to deferring the cost of selecting PCP lists until
> the zone lock is held. Now that list selection is simpler, the main
> cost during selection is bulkfree_pcp_prepare() which in the normal case
> is a simple check and prefetching. As the list manipulations have a
> cost of their own, go back to freeing pages in a single pass.
>
> The series up to this point was evaluated using a trunc microbenchmark
> that truncates sparse files stored in page cache (mmtests config
> config-io-trunc). Sparse files were used to limit filesystem interaction.
>
> The results versus a revert of storing high-order pages in the PCP lists is
>
> 1-socket Skylake
> 5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
> vanilla mm-reverthighpcp-v1r1 mm-highpcpopt-v1
> Min elapsed 540.00 ( 0.00%) 530.00 ( 1.85%) 530.00 ( 1.85%)
> Amean elapsed 543.00 ( 0.00%) 530.00 * 2.39%* 530.00 * 2.39%*
> Stddev elapsed 4.83 ( 0.00%) 0.00 ( 100.00%) 0.00 ( 100.00%)
> CoeffVar elapsed 0.89 ( 0.00%) 0.00 ( 100.00%) 0.00 ( 100.00%)
> Max elapsed 550.00 ( 0.00%) 530.00 ( 3.64%) 530.00 ( 3.64%)
> BAmean-50 elapsed 540.00 ( 0.00%) 530.00 ( 1.85%) 530.00 ( 1.85%)
> BAmean-95 elapsed 542.22 ( 0.00%) 530.00 ( 2.25%) 530.00 ( 2.25%)
> BAmean-99 elapsed 542.22 ( 0.00%) 530.00 ( 2.25%) 530.00 ( 2.25%)
>
> 2-socket CascadeLake
> 5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
> vanilla mm-reverthighpcp-v1 mm-highpcpopt-v1
> Min elapsed 510.00 ( 0.00%) 500.00 ( 1.96%) 500.00 ( 1.96%)
> Amean elapsed 529.00 ( 0.00%) 521.00 ( 1.51%) 516.00 * 2.46%*
> Stddev elapsed 16.63 ( 0.00%) 12.87 ( 22.64%) 9.66 ( 41.92%)
> CoeffVar elapsed 3.14 ( 0.00%) 2.47 ( 21.46%) 1.87 ( 40.45%)
> Max elapsed 550.00 ( 0.00%) 540.00 ( 1.82%) 530.00 ( 3.64%)
> BAmean-50 elapsed 516.00 ( 0.00%) 512.00 ( 0.78%) 510.00 ( 1.16%)
> BAmean-95 elapsed 526.67 ( 0.00%) 518.89 ( 1.48%) 514.44 ( 2.32%)
> BAmean-99 elapsed 526.67 ( 0.00%) 518.89 ( 1.48%) 514.44 ( 2.32%)
>
> The original motivation for the two-pass approach was will-it-scale
> page_fault1 using $nr_cpu processes.
>
> 2-socket CascadeLake (40 cores, 80 CPUs HT enabled)
> 5.17.0-rc3 5.17.0-rc3
> vanilla mm-highpcpopt-v1r4
> Hmean page_fault1-processes-2 2694662.26 ( 0.00%) 2696801.07 ( 0.08%)
> Hmean page_fault1-processes-5 6425819.34 ( 0.00%) 6426573.21 ( 0.01%)
> Hmean page_fault1-processes-8 9642169.10 ( 0.00%) 9647444.94 ( 0.05%)
> Hmean page_fault1-processes-12 12167502.10 ( 0.00%) 12073323.10 * -0.77%*
> Hmean page_fault1-processes-21 15636859.03 ( 0.00%) 15587449.50 * -0.32%*
> Hmean page_fault1-processes-30 25157348.61 ( 0.00%) 25111707.15 * -0.18%*
> Hmean page_fault1-processes-48 27694013.85 ( 0.00%) 27728568.63 ( 0.12%)
> Hmean page_fault1-processes-79 25928742.64 ( 0.00%) 25920933.41 ( -0.03%) <---
> Hmean page_fault1-processes-110 25730869.75 ( 0.00%) 25695727.57 * -0.14%*
> Hmean page_fault1-processes-141 25626992.42 ( 0.00%) 25675346.68 * 0.19%*
> Hmean page_fault1-processes-172 25611651.35 ( 0.00%) 25650940.14 * 0.15%*
> Hmean page_fault1-processes-203 25577298.75 ( 0.00%) 25584848.65 ( 0.03%)
> Hmean page_fault1-processes-234 25580686.07 ( 0.00%) 25601794.52 * 0.08%*
> Hmean page_fault1-processes-265 25570215.47 ( 0.00%) 25553191.25 ( -0.07%)
> Hmean page_fault1-processes-296 25549488.62 ( 0.00%) 25530311.58 ( -0.08%)
> Hmean page_fault1-processes-320 25555149.05 ( 0.00%) 25585059.83 ( 0.12%)
>
> The differences are mostly within the noise and the difference close to
> $nr_cpus is negligible.
>
> Signed-off-by: Mel Gorman <[email protected]>


Reviewed-by: Vlastimil Babka <[email protected]>

2022-02-16 15:59:48

by Vlastimil Babka

Subject: Re: [PATCH 5/5] mm/page_alloc: Limit number of high-order pages on PCP during bulk free

On 2/15/22 15:51, Mel Gorman wrote:
> When a PCP is mostly used for frees, high-order pages can sit on PCP
> lists for some time. This is problematic when the allocation pattern is all
> allocations from one CPU and all frees from another, resulting in colder
> pages being used. When bulk freeing pages, limit the number of high-order
> pages that are stored on the PCP lists.
>
> Netperf running on localhost exhibits this pattern and while it does
> not matter for some machines, it does matter for others with smaller
> caches where cache misses cause problems due to reduced page reuse.
> Pages freed directly to the buddy list may be reused quickly while still
> cache hot, whereas pages stored on the PCP lists may be cache cold by the
> time free_pcppages_bulk() is called.
>
> Using perf kmem:mm_page_alloc, the 5 most used page frames were
>
> 5.17-rc3
> 13041 pfn=0x111a30
> 13081 pfn=0x5814d0
> 13097 pfn=0x108258
> 13121 pfn=0x689598
> 13128 pfn=0x5814d8
>
> 5.17-revert-highpcp
> 192009 pfn=0x54c140
> 195426 pfn=0x1081d0
> 200908 pfn=0x61c808
> 243515 pfn=0xa9dc20
> 402523 pfn=0x222bb8
>
> 5.17-full-series
> 142693 pfn=0x346208
> 162227 pfn=0x13bf08
> 166413 pfn=0x2711e0
> 166950 pfn=0x2702f8
>
> The spread is wider with the series applied as there is still a delay
> before pages freed to one PCP are released, trading fast reuse against
> reduced zone lock acquisition.
>
> From the machine used to gather the traces, the headline performance
> was equivalent.
>
> netperf-tcp
> 5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
> vanilla mm-reverthighpcp-v1r1 mm-highpcplimit-v1r12
> Hmean 64 839.93 ( 0.00%) 840.77 ( 0.10%) 835.34 * -0.55%*
> Hmean 128 1614.22 ( 0.00%) 1622.07 * 0.49%* 1604.18 * -0.62%*
> Hmean 256 2952.00 ( 0.00%) 2953.19 ( 0.04%) 2959.46 ( 0.25%)
> Hmean 1024 10291.67 ( 0.00%) 10239.17 ( -0.51%) 10287.05 ( -0.04%)
> Hmean 2048 17335.08 ( 0.00%) 17399.97 ( 0.37%) 17125.73 * -1.21%*
> Hmean 3312 22628.15 ( 0.00%) 22471.97 ( -0.69%) 22414.24 * -0.95%*
> Hmean 4096 25009.50 ( 0.00%) 24752.83 * -1.03%* 24620.03 * -1.56%*
> Hmean 8192 32745.01 ( 0.00%) 31682.63 * -3.24%* 32475.31 ( -0.82%)
> Hmean 16384 39759.59 ( 0.00%) 36805.78 * -7.43%* 39291.42 ( -1.18%)
>
> From a 1-socket skylake machine with a small CPU cache that suffers
> more if cache misses are too high
>
> netperf-tcp
> 5.17.0-rc3 5.17.0-rc3 5.17.0-rc3
> vanilla mm-reverthighpcp-v1 mm-highpcplimit-v1
> Min 64 935.38 ( 0.00%) 939.40 ( 0.43%) 940.11 ( 0.51%)
> Min 128 1831.69 ( 0.00%) 1856.15 ( 1.34%) 1849.30 ( 0.96%)
> Min 256 3560.61 ( 0.00%) 3659.25 ( 2.77%) 3654.12 ( 2.63%)
> Min 1024 13165.24 ( 0.00%) 13444.74 ( 2.12%) 13281.71 ( 0.88%)
> Min 2048 22706.44 ( 0.00%) 23219.67 ( 2.26%) 23027.31 ( 1.41%)
> Min 3312 30960.26 ( 0.00%) 31985.01 ( 3.31%) 31484.40 ( 1.69%)
> Min 4096 35149.03 ( 0.00%) 35997.44 ( 2.41%) 35891.92 ( 2.11%)
> Min 8192 48064.73 ( 0.00%) 49574.05 ( 3.14%) 48928.89 ( 1.80%)
> Min 16384 58017.25 ( 0.00%) 60352.93 ( 4.03%) 60691.14 ( 4.61%)
> Hmean 64 938.95 ( 0.00%) 941.50 * 0.27%* 940.47 ( 0.16%)
> Hmean 128 1843.10 ( 0.00%) 1857.58 * 0.79%* 1855.83 * 0.69%*
> Hmean 256 3573.07 ( 0.00%) 3667.45 * 2.64%* 3662.08 * 2.49%*
> Hmean 1024 13206.52 ( 0.00%) 13487.80 * 2.13%* 13351.11 * 1.09%*
> Hmean 2048 22870.23 ( 0.00%) 23337.96 * 2.05%* 23149.68 * 1.22%*
> Hmean 3312 31001.99 ( 0.00%) 32206.50 * 3.89%* 31849.40 * 2.73%*
> Hmean 4096 35364.59 ( 0.00%) 36490.96 * 3.19%* 36112.91 * 2.12%*
> Hmean 8192 48497.71 ( 0.00%) 49954.05 * 3.00%* 49384.50 * 1.83%*
> Hmean 16384 58410.86 ( 0.00%) 60839.80 * 4.16%* 61362.12 * 5.05%*
>
> Note that this was a machine that did not benefit from caching high-order
> pages and performance is almost restored with the series applied. It's not
> fully restored as cache misses are still higher. This is a trade-off
> between optimising for a workload that does all allocs on one CPU and frees
> on another, and more general workloads that need high-order pages for SLUB
> and benefit from avoiding zone->lock for every SLUB refill/drain.
>
> Signed-off-by: Mel Gorman <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> mm/page_alloc.c | 26 +++++++++++++++++++++-----
> 1 file changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6881175b27df..cfb3cbad152c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3314,10 +3314,15 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
> return true;
> }
>
> -static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
> +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
> + bool free_high)
> {
> int min_nr_free, max_nr_free;
>
> + /* Free everything if batch freeing high-order pages. */
> + if (unlikely(free_high))
> + return pcp->count;
> +
> /* Check for PCP disabled or boot pageset */
> if (unlikely(high < batch))
> return 1;
> @@ -3338,11 +3343,12 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch)
> return batch;
> }
>
> -static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
> +static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
> + bool free_high)
> {
> int high = READ_ONCE(pcp->high);
>
> - if (unlikely(!high))
> + if (unlikely(!high || free_high))
> return 0;
>
> if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
> @@ -3362,17 +3368,27 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
> struct per_cpu_pages *pcp;
> int high;
> int pindex;
> + bool free_high;
>
> __count_vm_event(PGFREE);
> pcp = this_cpu_ptr(zone->per_cpu_pageset);
> pindex = order_to_pindex(migratetype, order);
> list_add(&page->lru, &pcp->lists[pindex]);
> pcp->count += 1 << order;
> - high = nr_pcp_high(pcp, zone);
> +
> + /*
> + * As high-order pages other than THP's stored on PCP can contribute
> + * to fragmentation, limit the number stored when PCP is heavily
> + * freeing without allocation. The remainder after bulk freeing
> + * stops will be drained from vmstat refresh context.
> + */
> + free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);
> +
> + high = nr_pcp_high(pcp, zone, free_high);
> if (pcp->count >= high) {
> int batch = READ_ONCE(pcp->batch);
>
> - free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch), pcp, pindex);
> + free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high), pcp, pindex);
> }
> }
>