This series is meant to improve zone->lock scalability for order-0 pages.
With the will-it-scale/page_fault1 workload on a 2-socket Intel Skylake
server with 112 CPUs, the CPUs spend 80% of their time spinning on
zone->lock. The perf profile shows the most time-consuming part under
zone->lock is the cache misses on "struct page", so here I'm trying to
avoid those cache misses.
Patch 1/4 adds some wrapper functions for adding/removing a page
to/from buddy and has no functional changes.
Patch 2/4 skips merging for order-0 pages to avoid cache misses on the
buddy's "struct page". On a 2-socket Intel Skylake, this has a very good
effect on the free path for will-it-scale/page_fault1 at full load in
that it reduced zone->lock contention on the free path from 35% to 1.1%.
It also shows good results for the parallel free(*) workload by reducing
zone->lock contention from 90% to almost zero (lru_lock contention
increased from almost 0 to 90% though).
Patch 3/4 deals with allocation-path zone->lock contention by not
touching pages on the free_list one by one inside zone->lock. Together
with patch 2/4, zone->lock contention is entirely eliminated for
will-it-scale/page_fault1 at full load, though this patch adds some
overhead to manage clusters on the free path and it has a bad effect on
the parallel free workload in that it increased zone->lock contention
from almost 0 to 25%.
Patch 4/4 is an optimization of the cluster operation on the free path.
It decreases the number of times add_to_cluster() has to be called and
restores performance for the parallel free workload by reducing
zone->lock contention to almost 0% again.
The good thing about this patchset is that it eliminates zone->lock
contention for will-it-scale/page_fault1 and parallel free on big
servers (contention shifted to lru_lock). The bad things are:
- it adds some overhead in the compaction path, which now has to do
  the merging for those merge-skipped order-0 pages;
- it is unfriendly to high-order page allocation since we no longer do
  merging for order-0 pages.
To see how much effect it has on compaction, mmtests/stress-highalloc
was used on a desktop machine with 8 CPUs and 4G memory.
(mmtests/stress-highalloc: make N copies of the kernel tree and start
building them to consume almost all memory with reclaimable file page
cache. This file page cache will not be returned to buddy, which
effectively makes it a worst case for a high-order page workload. Then,
after 5 minutes, start allocating X order-9 pages to see how well
compaction works.)
With a delay of 100ms between allocations:
kernel      success_rate  average_time_of_alloc_one_hugepage
base        58%           3.95927e+06 ns
patch2/4    58%           5.45935e+06 ns
patch4/4    57%           6.59174e+06 ns
With a delay of 1ms between allocations:
kernel      success_rate  average_time_of_alloc_one_hugepage
base        53%           3.17362e+06 ns
patch2/4    44%           2.31637e+06 ns
patch4/4    59%           2.73029e+06 ns
Comparing patch4/4's result with base, it performed OK I think. This is
probably because compaction is a heavy job, so the added overhead
doesn't affect it much.
To see how much effect it has on workloads that use hugepages, I did the
following test on a 2-socket Intel Skylake with 112 CPUs/64G memory:
1 Break all high-order pages by starting a program that consumes almost
all memory with anonymous pages and then exits. This creates an
extremely bad case for this patchset compared to vanilla, which always
does merging;
2 Start 56 processes of will-it-scale/page_fault1 that use hugepages
by calling madvise(MADV_HUGEPAGE). To make things worse for this
patchset, start another 56 processes of will-it-scale/page_fault1 that
use order-0 pages to continually cause trouble for the 56 THP users.
Let them run for 5 minutes.
Score result (higher is better):
kernel      order0           THP
base        1522246          10540254
patch2/4    5266247 (+246%)  3309816 (-69%)
patch4/4    2234073 (+47%)   9610295 (-8.8%)
TBH, I'm not sure if the tests above are good enough to expose the
problems of this patchset. So if you have any thoughts on this patchset,
please feel free to let me know, thanks.
(*) Parallel free is a workload that I use to see how fast freeing a
large VMA in parallel can be. I tested this on a 4-socket Intel Skylake
machine with 768G memory. The test program starts by doing a 512G anon
memory allocation with mmap() and then exits, to measure how fast the
exit can be. The parallelism is implemented inside the kernel and has
been posted before:
http://lkml.kernel.org/r/[email protected]
A branch is maintained here in case someone wants to give it a try:
https://github.com/aaronlu/linux zone_lock_rfc_v2
v2:
Rewrote the allocation path optimization; compaction can happen now and
there are no crashes that I'm aware of.
v1 is here:
https://lkml.kernel.org/r/[email protected]
Aaron Lu (4):
mm/page_alloc: use helper functions to add/remove a page to/from buddy
mm/__free_one_page: skip merge for order-0 page unless compaction
failed
mm/rmqueue_bulk: alloc without touching individual page structure
mm/free_pcppages_bulk: reduce overhead of cluster operation on free
path
Documentation/vm/struct_page_field | 10 +
include/linux/mm_types.h | 3 +
include/linux/mmzone.h | 35 +++
mm/compaction.c | 17 +-
mm/internal.h | 61 +++++
mm/page_alloc.c | 488 +++++++++++++++++++++++++++++++++----
6 files changed, 563 insertions(+), 51 deletions(-)
create mode 100644 Documentation/vm/struct_page_field
--
2.14.3
There are multiple places that add/remove a page to/from buddy;
introduce helper functions for them.
This also makes it easier to add code when a page is added to/removed
from buddy.
Signed-off-by: Aaron Lu <[email protected]>
---
mm/page_alloc.c | 65 ++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 39 insertions(+), 26 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3db9cfb2265b..3cdf1e10d412 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -736,12 +736,41 @@ static inline void set_page_order(struct page *page, unsigned int order)
__SetPageBuddy(page);
}
+static inline void add_to_buddy_common(struct page *page, struct zone *zone,
+ unsigned int order, int mt)
+{
+ set_page_order(page, order);
+ zone->free_area[order].nr_free++;
+}
+
+static inline void add_to_buddy_head(struct page *page, struct zone *zone,
+ unsigned int order, int mt)
+{
+ add_to_buddy_common(page, zone, order, mt);
+ list_add(&page->lru, &zone->free_area[order].free_list[mt]);
+}
+
+static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
+ unsigned int order, int mt)
+{
+ add_to_buddy_common(page, zone, order, mt);
+ list_add_tail(&page->lru, &zone->free_area[order].free_list[mt]);
+}
+
static inline void rmv_page_order(struct page *page)
{
__ClearPageBuddy(page);
set_page_private(page, 0);
}
+static inline void remove_from_buddy(struct page *page, struct zone *zone,
+ unsigned int order)
+{
+ list_del(&page->lru);
+ zone->free_area[order].nr_free--;
+ rmv_page_order(page);
+}
+
/*
* This function checks whether a page is free && is the buddy
* we can do coalesce a page and its buddy if
@@ -845,13 +874,10 @@ static inline void __free_one_page(struct page *page,
* Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
* merge with it and move up one order.
*/
- if (page_is_guard(buddy)) {
+ if (page_is_guard(buddy))
clear_page_guard(zone, buddy, order, migratetype);
- } else {
- list_del(&buddy->lru);
- zone->free_area[order].nr_free--;
- rmv_page_order(buddy);
- }
+ else
+ remove_from_buddy(buddy, zone, order);
combined_pfn = buddy_pfn & pfn;
page = page + (combined_pfn - pfn);
pfn = combined_pfn;
@@ -883,8 +909,6 @@ static inline void __free_one_page(struct page *page,
}
done_merging:
- set_page_order(page, order);
-
/*
* If this is not the largest possible page, check if the buddy
* of the next-highest order is free. If it is, it's possible
@@ -901,15 +925,12 @@ static inline void __free_one_page(struct page *page,
higher_buddy = higher_page + (buddy_pfn - combined_pfn);
if (pfn_valid_within(buddy_pfn) &&
page_is_buddy(higher_page, higher_buddy, order + 1)) {
- list_add_tail(&page->lru,
- &zone->free_area[order].free_list[migratetype]);
- goto out;
+ add_to_buddy_tail(page, zone, order, migratetype);
+ return;
}
}
- list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
-out:
- zone->free_area[order].nr_free++;
+ add_to_buddy_head(page, zone, order, migratetype);
}
/*
@@ -1731,9 +1752,7 @@ static inline void expand(struct zone *zone, struct page *page,
if (set_page_guard(zone, &page[size], high, migratetype))
continue;
- list_add(&page[size].lru, &area->free_list[migratetype]);
- area->nr_free++;
- set_page_order(&page[size], high);
+ add_to_buddy_head(&page[size], zone, high, migratetype);
}
}
@@ -1877,9 +1896,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
struct page, lru);
if (!page)
continue;
- list_del(&page->lru);
- rmv_page_order(page);
- area->nr_free--;
+ remove_from_buddy(page, zone, current_order);
expand(zone, page, order, current_order, area, migratetype);
set_pcppage_migratetype(page, migratetype);
return page;
@@ -2795,9 +2812,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
}
/* Remove page from free list */
- list_del(&page->lru);
- zone->free_area[order].nr_free--;
- rmv_page_order(page);
+ remove_from_buddy(page, zone, order);
/*
* Set the pageblock if the isolated page is at least half of a
@@ -7886,9 +7901,7 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
pr_info("remove from free list %lx %d %lx\n",
pfn, 1 << order, end_pfn);
#endif
- list_del(&page->lru);
- rmv_page_order(page);
- zone->free_area[order].nr_free--;
+ remove_from_buddy(page, zone, order);
for (i = 0; i < (1 << order); i++)
SetPageReserved((page+i));
pfn += (1 << order);
--
2.14.3
After "no_merge for order 0", the biggest overhead in the free path for
order-0 pages is add_to_cluster(). As pages are freed one by one, it
causes frequent calls to add_to_cluster().
Ideally, if only one migratetype pcp list has pages to free and
count == pcp->batch in free_pcppages_bulk(), we can avoid calling
add_to_cluster() once per page and instead add them in one go as a
single cluster. Let's call this ideal case single_mt, where
single_mt_unmovable represents the case when only the unmovable pcp
list has pages and count in free_pcppages_bulk() equals pcp->batch.
Ditto for single_mt_movable and single_mt_reclaimable.
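The fast-path condition above can be distilled into a small predicate
(a user-space sketch with hypothetical names; in the patch itself the
case is detected via batch_free == MIGRATE_PCPTYPES together with
count == pcp->batch):

```c
#define NR_PCPTYPES 3	/* unmovable, movable, reclaimable */

/* Return 1 when all pages to be freed sit on exactly one pcp
 * migratetype list and the count to free covers that whole list,
 * i.e. the batch can become one cluster in a single operation. */
int is_single_mt(const int nr_pages[NR_PCPTYPES], int count, int batch)
{
	int mt, nonempty = 0, total = 0;

	for (mt = 0; mt < NR_PCPTYPES; mt++) {
		if (nr_pages[mt])
			nonempty++;
		total += nr_pages[mt];
	}
	return nonempty == 1 && total == count && count == batch;
}
```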
I added some counters to see how often this ideal case occurs. On my
desktop, after boot:
free_pcppages_bulk: 6268
single_mt:          3885 (62%)
free_pcppages_bulk means the number of times this function got called.
single_mt means the number of times only one pcp migratetype list had
pages to be freed and count == pcp->batch.
single_mt can be further divided into the following 3 cases:
single_mt_unmovable:    263 (4%)
single_mt_movable:     2566 (41%)
single_mt_reclaimable: 1056 (17%)
After a kbuild with a distro kconfig:
free_pcppages_bulk: 9100508
single_mt:          8440310 (93%)
Again, single_mt can be further divided:
single_mt_unmovable:       290 (0%)
single_mt_movable:     8435483 (92.75%)
single_mt_reclaimable:    4537 (0.05%)
Considering that capturing the single_mt_movable case requires fewer
lines of code and it is by far the most common ideal case, I think
capturing this case alone is enough.
This optimization brings zone->lock contention down from 25% to
almost zero again with the parallel free workload.
Signed-off-by: Aaron Lu <[email protected]>
---
mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 42 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac93833a2877..ad15e4ef99d6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1281,6 +1281,36 @@ static bool bulkfree_pcp_prepare(struct page *page)
}
#endif /* CONFIG_DEBUG_VM */
+static inline bool free_cluster_pages(struct zone *zone, struct list_head *list,
+ int mt, int count)
+{
+ struct cluster *c;
+ struct page *page, *n;
+
+ if (!can_skip_merge(zone, 0))
+ return false;
+
+ if (count != this_cpu_ptr(zone->pageset)->pcp.batch)
+ return false;
+
+ c = new_cluster(zone, count, list_first_entry(list, struct page, lru));
+ if (unlikely(!c))
+ return false;
+
+ list_for_each_entry_safe(page, n, list, lru) {
+ set_page_order(page, 0);
+ set_page_merge_skipped(page);
+ page->cluster = c;
+ list_add(&page->lru, &zone->free_area[0].free_list[mt]);
+ }
+
+ INIT_LIST_HEAD(list);
+ zone->free_area[0].nr_free += count;
+ __mod_zone_page_state(zone, NR_FREE_PAGES, count);
+
+ return true;
+}
+
/*
* Frees a number of pages from the PCP lists
* Assumes all pages on list are in same zone, and of same order.
@@ -1295,9 +1325,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
static void free_pcppages_bulk(struct zone *zone, int count,
struct per_cpu_pages *pcp)
{
- int migratetype = 0;
- int batch_free = 0;
- bool isolated_pageblocks;
+ int migratetype = MIGRATE_MOVABLE;
+ int batch_free = 0, saved_count = count;
+ bool isolated_pageblocks, single_mt = false;
struct page *page, *tmp;
LIST_HEAD(head);
@@ -1319,8 +1349,11 @@ static void free_pcppages_bulk(struct zone *zone, int count,
} while (list_empty(list));
/* This is the only non-empty list. Free them all. */
- if (batch_free == MIGRATE_PCPTYPES)
+ if (batch_free == MIGRATE_PCPTYPES) {
batch_free = count;
+ if (batch_free == saved_count)
+ single_mt = true;
+ }
do {
unsigned long pfn, buddy_pfn;
@@ -1359,9 +1392,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
spin_lock(&zone->lock);
isolated_pageblocks = has_isolate_pageblock(zone);
+ if (!isolated_pageblocks && single_mt)
+ free_cluster_pages(zone, &head, migratetype, saved_count);
+
/*
* Use safe version since after __free_one_page(),
* page->lru.next will not point to original list.
+ *
+ * If free_cluster_pages() succeeds, head will be an empty list here.
*/
list_for_each_entry_safe(page, tmp, &head, lru) {
int mt = get_pcppage_migratetype(page);
--
2.14.3
A profile on an Intel Skylake server shows the most time-consuming part
under zone->lock on the allocation path is accessing the to-be-returned
pages' "struct page" on the free_list inside zone->lock. One explanation
is that different CPUs release pages to the head of the free_list, so
those pages' 'struct page' may very well be cache cold for the
allocating CPU when it grabs them from the free_list's head. The purpose
here is to avoid touching these pages one by one inside zone->lock.
One idea is to just take the requested number of pages off the free_list
with something like list_cut_position() and adjust nr_free of the
free_area accordingly inside zone->lock, while other operations, like
clearing the PageBuddy flag of these pages, are done outside of
zone->lock.
list_cut_position() needs to know where to cut; that's what the new
'struct cluster' is meant to provide. All pages on order 0's free_list
belong to a cluster, so when a number of pages is needed, the cluster
to which the head page of the free_list belongs is checked and then the
tail page of that cluster can be found. With the tail page,
list_cut_position() can be used to drop the whole cluster off the
free_list. The 'struct cluster' also has 'nr' to tell how many pages the
cluster holds, so nr_free of the free_area can be adjusted inside the
lock too.
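The O(1) grab can be illustrated in user space with plain doubly-linked
lists (a simplified sketch with hypothetical types; the kernel code
operates on struct page, list_head and list_cut_position()):

```c
#include <stddef.h>

struct node { struct node *prev, *next; };	/* stand-in for list_head */
struct clus { struct node *tail; int nr; };	/* stand-in for struct cluster */

/* Build a circular list on `head` holding n nodes from nodes[],
 * inserting at the head like list_add() so nodes[0] ends up first. */
void build_list(struct node *head, struct node *nodes, int n)
{
	int i;

	head->prev = head->next = head;
	for (i = n - 1; i >= 0; i--) {
		nodes[i].next = head->next;
		nodes[i].prev = head;
		head->next->prev = &nodes[i];
		head->next = &nodes[i];
	}
}

/* Cut the whole cluster `c` (list head .. c->tail) off the list in
 * O(1): only the boundary links are touched, none of the cluster's
 * interior nodes, which is what avoids the per-page cache misses.
 * Returns the number of pages grabbed so the caller can adjust
 * nr_free; *first points at the detached segment. */
int cut_cluster(struct node *head, struct clus *c, struct node **first)
{
	struct node *start = head->next;	/* cluster begins at head */
	struct node *end = c->tail;
	int nr = c->nr;

	head->next = end->next;			/* unlink [start..end] */
	end->next->prev = head;
	start->prev = end->next = NULL;		/* detach the segment */
	*first = start;
	c->tail = NULL;				/* cluster is now empty */
	c->nr = 0;
	return nr;
}
```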
This creates a race window though: from the moment zone->lock is dropped
till these pages' PageBuddy flags get cleared, these pages are not in
buddy but still have the PageBuddy flag set.
This doesn't cause problems for users that access buddy pages through
the free_list. But other users, like move_freepages() which is used to
move a pageblock's pages from one migratetype to another in the
fallback allocation path, will test the PageBuddy flag of a page derived
from a PFN. The end result could be that pages in the race window are
moved back to the free_list of another migratetype. For this reason, a
synchronization function zone_wait_cluster_alloc() is introduced to
wait till all pages are in the correct state. This function is meant to
be called with zone->lock held, so after it returns, we do not need to
worry about new pages entering the racy state.
Another user is compaction, which scans a pageblock for migratable
candidates. In this process, pages derived from PFNs are checked for
the PageBuddy flag to decide if they are merge-skipped pages.
To avoid a racy page getting merged back into buddy, the
zone_wait_and_disable_cluster_alloc() function is introduced to:
1 disable clustered allocation by increasing zone->cluster.disable_depth;
2 wait till the race window passes by calling zone_wait_cluster_alloc().
This function is also meant to be called with zone->lock held, so after
it returns, all pages are in the correct state and no more cluster
allocations will be attempted till zone_enable_cluster_alloc() is called
to decrease zone->cluster.disable_depth.
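The two-step protocol can be modeled in user space with C11 atomics (a
simplified model with hypothetical names; the kernel versions take
zone->lock around the depth updates and use cpu_relax() in the wait
loop):

```c
#include <stdatomic.h>

/* Model of the cluster-alloc gate: disable_depth blocks new cluster
 * allocations, in_progress counts pages still in the racy window
 * (PageBuddy set but already cut off the free_list). */
struct cluster_gate {
	unsigned long disable_depth;	/* written under zone->lock */
	atomic_int in_progress;		/* bumped by cluster allocators */
};

int cluster_alloc_allowed(struct cluster_gate *g)
{
	return g->disable_depth == 0;
}

/* Spin until every racy page has had its flags properly cleared. */
void wait_cluster_alloc(struct cluster_gate *g)
{
	while (atomic_load(&g->in_progress))
		;	/* cpu_relax() in the kernel */
}

/* Compaction-side entry: forbid new cluster allocs, then drain. */
void wait_and_disable_cluster_alloc(struct cluster_gate *g)
{
	g->disable_depth++;
	wait_cluster_alloc(g);
}

void enable_cluster_alloc(struct cluster_gate *g)
{
	g->disable_depth--;
}
```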
The two patches together eliminate zone->lock contention entirely, but
at the same time pgdat->lru_lock contention rose to 82%. Final
performance increased by about 8.3%.
Suggested-by: Ying Huang <[email protected]>
Signed-off-by: Aaron Lu <[email protected]>
---
Documentation/vm/struct_page_field | 5 +
include/linux/mm_types.h | 2 +
include/linux/mmzone.h | 35 +++++
mm/compaction.c | 4 +
mm/internal.h | 34 +++++
mm/page_alloc.c | 288 +++++++++++++++++++++++++++++++++++--
6 files changed, 360 insertions(+), 8 deletions(-)
diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
index 1ab6c19ccc7a..bab738ea4e0a 100644
--- a/Documentation/vm/struct_page_field
+++ b/Documentation/vm/struct_page_field
@@ -3,3 +3,8 @@ Used to indicate this page skipped merging when added to buddy. This
field only makes sense if the page is in Buddy and is order zero.
It's a bug if any higher order pages in Buddy has this field set.
Shares space with index.
+
+cluster:
+Order 0 Buddy pages are grouped in cluster on free_list to speed up
+allocation. This field stores the cluster pointer for them.
+Shares space with mapping.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7edc4e102a8e..49fe9d755a7c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -84,6 +84,8 @@ struct page {
void *s_mem; /* slab first object */
atomic_t compound_mapcount; /* first tail page */
/* page_deferred_list().next -- second tail page */
+
+ struct cluster *cluster; /* order 0 cluster this page belongs to */
};
/* Second double word */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7522a6987595..09ba9d3cc385 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -355,6 +355,40 @@ enum zone_type {
#ifndef __GENERATING_BOUNDS_H
+struct cluster {
+ struct page *tail; /* tail page of the cluster */
+ int nr; /* how many pages are in this cluster */
+};
+
+struct order0_cluster {
+ /* order 0 cluster array, dynamically allocated */
+ struct cluster *array;
+ /*
+ * order 0 cluster array length, also used to indicate if cluster
+ * allocation is enabled for this zone(cluster allocation is disabled
+ * for small zones whose batch size is smaller than 1, like DMA zone)
+ */
+ int len;
+ /*
+ * smallest position from where we search for an
+ * empty cluster from the cluster array
+ */
+ int zero_bit;
+ /* bitmap used to quickly locate an empty cluster from cluster array */
+ unsigned long *bitmap;
+
+ /* disable cluster allocation to avoid new pages becoming racy state. */
+ unsigned long disable_depth;
+
+ /*
+ * used to indicate if there are pages allocated in cluster mode
+ * still in racy state. Caller with zone->lock held could use helper
+ * function zone_wait_cluster_alloc() to wait all such pages to exit
+ * the race window.
+ */
+ atomic_t in_progress;
+};
+
struct zone {
/* Read-mostly fields */
@@ -459,6 +493,7 @@ struct zone {
/* free areas of different sizes */
struct free_area free_area[MAX_ORDER];
+ struct order0_cluster cluster;
/* zone flags, see below */
unsigned long flags;
diff --git a/mm/compaction.c b/mm/compaction.c
index fb9031fdca41..e71fa82786a1 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1601,6 +1601,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
migrate_prep_local();
+ zone_wait_and_disable_cluster_alloc(zone);
+
while ((ret = compact_finished(zone, cc)) == COMPACT_CONTINUE) {
int err;
@@ -1699,6 +1701,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
zone->compact_cached_free_pfn = free_pfn;
}
+ zone_enable_cluster_alloc(zone);
+
count_compact_events(COMPACTMIGRATE_SCANNED, cc->total_migrate_scanned);
count_compact_events(COMPACTFREE_SCANNED, cc->total_free_scanned);
diff --git a/mm/internal.h b/mm/internal.h
index 2bfbaae2d835..1b0535af1b49 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -557,12 +557,46 @@ static inline bool can_skip_merge(struct zone *zone, int order)
if (order)
return false;
+ /*
+ * Clustered allocation is only disabled when high-order pages
+ * are needed, e.g. in compaction and CMA alloc, so we should
+ * also skip merging in that case.
+ */
+ if (zone->cluster.disable_depth)
+ return false;
+
return true;
}
+
+static inline void zone_wait_cluster_alloc(struct zone *zone)
+{
+ while (atomic_read(&zone->cluster.in_progress))
+ cpu_relax();
+}
+
+static inline void zone_wait_and_disable_cluster_alloc(struct zone *zone)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&zone->lock, flags);
+ zone->cluster.disable_depth++;
+ zone_wait_cluster_alloc(zone);
+ spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+static inline void zone_enable_cluster_alloc(struct zone *zone)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&zone->lock, flags);
+ zone->cluster.disable_depth--;
+ spin_unlock_irqrestore(&zone->lock, flags);
+}
#else /* CONFIG_COMPACTION */
static inline bool can_skip_merge(struct zone *zone, int order)
{
return false;
}
+static inline void zone_wait_cluster_alloc(struct zone *zone) {}
+static inline void zone_wait_and_disable_cluster_alloc(struct zone *zone) {}
+static inline void zone_enable_cluster_alloc(struct zone *zone) {}
#endif /* CONFIG_COMPACTION */
#endif /* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eb78014dfbde..ac93833a2877 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -746,6 +746,82 @@ static inline void set_page_order(struct page *page, unsigned int order)
__SetPageBuddy(page);
}
+static inline struct cluster *new_cluster(struct zone *zone, int nr,
+ struct page *tail)
+{
+ struct order0_cluster *cluster = &zone->cluster;
+ int n = find_next_zero_bit(cluster->bitmap, cluster->len, cluster->zero_bit);
+ if (n == cluster->len) {
+ printk_ratelimited("node%d zone %s cluster used up\n",
+ zone->zone_pgdat->node_id, zone->name);
+ return NULL;
+ }
+ cluster->zero_bit = n;
+ set_bit(n, cluster->bitmap);
+ cluster->array[n].nr = nr;
+ cluster->array[n].tail = tail;
+ return &cluster->array[n];
+}
+
+static inline struct cluster *add_to_cluster_common(struct page *page,
+ struct zone *zone, struct page *neighbor)
+{
+ struct cluster *c;
+
+ if (neighbor) {
+ int batch = this_cpu_ptr(zone->pageset)->pcp.batch;
+ c = neighbor->cluster;
+ if (c && c->nr < batch) {
+ page->cluster = c;
+ c->nr++;
+ return c;
+ }
+ }
+
+ c = new_cluster(zone, 1, page);
+ if (unlikely(!c))
+ return NULL;
+
+ page->cluster = c;
+ return c;
+}
+
+/*
+ * Add this page to the cluster where the previous head page belongs.
+ * Called after page is added to free_list(and becoming the new head).
+ */
+static inline void add_to_cluster_head(struct page *page, struct zone *zone,
+ int order, int mt)
+{
+ struct page *neighbor;
+
+ if (order || !zone->cluster.len)
+ return;
+
+ neighbor = page->lru.next == &zone->free_area[0].free_list[mt] ?
+ NULL : list_entry(page->lru.next, struct page, lru);
+ add_to_cluster_common(page, zone, neighbor);
+}
+
+/*
+ * Add this page to the cluster where the previous tail page belongs.
+ * Called after page is added to free_list(and becoming the new tail).
+ */
+static inline void add_to_cluster_tail(struct page *page, struct zone *zone,
+ int order, int mt)
+{
+ struct page *neighbor;
+ struct cluster *c;
+
+ if (order || !zone->cluster.len)
+ return;
+
+ neighbor = page->lru.prev == &zone->free_area[0].free_list[mt] ?
+ NULL : list_entry(page->lru.prev, struct page, lru);
+ c = add_to_cluster_common(page, zone, neighbor);
+ c->tail = page;
+}
+
static inline void add_to_buddy_common(struct page *page, struct zone *zone,
unsigned int order, int mt)
{
@@ -765,6 +841,7 @@ static inline void add_to_buddy_head(struct page *page, struct zone *zone,
{
add_to_buddy_common(page, zone, order, mt);
list_add(&page->lru, &zone->free_area[order].free_list[mt]);
+ add_to_cluster_head(page, zone, order, mt);
}
static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
@@ -772,6 +849,7 @@ static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
{
add_to_buddy_common(page, zone, order, mt);
list_add_tail(&page->lru, &zone->free_area[order].free_list[mt]);
+ add_to_cluster_tail(page, zone, order, mt);
}
static inline void rmv_page_order(struct page *page)
@@ -780,9 +858,29 @@ static inline void rmv_page_order(struct page *page)
set_page_private(page, 0);
}
+/* called before removed from free_list */
+static inline void remove_from_cluster(struct page *page, struct zone *zone)
+{
+ struct cluster *c = page->cluster;
+ if (!c)
+ return;
+
+ page->cluster = NULL;
+ c->nr--;
+ if (!c->nr) {
+ int bit = c - zone->cluster.array;
+ c->tail = NULL;
+ clear_bit(bit, zone->cluster.bitmap);
+ if (bit < zone->cluster.zero_bit)
+ zone->cluster.zero_bit = bit;
+ } else if (page == c->tail)
+ c->tail = list_entry(page->lru.prev, struct page, lru);
+}
+
static inline void remove_from_buddy(struct page *page, struct zone *zone,
unsigned int order)
{
+ remove_from_cluster(page, zone);
list_del(&page->lru);
zone->free_area[order].nr_free--;
rmv_page_order(page);
@@ -2025,6 +2123,17 @@ static int move_freepages(struct zone *zone,
if (num_movable)
*num_movable = 0;
+ /*
+ * Cluster alloced pages may have their PageBuddy flag unclear yet
+ * after dropping zone->lock in rmqueue_bulk() and steal here could
+ * move them back to free_list. So it's necessary to wait till all
+ * those pages have their flags properly cleared.
+ *
+ * We do not need to disable cluster alloc though since we already
+ * held zone->lock and no allocation could happen.
+ */
+ zone_wait_cluster_alloc(zone);
+
for (page = start_page; page <= end_page;) {
if (!pfn_valid_within(page_to_pfn(page))) {
page++;
@@ -2049,8 +2158,10 @@ static int move_freepages(struct zone *zone,
}
order = page_order(page);
+ remove_from_cluster(page, zone);
list_move(&page->lru,
&zone->free_area[order].free_list[migratetype]);
+ add_to_cluster_head(page, zone, order, migratetype);
page += 1 << order;
pages_moved += 1 << order;
}
@@ -2199,7 +2310,9 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
single_page:
area = &zone->free_area[current_order];
+ remove_from_cluster(page, zone);
list_move(&page->lru, &area->free_list[start_type]);
+ add_to_cluster_head(page, zone, current_order, start_type);
}
/*
@@ -2460,6 +2573,145 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype)
return page;
}
+static int __init zone_order0_cluster_init(void)
+{
+ struct zone *zone;
+
+ for_each_zone(zone) {
+ int len, mt, batch;
+ unsigned long flags;
+ struct order0_cluster *cluster;
+
+ if (!managed_zone(zone))
+ continue;
+
+ /* no need to enable cluster allocation for batch<=1 zone */
+ preempt_disable();
+ batch = this_cpu_ptr(zone->pageset)->pcp.batch;
+ preempt_enable();
+ if (batch <= 1)
+ continue;
+
+ cluster = &zone->cluster;
+ /* FIXME: possible overflow of int type */
+ len = DIV_ROUND_UP(zone->managed_pages, batch);
+ cluster->array = vzalloc(len * sizeof(struct cluster));
+ if (!cluster->array)
+ return -ENOMEM;
+ cluster->bitmap = vzalloc(DIV_ROUND_UP(len, BITS_PER_LONG) *
+ sizeof(unsigned long));
+ if (!cluster->bitmap)
+ return -ENOMEM;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ cluster->len = len;
+ for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
+ struct page *page;
+ list_for_each_entry_reverse(page,
+ &zone->free_area[0].free_list[mt], lru)
+ add_to_cluster_head(page, zone, 0, mt);
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ }
+
+ return 0;
+}
+subsys_initcall(zone_order0_cluster_init);
+
+static inline int __rmqueue_bulk_cluster(struct zone *zone, unsigned long count,
+ struct list_head *list, int mt)
+{
+ struct list_head *head = &zone->free_area[0].free_list[mt];
+ int nr = 0;
+
+ while (nr < count) {
+ struct page *head_page;
+ struct list_head *tail, tmp_list;
+ struct cluster *c;
+ int bit;
+
+ head_page = list_first_entry_or_null(head, struct page, lru);
+ if (!head_page || !head_page->cluster)
+ break;
+
+ c = head_page->cluster;
+ tail = &c->tail->lru;
+
+ /* drop the cluster off free_list and attach to list */
+ list_cut_position(&tmp_list, head, tail);
+ list_splice_tail(&tmp_list, list);
+
+ nr += c->nr;
+ zone->free_area[0].nr_free -= c->nr;
+
+ /* this cluster is empty now */
+ c->tail = NULL;
+ c->nr = 0;
+ bit = c - zone->cluster.array;
+ clear_bit(bit, zone->cluster.bitmap);
+ if (bit < zone->cluster.zero_bit)
+ zone->cluster.zero_bit = bit;
+ }
+
+ return nr;
+}
+
+static inline int rmqueue_bulk_cluster(struct zone *zone, unsigned int order,
+ unsigned long count, struct list_head *list,
+ int migratetype)
+{
+ int alloced;
+ struct page *page;
+
+ /*
+ * Cluster alloc races with merging so don't try cluster alloc when we
+ * can't skip merging. Note that can_skip_merge() keeps the same return
+ * value from here till all pages have their flags properly processed,
+ * i.e. the end of the function where in_progress is incremented, even
+ * we have dropped the lock in the middle because the only place that
+ * can change can_skip_merge()'s return value is compaction code and
+ * compaction needs to wait on in_progress.
+ */
+ if (!can_skip_merge(zone, 0))
+ return 0;
+
+ /* Cluster alloc is disabled, mostly compaction is already in progress */
+ if (zone->cluster.disable_depth)
+ return 0;
+
+ /* Cluster alloc is disabled for this zone */
+ if (unlikely(!zone->cluster.len))
+ return 0;
+
+ alloced = __rmqueue_bulk_cluster(zone, count, list, migratetype);
+ if (!alloced)
+ return 0;
+
+ /*
+ * Cache miss on page structure could slow things down
+ * dramatically so accessing these alloced pages without
+ * holding lock for better performance.
+ *
+ * Since these pages still have PageBuddy set, there is a race
+ * window between now and when PageBuddy is cleared for them
+ * below. Any operation that would scan a pageblock and check
+ * PageBuddy(page), e.g. compaction, will need to wait till all
+ * such pages are properly processed. in_progress is used for
+ * such purpose so increase it now before dropping the lock.
+ */
+ atomic_inc(&zone->cluster.in_progress);
+ spin_unlock(&zone->lock);
+
+ list_for_each_entry(page, list, lru) {
+ rmv_page_order(page);
+ page->cluster = NULL;
+ set_pcppage_migratetype(page, migratetype);
+ }
+ atomic_dec(&zone->cluster.in_progress);
+
+ return alloced;
+}
+
/*
* Obtain a specified number of elements from the buddy allocator, all under
* a single hold of the lock, for efficiency. Add them to the supplied list.
@@ -2469,17 +2721,23 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
unsigned long count, struct list_head *list,
int migratetype)
{
- int i, alloced = 0;
+ int i, alloced;
+ struct page *page, *tmp;
spin_lock(&zone->lock);
- for (i = 0; i < count; ++i) {
- struct page *page = __rmqueue(zone, order, migratetype);
+ alloced = rmqueue_bulk_cluster(zone, order, count, list, migratetype);
+ if (alloced > 0) {
+ if (alloced >= count)
+ goto out;
+ else
+ spin_lock(&zone->lock);
+ }
+
+ for (; alloced < count; alloced++) {
+ page = __rmqueue(zone, order, migratetype);
if (unlikely(page == NULL))
break;
- if (unlikely(check_pcp_refill(page)))
- continue;
-
/*
* Split buddy pages returned by expand() are received here in
* physical page order. The page is added to the tail of
@@ -2491,7 +2749,18 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
* pages are ordered properly.
*/
list_add_tail(&page->lru, list);
- alloced++;
+ }
+ spin_unlock(&zone->lock);
+
+out:
+ i = alloced;
+ list_for_each_entry_safe(page, tmp, list, lru) {
+ if (unlikely(check_pcp_refill(page))) {
+ list_del(&page->lru);
+ alloced--;
+ continue;
+ }
+
if (is_migrate_cma(get_pcppage_migratetype(page)))
__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
-(1 << order));
@@ -2504,7 +2773,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
* pages added to the pcp list.
*/
__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
- spin_unlock(&zone->lock);
return alloced;
}
@@ -7744,6 +8012,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
unsigned long outer_start, outer_end;
unsigned int order;
int ret = 0;
+ struct zone *zone = page_zone(pfn_to_page(start));
struct compact_control cc = {
.nr_migratepages = 0,
@@ -7786,6 +8055,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
if (ret)
return ret;
+ zone_wait_and_disable_cluster_alloc(zone);
/*
* In case of -EBUSY, we'd like to know which page causes problem.
* So, just fall through. test_pages_isolated() has a tracepoint
@@ -7868,6 +8138,8 @@ int alloc_contig_range(unsigned long start, unsigned long end,
done:
undo_isolate_page_range(pfn_max_align_down(start),
pfn_max_align_up(end), migratetype);
+
+ zone_enable_cluster_alloc(zone);
return ret;
}
--
2.14.3
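The in_progress scheme used in the patch above can be sketched outside the kernel as follows. This is a simplified, hypothetical model: a plain pthread mutex and C11 atomics stand in for zone->lock and zone->cluster.in_progress, and it is not the actual kernel code.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Simplified model: pages are handed out with PageBuddy still set,
 * so scanners (e.g. compaction) must wait until in_progress drops
 * back to zero before trusting PageBuddy checks. */
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int in_progress;
static int processed;                /* demo: work done outside the lock */

static void mark_processed(void)
{
    processed++;                     /* stands in for rmv_page_order() etc. */
}

/* Allocation side: raise in_progress before dropping the lock, do the
 * per-page work unlocked, then lower it. */
static void alloc_pages_outside_lock(void (*process)(void))
{
    pthread_mutex_lock(&zone_lock);
    /* ... __rmqueue_bulk_cluster() would run here ... */
    atomic_fetch_add(&in_progress, 1);
    pthread_mutex_unlock(&zone_lock);

    process();                       /* lock no longer held */

    atomic_fetch_sub(&in_progress, 1);
}

/* Scanner side: wait until no allocation is mid-flight. */
static void wait_for_in_flight_allocs(void)
{
    while (atomic_load(&in_progress))
        ;                            /* the kernel would cpu_relax() here */
}
```

The design point is that the counter is raised while the lock is still held, so a scanner that takes the lock after us is guaranteed to observe either cleared PageBuddy flags or a non-zero in_progress.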
Running the will-it-scale/page_fault1 process mode workload on a 2-socket
Intel Skylake server showed severe contention on zone->lock: as much as
about 80% (42% on the allocation path and 35% on the free path) of CPU
cycles are burnt spinning. With perf, the most time consuming part inside
that lock on the free path is cache misses on page structures, mostly on
the to-be-freed page's buddy due to merging.
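The merge walk behind those cache misses can be sketched with the allocator's pfn arithmetic. This is a toy model, not kernel code: find_buddy_pfn mirrors the kernel's __find_buddy_pfn(), and the free_at[] array is a hypothetical stand-in for per-block PageBuddy state.

```c
/* Buddy of a 2^order-aligned block at pfn: flip bit 'order'. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
    return pfn ^ (1UL << order);
}

/* Count how many buddy "struct page"s freeing 'pfn' would touch, given
 * which blocks are free (indexed by block-start pfn; a toy stand-in for
 * PageBuddy). Each merge level reads one more, likely cache-cold, page. */
static int merge_touches(unsigned long pfn, const int *free_at,
                         unsigned int max_order)
{
    int touched = 0;
    unsigned int order;

    for (order = 0; order < max_order; order++) {
        unsigned long buddy = find_buddy_pfn(pfn, order);

        touched++;                   /* must inspect buddy's struct page */
        if (!free_at[buddy])
            break;                   /* buddy not free: merging stops */
        pfn &= buddy;                /* combined_pfn = buddy_pfn & pfn */
    }
    return touched;
}

/* Example neighbourhood: blocks starting at pfns 0-3 free, 4-7 not. */
static const int demo_free[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };
```

In this example, freeing pfn 1 walks three levels (inspecting the pages at pfns 0, 2 and 4), and each inspected struct page is a potential cache miss taken while zone->lock is held.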
One way to avoid this overhead is to not do any merging at all for
order-0 pages. With this approach, zone->lock contention on the free
path dropped to 1.1%, but the allocation side still has lock contention
as high as 42%. In the meantime, the dropped lock contention on the
free side doesn't translate into a performance increase; instead, it is
consumed by increased contention on the per-node lru_lock (which rose
from 5% to 37%), and the final performance dropped slightly, by about 1%.
Though performance dropped a little, this change almost eliminates zone
lock contention on the free path, and it is the foundation for the next
patch, which eliminates zone lock contention on the allocation path.
A new document file called "struct_page_field" is added to explain
the newly reused field in "struct page".
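A minimal model of the skip-merge bookkeeping described above can be sketched like this. struct toy_page and its in_buddy field are simplified stand-ins; only buddy_merge_skipped follows the patch's naming.

```c
#include <stdbool.h>

/* Toy stand-in for the repurposed field in struct page. */
struct toy_page {
    bool in_buddy;            /* models PageBuddy */
    bool buddy_merge_skipped; /* the new field: set only for order-0 */
};

/* Free path: an order-0 page goes straight onto the free list head,
 * marked so that compaction can merge it later if it needs order>0. */
static void free_order0_skip_merge(struct toy_page *page)
{
    page->in_buddy = true;
    page->buddy_merge_skipped = true;
}

/* Compaction: decide whether a deferred merge is needed. */
static bool needs_deferred_merge(const struct toy_page *page)
{
    return page->in_buddy && page->buddy_merge_skipped;
}

/* Any removal from buddy must clear the flag, since it shares space
 * with page->index in the real struct page. */
static void remove_from_buddy(struct toy_page *page)
{
    page->in_buddy = false;
    page->buddy_merge_skipped = false;
}
```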
Suggested-by: Dave Hansen <[email protected]>
Signed-off-by: Aaron Lu <[email protected]>
---
Documentation/vm/struct_page_field | 5 +++
include/linux/mm_types.h | 1 +
mm/compaction.c | 13 +++++-
mm/internal.h | 27 ++++++++++++
mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++-----
5 files changed, 122 insertions(+), 13 deletions(-)
create mode 100644 Documentation/vm/struct_page_field
diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
new file mode 100644
index 000000000000..1ab6c19ccc7a
--- /dev/null
+++ b/Documentation/vm/struct_page_field
@@ -0,0 +1,5 @@
+buddy_merge_skipped:
+Used to indicate this page skipped merging when added to buddy. This
+field only makes sense if the page is in Buddy and is order zero.
+It's a bug if any higher-order page in Buddy has this field set.
+Shares space with index.
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index fd1af6b9591d..7edc4e102a8e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -91,6 +91,7 @@ struct page {
pgoff_t index; /* Our offset within mapping. */
void *freelist; /* sl[aou]b first free object */
/* page_deferred_list().prev -- second tail page */
+ bool buddy_merge_skipped; /* skipped merging when added to buddy */
};
union {
diff --git a/mm/compaction.c b/mm/compaction.c
index 2c8999d027ab..fb9031fdca41 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -776,8 +776,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
* potential isolation targets.
*/
if (PageBuddy(page)) {
- unsigned long freepage_order = page_order_unsafe(page);
+ unsigned long freepage_order;
+ /*
+ * If this is a merge_skipped page, do merge now
+ * since high-order pages are needed. zone lock
+ * isn't taken for the merge_skipped check so the
+ * check could be wrong but the worst case is we
+ * lose a merge opportunity.
+ */
+ if (page_merge_was_skipped(page))
+ try_to_merge_page(page);
+
+ freepage_order = page_order_unsafe(page);
/*
* Without lock, we cannot be sure that what we got is
* a valid page order. Consider only values in the
diff --git a/mm/internal.h b/mm/internal.h
index e6bd35182dae..2bfbaae2d835 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -538,4 +538,31 @@ static inline bool is_migrate_highatomic_page(struct page *page)
}
void setup_zone_pageset(struct zone *zone);
+
+static inline bool page_merge_was_skipped(struct page *page)
+{
+ return page->buddy_merge_skipped;
+}
+
+void try_to_merge_page(struct page *page);
+
+#ifdef CONFIG_COMPACTION
+static inline bool can_skip_merge(struct zone *zone, int order)
+{
+ /* Compaction has failed in this zone, we shouldn't skip merging */
+ if (zone->compact_considered)
+ return false;
+
+ /* Only consider no_merge for order 0 pages */
+ if (order)
+ return false;
+
+ return true;
+}
+#else /* CONFIG_COMPACTION */
+static inline bool can_skip_merge(struct zone *zone, int order)
+{
+ return false;
+}
+#endif /* CONFIG_COMPACTION */
#endif /* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3cdf1e10d412..eb78014dfbde 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -730,6 +730,16 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
unsigned int order, int migratetype) {}
#endif
+static inline void set_page_merge_skipped(struct page *page)
+{
+ page->buddy_merge_skipped = true;
+}
+
+static inline void clear_page_merge_skipped(struct page *page)
+{
+ page->buddy_merge_skipped = false;
+}
+
static inline void set_page_order(struct page *page, unsigned int order)
{
set_page_private(page, order);
@@ -739,6 +749,13 @@ static inline void set_page_order(struct page *page, unsigned int order)
static inline void add_to_buddy_common(struct page *page, struct zone *zone,
unsigned int order, int mt)
{
+ /*
+ * Always clear buddy_merge_skipped when added to buddy because
+ * buddy_merge_skipped shares space with index and index could
+ * be used as migratetype for PCP pages.
+ */
+ clear_page_merge_skipped(page);
+
set_page_order(page, order);
zone->free_area[order].nr_free++;
}
@@ -769,6 +786,7 @@ static inline void remove_from_buddy(struct page *page, struct zone *zone,
list_del(&page->lru);
zone->free_area[order].nr_free--;
rmv_page_order(page);
+ clear_page_merge_skipped(page);
}
/*
@@ -839,7 +857,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
* -- nyc
*/
-static inline void __free_one_page(struct page *page,
+static inline void do_merge(struct page *page,
unsigned long pfn,
struct zone *zone, unsigned int order,
int migratetype)
@@ -851,16 +869,6 @@ static inline void __free_one_page(struct page *page,
max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
- VM_BUG_ON(!zone_is_initialized(zone));
- VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
-
- VM_BUG_ON(migratetype == -1);
- if (likely(!is_migrate_isolate(migratetype)))
- __mod_zone_freepage_state(zone, 1 << order, migratetype);
-
- VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
- VM_BUG_ON_PAGE(bad_range(zone, page), page);
-
continue_merging:
while (order < max_order - 1) {
buddy_pfn = __find_buddy_pfn(pfn, order);
@@ -933,6 +941,61 @@ static inline void __free_one_page(struct page *page,
add_to_buddy_head(page, zone, order, migratetype);
}
+void try_to_merge_page(struct page *page)
+{
+ unsigned long pfn, buddy_pfn, flags;
+ struct page *buddy;
+ struct zone *zone;
+
+ /*
+ * No need to do merging if buddy is not free.
+ * zone lock isn't taken so this could be wrong but worst case
+ * is we lose a merge opportunity.
+ */
+ pfn = page_to_pfn(page);
+ buddy_pfn = __find_buddy_pfn(pfn, 0);
+ buddy = page + (buddy_pfn - pfn);
+ if (!PageBuddy(buddy))
+ return;
+
+ zone = page_zone(page);
+ spin_lock_irqsave(&zone->lock, flags);
+ /* Verify again after taking the lock */
+ if (likely(PageBuddy(page) && page_merge_was_skipped(page) &&
+ PageBuddy(buddy))) {
+ int mt = get_pageblock_migratetype(page);
+
+ remove_from_buddy(page, zone, 0);
+ do_merge(page, pfn, zone, 0, mt);
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+static inline void __free_one_page(struct page *page,
+ unsigned long pfn,
+ struct zone *zone, unsigned int order,
+ int migratetype)
+{
+ VM_BUG_ON(!zone_is_initialized(zone));
+ VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+
+ VM_BUG_ON(migratetype == -1);
+ if (likely(!is_migrate_isolate(migratetype)))
+ __mod_zone_freepage_state(zone, 1 << order, migratetype);
+
+ VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
+ VM_BUG_ON_PAGE(bad_range(zone, page), page);
+
+ if (can_skip_merge(zone, order)) {
+ add_to_buddy_head(page, zone, 0, migratetype);
+ set_page_merge_skipped(page);
+ return;
+ }
+
+ do_merge(page, pfn, zone, order, migratetype);
+}
+
+
/*
* A bad page could be due to a number of fields. Instead of multiple branches,
* try and check multiple fields with one check. The caller must do a detailed
@@ -1183,8 +1246,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
* can be offset by reduced memory latency later. To
* avoid excessive prefetching due to large count, only
* prefetch buddy for the last pcp->batch nr of pages.
+ *
+ * If merge can be skipped, no need to prefetch buddy.
*/
- if (count > pcp->batch)
+ if (can_skip_merge(zone, 0) || count > pcp->batch)
continue;
pfn = page_to_pfn(page);
buddy_pfn = __find_buddy_pfn(pfn, 0);
--
2.14.3
On 03/20/2018 09:54 AM, Aaron Lu wrote:
> There are multiple places that add/remove a page into/from buddy,
> introduce helper functions for them.
>
> This also makes it easier to add code when a page is added/removed
> to/from buddy.
>
> Signed-off-by: Aaron Lu <[email protected]>
> ---
> mm/page_alloc.c | 65 ++++++++++++++++++++++++++++++++++-----------------------
> 1 file changed, 39 insertions(+), 26 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3db9cfb2265b..3cdf1e10d412 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -736,12 +736,41 @@ static inline void set_page_order(struct page *page, unsigned int order)
> __SetPageBuddy(page);
> }
>
> +static inline void add_to_buddy_common(struct page *page, struct zone *zone,
> + unsigned int order, int mt)
> +{
> + set_page_order(page, order);
> + zone->free_area[order].nr_free++;
The 'mt' parameter seems unused here. Otherwise
Acked-by: Vlastimil Babka <[email protected]>
> +}
> +
> +static inline void add_to_buddy_head(struct page *page, struct zone *zone,
> + unsigned int order, int mt)
> +{
> + add_to_buddy_common(page, zone, order, mt);
> + list_add(&page->lru, &zone->free_area[order].free_list[mt]);
> +}
> +
> +static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
> + unsigned int order, int mt)
> +{
> + add_to_buddy_common(page, zone, order, mt);
> + list_add_tail(&page->lru, &zone->free_area[order].free_list[mt]);
> +}
> +
> static inline void rmv_page_order(struct page *page)
> {
> __ClearPageBuddy(page);
> set_page_private(page, 0);
> }
>
> +static inline void remove_from_buddy(struct page *page, struct zone *zone,
> + unsigned int order)
> +{
> + list_del(&page->lru);
> + zone->free_area[order].nr_free--;
> + rmv_page_order(page);
> +}
> +
> /*
> * This function checks whether a page is free && is the buddy
> * we can do coalesce a page and its buddy if
> @@ -845,13 +874,10 @@ static inline void __free_one_page(struct page *page,
> * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
> * merge with it and move up one order.
> */
> - if (page_is_guard(buddy)) {
> + if (page_is_guard(buddy))
> clear_page_guard(zone, buddy, order, migratetype);
> - } else {
> - list_del(&buddy->lru);
> - zone->free_area[order].nr_free--;
> - rmv_page_order(buddy);
> - }
> + else
> + remove_from_buddy(buddy, zone, order);
> combined_pfn = buddy_pfn & pfn;
> page = page + (combined_pfn - pfn);
> pfn = combined_pfn;
> @@ -883,8 +909,6 @@ static inline void __free_one_page(struct page *page,
> }
>
> done_merging:
> - set_page_order(page, order);
> -
> /*
> * If this is not the largest possible page, check if the buddy
> * of the next-highest order is free. If it is, it's possible
> @@ -901,15 +925,12 @@ static inline void __free_one_page(struct page *page,
> higher_buddy = higher_page + (buddy_pfn - combined_pfn);
> if (pfn_valid_within(buddy_pfn) &&
> page_is_buddy(higher_page, higher_buddy, order + 1)) {
> - list_add_tail(&page->lru,
> - &zone->free_area[order].free_list[migratetype]);
> - goto out;
> + add_to_buddy_tail(page, zone, order, migratetype);
> + return;
> }
> }
>
> - list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
> -out:
> - zone->free_area[order].nr_free++;
> + add_to_buddy_head(page, zone, order, migratetype);
> }
>
> /*
> @@ -1731,9 +1752,7 @@ static inline void expand(struct zone *zone, struct page *page,
> if (set_page_guard(zone, &page[size], high, migratetype))
> continue;
>
> - list_add(&page[size].lru, &area->free_list[migratetype]);
> - area->nr_free++;
> - set_page_order(&page[size], high);
> + add_to_buddy_head(&page[size], zone, high, migratetype);
> }
> }
>
> @@ -1877,9 +1896,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
> struct page, lru);
> if (!page)
> continue;
> - list_del(&page->lru);
> - rmv_page_order(page);
> - area->nr_free--;
> + remove_from_buddy(page, zone, current_order);
> expand(zone, page, order, current_order, area, migratetype);
> set_pcppage_migratetype(page, migratetype);
> return page;
> @@ -2795,9 +2812,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
> }
>
> /* Remove page from free list */
> - list_del(&page->lru);
> - zone->free_area[order].nr_free--;
> - rmv_page_order(page);
> + remove_from_buddy(page, zone, order);
>
> /*
> * Set the pageblock if the isolated page is at least half of a
> @@ -7886,9 +7901,7 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
> pr_info("remove from free list %lx %d %lx\n",
> pfn, 1 << order, end_pfn);
> #endif
> - list_del(&page->lru);
> - rmv_page_order(page);
> - zone->free_area[order].nr_free--;
> + remove_from_buddy(page, zone, order);
> for (i = 0; i < (1 << order); i++)
> SetPageReserved((page+i));
> pfn += (1 << order);
>
On 03/20/2018 09:54 AM, Aaron Lu wrote:
> Running will-it-scale/page_fault1 process mode workload on a 2 sockets
> Intel Skylake server showed severe lock contention of zone->lock, as
> high as about 80%(42% on allocation path and 35% on free path) CPU
> cycles are burnt spinning. With perf, the most time consuming part inside
> that lock on free path is cache missing on page structures, mostly on
> the to-be-freed page's buddy due to merging.
But why, with all the prefetching in place?
> One way to avoid this overhead is not do any merging at all for order-0
> pages. With this approach, the lock contention for zone->lock on free
> path dropped to 1.1% but allocation side still has as high as 42% lock
> contention. In the meantime, the dropped lock contention on free side
> doesn't translate to performance increase, instead, it's consumed by
> increased lock contention of the per node lru_lock(rose from 5% to 37%)
> and the final performance slightly dropped about 1%.
>
> Though performance dropped a little, it almost eliminated zone lock
> contention on free path and it is the foundation for the next patch
> that eliminates zone lock contention for allocation path.
Not thrilled about such a disruptive change in the name of a
microbenchmark :/ Shouldn't the pcplists normally hide the overhead?
If not, wouldn't it make more sense to turn zone->lock into a range lock?
> A new document file called "struct_page_filed" is added to explain
> the newly reused field in "struct page".
Sounds rather ad-hoc for a single field, I'd rather document it via
comments.
> Suggested-by: Dave Hansen <[email protected]>
> Signed-off-by: Aaron Lu <[email protected]>
> ---
> Documentation/vm/struct_page_field | 5 +++
> include/linux/mm_types.h | 1 +
> mm/compaction.c | 13 +++++-
> mm/internal.h | 27 ++++++++++++
> mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++-----
> 5 files changed, 122 insertions(+), 13 deletions(-)
> create mode 100644 Documentation/vm/struct_page_field
>
> diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
> new file mode 100644
> index 000000000000..1ab6c19ccc7a
> --- /dev/null
> +++ b/Documentation/vm/struct_page_field
> @@ -0,0 +1,5 @@
> +buddy_merge_skipped:
> +Used to indicate this page skipped merging when added to buddy. This
> +field only makes sense if the page is in Buddy and is order zero.
> +It's a bug if any higher order pages in Buddy has this field set.
> +Shares space with index.
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index fd1af6b9591d..7edc4e102a8e 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -91,6 +91,7 @@ struct page {
> pgoff_t index; /* Our offset within mapping. */
> void *freelist; /* sl[aou]b first free object */
> /* page_deferred_list().prev -- second tail page */
> + bool buddy_merge_skipped; /* skipped merging when added to buddy */
> };
>
> union {
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 2c8999d027ab..fb9031fdca41 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -776,8 +776,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * potential isolation targets.
> */
> if (PageBuddy(page)) {
> - unsigned long freepage_order = page_order_unsafe(page);
> + unsigned long freepage_order;
>
> + /*
> + * If this is a merge_skipped page, do merge now
> + * since high-order pages are needed. zone lock
> + * isn't taken for the merge_skipped check so the
> + * check could be wrong but the worst case is we
> + * lose a merge opportunity.
> + */
> + if (page_merge_was_skipped(page))
> + try_to_merge_page(page);
> +
> + freepage_order = page_order_unsafe(page);
> /*
> * Without lock, we cannot be sure that what we got is
> * a valid page order. Consider only values in the
> diff --git a/mm/internal.h b/mm/internal.h
> index e6bd35182dae..2bfbaae2d835 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -538,4 +538,31 @@ static inline bool is_migrate_highatomic_page(struct page *page)
> }
>
> void setup_zone_pageset(struct zone *zone);
> +
> +static inline bool page_merge_was_skipped(struct page *page)
> +{
> + return page->buddy_merge_skipped;
> +}
> +
> +void try_to_merge_page(struct page *page);
> +
> +#ifdef CONFIG_COMPACTION
> +static inline bool can_skip_merge(struct zone *zone, int order)
> +{
> + /* Compaction has failed in this zone, we shouldn't skip merging */
> + if (zone->compact_considered)
> + return false;
> +
> + /* Only consider no_merge for order 0 pages */
> + if (order)
> + return false;
> +
> + return true;
> +}
> +#else /* CONFIG_COMPACTION */
> +static inline bool can_skip_merge(struct zone *zone, int order)
> +{
> + return false;
> +}
> +#endif /* CONFIG_COMPACTION */
> #endif /* __MM_INTERNAL_H */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3cdf1e10d412..eb78014dfbde 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -730,6 +730,16 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
> unsigned int order, int migratetype) {}
> #endif
>
> +static inline void set_page_merge_skipped(struct page *page)
> +{
> + page->buddy_merge_skipped = true;
> +}
> +
> +static inline void clear_page_merge_skipped(struct page *page)
> +{
> + page->buddy_merge_skipped = false;
> +}
> +
> static inline void set_page_order(struct page *page, unsigned int order)
> {
> set_page_private(page, order);
> @@ -739,6 +749,13 @@ static inline void set_page_order(struct page *page, unsigned int order)
> static inline void add_to_buddy_common(struct page *page, struct zone *zone,
> unsigned int order, int mt)
> {
> + /*
> + * Always clear buddy_merge_skipped when added to buddy because
> + * buddy_merge_skipped shares space with index and index could
> + * be used as migratetype for PCP pages.
> + */
> + clear_page_merge_skipped(page);
> +
> set_page_order(page, order);
> zone->free_area[order].nr_free++;
> }
> @@ -769,6 +786,7 @@ static inline void remove_from_buddy(struct page *page, struct zone *zone,
> list_del(&page->lru);
> zone->free_area[order].nr_free--;
> rmv_page_order(page);
> + clear_page_merge_skipped(page);
> }
>
> /*
> @@ -839,7 +857,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
> * -- nyc
> */
>
> -static inline void __free_one_page(struct page *page,
> +static inline void do_merge(struct page *page,
> unsigned long pfn,
> struct zone *zone, unsigned int order,
> int migratetype)
> @@ -851,16 +869,6 @@ static inline void __free_one_page(struct page *page,
>
> max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
>
> - VM_BUG_ON(!zone_is_initialized(zone));
> - VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> -
> - VM_BUG_ON(migratetype == -1);
> - if (likely(!is_migrate_isolate(migratetype)))
> - __mod_zone_freepage_state(zone, 1 << order, migratetype);
> -
> - VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> - VM_BUG_ON_PAGE(bad_range(zone, page), page);
> -
> continue_merging:
> while (order < max_order - 1) {
> buddy_pfn = __find_buddy_pfn(pfn, order);
> @@ -933,6 +941,61 @@ static inline void __free_one_page(struct page *page,
> add_to_buddy_head(page, zone, order, migratetype);
> }
>
> +void try_to_merge_page(struct page *page)
> +{
> + unsigned long pfn, buddy_pfn, flags;
> + struct page *buddy;
> + struct zone *zone;
> +
> + /*
> + * No need to do merging if buddy is not free.
> + * zone lock isn't taken so this could be wrong but worst case
> + * is we lose a merge opportunity.
> + */
> + pfn = page_to_pfn(page);
> + buddy_pfn = __find_buddy_pfn(pfn, 0);
> + buddy = page + (buddy_pfn - pfn);
> + if (!PageBuddy(buddy))
> + return;
> +
> + zone = page_zone(page);
> + spin_lock_irqsave(&zone->lock, flags);
> + /* Verify again after taking the lock */
> + if (likely(PageBuddy(page) && page_merge_was_skipped(page) &&
> + PageBuddy(buddy))) {
> + int mt = get_pageblock_migratetype(page);
> +
> + remove_from_buddy(page, zone, 0);
> + do_merge(page, pfn, zone, 0, mt);
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> +}
> +
> +static inline void __free_one_page(struct page *page,
> + unsigned long pfn,
> + struct zone *zone, unsigned int order,
> + int migratetype)
> +{
> + VM_BUG_ON(!zone_is_initialized(zone));
> + VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> +
> + VM_BUG_ON(migratetype == -1);
> + if (likely(!is_migrate_isolate(migratetype)))
> + __mod_zone_freepage_state(zone, 1 << order, migratetype);
> +
> + VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> + VM_BUG_ON_PAGE(bad_range(zone, page), page);
> +
> + if (can_skip_merge(zone, order)) {
> + add_to_buddy_head(page, zone, 0, migratetype);
> + set_page_merge_skipped(page);
> + return;
> + }
> +
> + do_merge(page, pfn, zone, order, migratetype);
> +}
> +
> +
> /*
> * A bad page could be due to a number of fields. Instead of multiple branches,
> * try and check multiple fields with one check. The caller must do a detailed
> @@ -1183,8 +1246,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> * can be offset by reduced memory latency later. To
> * avoid excessive prefetching due to large count, only
> * prefetch buddy for the last pcp->batch nr of pages.
> + *
> + * If merge can be skipped, no need to prefetch buddy.
> */
> - if (count > pcp->batch)
> + if (can_skip_merge(zone, 0) || count > pcp->batch)
> continue;
> pfn = page_to_pfn(page);
> buddy_pfn = __find_buddy_pfn(pfn, 0);
>
On Tue, Mar 20, 2018 at 12:35:46PM +0100, Vlastimil Babka wrote:
> On 03/20/2018 09:54 AM, Aaron Lu wrote:
> > There are multiple places that add/remove a page into/from buddy,
> > introduce helper functions for them.
> >
> > This also makes it easier to add code when a page is added/removed
> > to/from buddy.
> >
> > Signed-off-by: Aaron Lu <[email protected]>
> > ---
> > mm/page_alloc.c | 65 ++++++++++++++++++++++++++++++++++-----------------------
> > 1 file changed, 39 insertions(+), 26 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3db9cfb2265b..3cdf1e10d412 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -736,12 +736,41 @@ static inline void set_page_order(struct page *page, unsigned int order)
> > __SetPageBuddy(page);
> > }
> >
> > +static inline void add_to_buddy_common(struct page *page, struct zone *zone,
> > + unsigned int order, int mt)
> > +{
> > + set_page_order(page, order);
> > + zone->free_area[order].nr_free++;
>
> The 'mt' parameter seems unused here. Otherwise
Ah right, I forgot to remove it...
>
> Acked-by: Vlastimil Babka <[email protected]>
Thanks!
>
> > +}
> > +
> > +static inline void add_to_buddy_head(struct page *page, struct zone *zone,
> > + unsigned int order, int mt)
> > +{
> > + add_to_buddy_common(page, zone, order, mt);
> > + list_add(&page->lru, &zone->free_area[order].free_list[mt]);
> > +}
> > +
> > +static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
> > + unsigned int order, int mt)
> > +{
> > + add_to_buddy_common(page, zone, order, mt);
> > + list_add_tail(&page->lru, &zone->free_area[order].free_list[mt]);
> > +}
> > +
> > static inline void rmv_page_order(struct page *page)
> > {
> > __ClearPageBuddy(page);
> > set_page_private(page, 0);
> > }
> >
> > +static inline void remove_from_buddy(struct page *page, struct zone *zone,
> > + unsigned int order)
> > +{
> > + list_del(&page->lru);
> > + zone->free_area[order].nr_free--;
> > + rmv_page_order(page);
> > +}
> > +
> > /*
> > * This function checks whether a page is free && is the buddy
> > * we can do coalesce a page and its buddy if
> > @@ -845,13 +874,10 @@ static inline void __free_one_page(struct page *page,
> > * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
> > * merge with it and move up one order.
> > */
> > - if (page_is_guard(buddy)) {
> > + if (page_is_guard(buddy))
> > clear_page_guard(zone, buddy, order, migratetype);
> > - } else {
> > - list_del(&buddy->lru);
> > - zone->free_area[order].nr_free--;
> > - rmv_page_order(buddy);
> > - }
> > + else
> > + remove_from_buddy(buddy, zone, order);
> > combined_pfn = buddy_pfn & pfn;
> > page = page + (combined_pfn - pfn);
> > pfn = combined_pfn;
> > @@ -883,8 +909,6 @@ static inline void __free_one_page(struct page *page,
> > }
> >
> > done_merging:
> > - set_page_order(page, order);
> > -
> > /*
> > * If this is not the largest possible page, check if the buddy
> > * of the next-highest order is free. If it is, it's possible
> > @@ -901,15 +925,12 @@ static inline void __free_one_page(struct page *page,
> > higher_buddy = higher_page + (buddy_pfn - combined_pfn);
> > if (pfn_valid_within(buddy_pfn) &&
> > page_is_buddy(higher_page, higher_buddy, order + 1)) {
> > - list_add_tail(&page->lru,
> > - &zone->free_area[order].free_list[migratetype]);
> > - goto out;
> > + add_to_buddy_tail(page, zone, order, migratetype);
> > + return;
> > }
> > }
> >
> > - list_add(&page->lru, &zone->free_area[order].free_list[migratetype]);
> > -out:
> > - zone->free_area[order].nr_free++;
> > + add_to_buddy_head(page, zone, order, migratetype);
> > }
> >
> > /*
> > @@ -1731,9 +1752,7 @@ static inline void expand(struct zone *zone, struct page *page,
> > if (set_page_guard(zone, &page[size], high, migratetype))
> > continue;
> >
> > - list_add(&page[size].lru, &area->free_list[migratetype]);
> > - area->nr_free++;
> > - set_page_order(&page[size], high);
> > + add_to_buddy_head(&page[size], zone, high, migratetype);
> > }
> > }
> >
> > @@ -1877,9 +1896,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
> > struct page, lru);
> > if (!page)
> > continue;
> > - list_del(&page->lru);
> > - rmv_page_order(page);
> > - area->nr_free--;
> > + remove_from_buddy(page, zone, current_order);
> > expand(zone, page, order, current_order, area, migratetype);
> > set_pcppage_migratetype(page, migratetype);
> > return page;
> > @@ -2795,9 +2812,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
> > }
> >
> > /* Remove page from free list */
> > - list_del(&page->lru);
> > - zone->free_area[order].nr_free--;
> > - rmv_page_order(page);
> > + remove_from_buddy(page, zone, order);
> >
> > /*
> > * Set the pageblock if the isolated page is at least half of a
> > @@ -7886,9 +7901,7 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
> > pr_info("remove from free list %lx %d %lx\n",
> > pfn, 1 << order, end_pfn);
> > #endif
> > - list_del(&page->lru);
> > - rmv_page_order(page);
> > - zone->free_area[order].nr_free--;
> > + remove_from_buddy(page, zone, order);
> > for (i = 0; i < (1 << order); i++)
> > SetPageReserved((page+i));
> > pfn += (1 << order);
> >
>
On Tue, Mar 20, 2018 at 12:45:50PM +0100, Vlastimil Babka wrote:
> On 03/20/2018 09:54 AM, Aaron Lu wrote:
> > Running will-it-scale/page_fault1 process mode workload on a 2 sockets
> > Intel Skylake server showed severe lock contention of zone->lock, as
> > high as about 80%(42% on allocation path and 35% on free path) CPU
> > cycles are burnt spinning. With perf, the most time consuming part inside
> > that lock on free path is cache missing on page structures, mostly on
> > the to-be-freed page's buddy due to merging.
>
> But why, with all the prefetching in place?
The prefetch is only for the order-0 buddy; if a merge happens, the
order-1 buddy will also be checked, and so on, so there are many more
cache misses in merge mode.
>
> > One way to avoid this overhead is not do any merging at all for order-0
> > pages. With this approach, the lock contention for zone->lock on free
> > path dropped to 1.1% but allocation side still has as high as 42% lock
> > contention. In the meantime, the dropped lock contention on free side
> > doesn't translate to performance increase, instead, it's consumed by
> > increased lock contention of the per node lru_lock(rose from 5% to 37%)
> > and the final performance slightly dropped about 1%.
> >
> > Though performance dropped a little, it almost eliminated zone lock
> > contention on free path and it is the foundation for the next patch
> > that eliminates zone lock contention for allocation path.
>
> Not thrilled about such disruptive change in the name of a
> microbenchmark :/ Shouldn't normally the pcplists hide the overhead?
Sadly, with the default pcp count, it didn't avoid the lock contention.
We can of course increase pcp->count to a value large enough to avoid
entering buddy and thus avoid zone->lock contention, but that would
require an admin to manually tune the value on a per-machine,
per-workload basis, I believe.
> If not, wouldn't it make more sense to turn zone->lock into a range lock?
I'm not familiar with range locks; I'll need to take a look at them,
thanks for the pointer.
>
> > A new document file called "struct_page_filed" is added to explain
> > the newly reused field in "struct page".
>
> Sounds rather ad-hoc for a single field, I'd rather document it via
> comments.
Dave would like to have a document explaining all those "struct page"
fields that are repurposed under different scenarios, and this is the
very start of that document :-)
I probably should have explained the intent of the document more clearly.
Thanks for taking a look at this.
> > Suggested-by: Dave Hansen <[email protected]>
> > Signed-off-by: Aaron Lu <[email protected]>
> > ---
> > Documentation/vm/struct_page_field | 5 +++
> > include/linux/mm_types.h | 1 +
> > mm/compaction.c | 13 +++++-
> > mm/internal.h | 27 ++++++++++++
> > mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++-----
> > 5 files changed, 122 insertions(+), 13 deletions(-)
> > create mode 100644 Documentation/vm/struct_page_field
> >
On Tue, Mar 20, 2018 at 03:29:33PM -0700, Figo.zhang wrote:
> 2018-03-20 1:54 GMT-07:00 Aaron Lu <[email protected]>:
>
> > Profile on Intel Skylake server shows the most time consuming part
> > under zone->lock on allocation path is accessing those to-be-returned
> > page's "struct page" on the free_list inside zone->lock. One explanation
> > is, different CPUs are releasing pages to the head of free_list and
> > those page's 'struct page' may very well be cache cold for the allocating
> > CPU when it grabs these pages from free_list' head. The purpose here
> > is to avoid touching these pages one by one inside zone->lock.
> >
> > One idea is, we just take the requested number of pages off free_list
> > with something like list_cut_position() and then adjust nr_free of
> > free_area accordingly inside zone->lock and other operations like
> > clearing PageBuddy flag for these pages are done outside of zone->lock.
> >
>
> sounds good!
> your idea is reducing the lock contention in rmqueue_bulk() function by
Right, the idea is to reduce the lock held time.
> split the order-0
> freelist into two list, one is without zone->lock, other is need zone->lock?
No, not by splitting the freelist into two lists; I didn't do that.
I moved part of the work previously done inside the lock to outside it,
i.e. clearing the PageBuddy flag etc. is now done outside the lock, so
we no longer take the penalty of cache misses on those "struct page"s
while holding the lock with all other CPUs waiting.
>
> it seems that it is a big lock granularity holding the zone->lock in
> rmqueue_bulk() ,
> why not we change like it?
It is generally believed that frequently taking and dropping a lock is
worse than taking it once, doing all the needed work, and then dropping
it.
>
> static int rmqueue_bulk(struct zone *zone, unsigned int order,
> unsigned long count, struct list_head *list,
> int migratetype, bool cold)
> {
>
> for (i = 0; i < count; ++i) {
> spin_lock(&zone->lock);
> struct page *page = __rmqueue(zone, order, migratetype);
> spin_unlock(&zone->lock);
> ...
> }
In this case, spin_lock() and spin_unlock() should be outside the loop.
> __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
>
> return i;
> }
On Tue, Mar 20, 2018 at 03:58:51PM -0700, Figo.zhang wrote:
> 2018-03-20 1:54 GMT-07:00 Aaron Lu <[email protected]>:
>
> > Running will-it-scale/page_fault1 process mode workload on a 2 sockets
> > Intel Skylake server showed severe lock contention of zone->lock, as
> > high as about 80%(42% on allocation path and 35% on free path) CPU
> > cycles are burnt spinning. With perf, the most time consuming part inside
> > that lock on free path is cache missing on page structures, mostly on
> > the to-be-freed page's buddy due to merging.
> >
> > One way to avoid this overhead is not do any merging at all for order-0
> > pages. With this approach, the lock contention for zone->lock on free
> > path dropped to 1.1% but allocation side still has as high as 42% lock
> > contention. In the meantime, the dropped lock contention on free side
> > doesn't translate to performance increase, instead, it's consumed by
> > increased lock contention of the per node lru_lock(rose from 5% to 37%)
> > and the final performance slightly dropped about 1%.
> >
> > Though performance dropped a little, it almost eliminated zone lock
> > contention on free path and it is the foundation for the next patch
> > that eliminates zone lock contention for allocation path.
> >
> > A new document file called "struct_page_field" is added to explain
> > the newly reused field in "struct page".
> >
> > Suggested-by: Dave Hansen <[email protected]>
> > Signed-off-by: Aaron Lu <[email protected]>
> > ---
> > Documentation/vm/struct_page_field | 5 +++
> > include/linux/mm_types.h | 1 +
> > mm/compaction.c | 13 +++++-
> > mm/internal.h | 27 ++++++++++++
> > mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++-----
> > 5 files changed, 122 insertions(+), 13 deletions(-)
> > create mode 100644 Documentation/vm/struct_page_field
> >
> > diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
> > new file mode 100644
> > index 000000000000..1ab6c19ccc7a
> > --- /dev/null
> > +++ b/Documentation/vm/struct_page_field
> > @@ -0,0 +1,5 @@
> > +buddy_merge_skipped:
> > +Used to indicate this page skipped merging when added to buddy. This
> > +field only makes sense if the page is in Buddy and is order zero.
> > +It's a bug if any higher order pages in Buddy has this field set.
> > +Shares space with index.
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index fd1af6b9591d..7edc4e102a8e 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -91,6 +91,7 @@ struct page {
> > pgoff_t index; /* Our offset within mapping. */
> > void *freelist; /* sl[aou]b first free object */
> > /* page_deferred_list().prev -- second tail page */
> > + bool buddy_merge_skipped; /* skipped merging when added to buddy */
> > };
> >
> > union {
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 2c8999d027ab..fb9031fdca41 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -776,8 +776,19 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> > * potential isolation targets.
> > */
> > if (PageBuddy(page)) {
> > - unsigned long freepage_order = page_order_unsafe(page);
> > + unsigned long freepage_order;
> >
> > + /*
> > + * If this is a merge_skipped page, do merge now
> > + * since high-order pages are needed. zone lock
> > + * isn't taken for the merge_skipped check so the
> > + * check could be wrong but the worst case is we
> > + * lose a merge opportunity.
> > + */
> > + if (page_merge_was_skipped(page))
> > + try_to_merge_page(page);
> > +
> > + freepage_order = page_order_unsafe(page);
> > /*
> > * Without lock, we cannot be sure that what we got is
> > * a valid page order. Consider only values in the
> >
>
> when the system memory is very very low and try a lot of failures and then
If system memory is very low, there doesn't appear to be a need to do
compaction, since compaction needs enough order-0 pages to build a
high-order one.
> go into
> __alloc_pages_direct_compact() to has a opportunity to do your
> try_to_merge_page(), is it the best timing for here to
> do order-0 migration?
try_to_merge_page(), as added in this patch, doesn't do any page
migration, only merging. It will not cause page migration.
On Tue, Mar 20, 2018 at 09:21:33PM -0700, Figo.zhang wrote:
> suppose that in free_one_page() will try to merge to high order anytime ,
> but now in your patch,
> those merge has postponed when system in low memory status, it is very easy
> let system trigger
> low memory state and get poor performance.
Whether we merge or not, the amount of free memory is not affected.
On Tue, Mar 20, 2018 at 10:59:16PM -0700, Figo.zhang wrote:
> 2018-03-20 21:53 GMT-07:00 Aaron Lu <[email protected]>:
>
> > On Tue, Mar 20, 2018 at 09:21:33PM -0700, Figo.zhang wrote:
> > > suppose that in free_one_page() will try to merge to high order anytime ,
> > > but now in your patch,
> > > those merge has postponed when system in low memory status, it is very
> > easy
> > > let system trigger
> > > low memory state and get poor performance.
> >
> > Merge or not merge, the size of free memory is not affected.
> >
>
> yes, the total free memory is not impact, but will influence the higher
> order allocation.
Yes, that's correct.
On 03/20/2018 03:11 PM, Aaron Lu wrote:
> On Tue, Mar 20, 2018 at 12:45:50PM +0100, Vlastimil Babka wrote:
>> But why, with all the prefetching in place?
>
> The prefetch is just for its order 0 buddy, if merge happens, then its
> order 1 buddy will also be checked and on and on, so the cache misses
> are much more in merge mode.
I see.
>> Not thrilled about such disruptive change in the name of a
>> microbenchmark :/ Shouldn't normally the pcplists hide the overhead?
>
> Sadly, with the default pcp count, it didn't avoid the lock contention.
> We can of course increase pcp->count to a large enough value to avoid
> entering buddy and thus avoid zone->lock contention, but that would
> require admin to manually change the value on a per-machine per-workload
> basis I believe.
Well, anyone who really cares about performance has to invest some time
in tuning anyway, I believe?
>> If not, wouldn't it make more sense to turn zone->lock into a range lock?
>
> Not familiar with range lock, will need to take a look at it, thanks for
> the pointer.
The suggestion was rather quick and not well thought out. A range lock
by itself is insufficient: for merging/splitting buddies, working with
struct pages is fine because the candidate buddies are within a
MAX_ORDER range. But the freelists contain pages from the whole zone.
>>
>>> A new document file called "struct_page_field" is added to explain
>>> the newly reused field in "struct page".
>>
>> Sounds rather ad-hoc for a single field, I'd rather document it via
>> comments.
>
> Dave would like to have a document to explain all those "struct page"
> fields that are repurposed under different scenarios and this is the
> very start of the document :-)
Oh, I see.
> I probably should have explained the intent of the document more.
>
> Thanks for taking a look at this.
>
>>> Suggested-by: Dave Hansen <[email protected]>
>>> Signed-off-by: Aaron Lu <[email protected]>
>>> ---
>>> Documentation/vm/struct_page_field | 5 +++
>>> include/linux/mm_types.h | 1 +
>>> mm/compaction.c | 13 +++++-
>>> mm/internal.h | 27 ++++++++++++
>>> mm/page_alloc.c | 89 +++++++++++++++++++++++++++++++++-----
>>> 5 files changed, 122 insertions(+), 13 deletions(-)
>>> create mode 100644 Documentation/vm/struct_page_field
>>>
On 03/20/2018 09:54 AM, Aaron Lu wrote:
> Profile on Intel Skylake server shows the most time consuming part
> under zone->lock on allocation path is accessing those to-be-returned
> page's "struct page" on the free_list inside zone->lock. One explanation
> is, different CPUs are releasing pages to the head of free_list and
> those page's 'struct page' may very well be cache cold for the allocating
> CPU when it grabs these pages from free_list' head. The purpose here
> is to avoid touching these pages one by one inside zone->lock.
>
> One idea is, we just take the requested number of pages off free_list
> with something like list_cut_position() and then adjust nr_free of
> free_area accordingly inside zone->lock and other operations like
> clearing PageBuddy flag for these pages are done outside of zone->lock.
>
> list_cut_position() needs to know where to cut, that's what the new
> 'struct cluster' meant to provide. All pages on order 0's free_list
> belongs to a cluster so when a number of pages is needed, the cluster
> to which head page of free_list belongs is checked and then tail page
> of the cluster could be found. With tail page, list_cut_position() can
> be used to drop the cluster off free_list. The 'struct cluster' also has
> 'nr' to tell how many pages this cluster has so nr_free of free_area can
> be adjusted inside the lock too.
>
> This caused a race window though: from the moment zone->lock is dropped
> till these pages' PageBuddy flags get cleared, these pages are not in
> buddy but still have PageBuddy flag set.
>
> This doesn't cause problems for users that access buddy pages through
> free_list. But there are other users, like move_freepages() which is
> used to move a pageblock pages from one migratetype to another in
> fallback allocation path, will test PageBuddy flag of a page derived
> from PFN. The end result could be that for pages in the race window,
> they are moved back to free_list of another migratetype. For this
> reason, a synchronization function zone_wait_cluster_alloc() is
> introduced to wait till all pages are in correct state. This function
> is meant to be called with zone->lock held, so after this function
> returns, we do not need to worry about new pages becoming racy state.
>
> Another user is compaction, where it will scan a pageblock for
> migratable candidates. In this process, pages derived from PFN will
> be checked for PageBuddy flag to decide if it is a merge skipped page.
> To avoid a racy page getting merged back into buddy, the
> zone_wait_and_disable_cluster_alloc() function is introduced to:
> 1 disable clustered allocation by increasing zone->cluster.disable_depth;
> 2 wait till the race window pass by calling zone_wait_cluster_alloc().
> This function is also meant to be called with zone->lock held so after
> it returns, all pages are in correct state and no more cluster alloc
> will be attempted till zone_enable_cluster_alloc() is called to decrease
> zone->cluster.disable_depth.
I'm sorry, but I feel the added complexity here is simply too large to
justify the change. Especially if the motivation seems to be just the
microbenchmark. It would be better if this was motivated by a real
workload where zone lock contention was identified as the main issue,
and we would see the improvements on the workload. We could also e.g.
find out that the problem can be avoided at a different level.
Besides complexity, it may also add overhead to the non-contended case,
i.e. the atomic operations on in_progress. This goes against recent page
allocation optimizations by Mel Gorman etc.
Would perhaps prefetching the next page in freelist (in
remove_from_buddy()) help here?
Vlastimil
> The two patches could eliminate zone->lock contention entirely but at
> the same time, pgdat->lru_lock contention rose to 82%. Final performance
> increased about 8.3%.
>
> Suggested-by: Ying Huang <[email protected]>
> Signed-off-by: Aaron Lu <[email protected]>
> ---
> Documentation/vm/struct_page_field | 5 +
> include/linux/mm_types.h | 2 +
> include/linux/mmzone.h | 35 +++++
> mm/compaction.c | 4 +
> mm/internal.h | 34 +++++
> mm/page_alloc.c | 288 +++++++++++++++++++++++++++++++++++--
> 6 files changed, 360 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/vm/struct_page_field b/Documentation/vm/struct_page_field
> index 1ab6c19ccc7a..bab738ea4e0a 100644
> --- a/Documentation/vm/struct_page_field
> +++ b/Documentation/vm/struct_page_field
> @@ -3,3 +3,8 @@ Used to indicate this page skipped merging when added to buddy. This
> field only makes sense if the page is in Buddy and is order zero.
> It's a bug if any higher order pages in Buddy has this field set.
> Shares space with index.
> +
> +cluster:
> +Order 0 Buddy pages are grouped in cluster on free_list to speed up
> +allocation. This field stores the cluster pointer for them.
> +Shares space with mapping.
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 7edc4e102a8e..49fe9d755a7c 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -84,6 +84,8 @@ struct page {
> void *s_mem; /* slab first object */
> atomic_t compound_mapcount; /* first tail page */
> /* page_deferred_list().next -- second tail page */
> +
> + struct cluster *cluster; /* order 0 cluster this page belongs to */
> };
>
> /* Second double word */
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7522a6987595..09ba9d3cc385 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -355,6 +355,40 @@ enum zone_type {
>
> #ifndef __GENERATING_BOUNDS_H
>
> +struct cluster {
> + struct page *tail; /* tail page of the cluster */
> + int nr; /* how many pages are in this cluster */
> +};
> +
> +struct order0_cluster {
> + /* order 0 cluster array, dynamically allocated */
> + struct cluster *array;
> + /*
> + * order 0 cluster array length, also used to indicate if cluster
> + * allocation is enabled for this zone(cluster allocation is disabled
> + * for small zones whose batch size is smaller than 1, like DMA zone)
> + */
> + int len;
> + /*
> + * smallest position from where we search for an
> + * empty cluster from the cluster array
> + */
> + int zero_bit;
> + /* bitmap used to quickly locate an empty cluster from cluster array */
> + unsigned long *bitmap;
> +
> + /* disable cluster allocation to avoid new pages becoming racy state. */
> + unsigned long disable_depth;
> +
> + /*
> + * used to indicate if there are pages allocated in cluster mode
> + * still in racy state. Caller with zone->lock held could use helper
> + * function zone_wait_cluster_alloc() to wait all such pages to exit
> + * the race window.
> + */
> + atomic_t in_progress;
> +};
> +
> struct zone {
> /* Read-mostly fields */
>
> @@ -459,6 +493,7 @@ struct zone {
>
> /* free areas of different sizes */
> struct free_area free_area[MAX_ORDER];
> + struct order0_cluster cluster;
>
> /* zone flags, see below */
> unsigned long flags;
> diff --git a/mm/compaction.c b/mm/compaction.c
> index fb9031fdca41..e71fa82786a1 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1601,6 +1601,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
>
> migrate_prep_local();
>
> + zone_wait_and_disable_cluster_alloc(zone);
> +
> while ((ret = compact_finished(zone, cc)) == COMPACT_CONTINUE) {
> int err;
>
> @@ -1699,6 +1701,8 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
> zone->compact_cached_free_pfn = free_pfn;
> }
>
> + zone_enable_cluster_alloc(zone);
> +
> count_compact_events(COMPACTMIGRATE_SCANNED, cc->total_migrate_scanned);
> count_compact_events(COMPACTFREE_SCANNED, cc->total_free_scanned);
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 2bfbaae2d835..1b0535af1b49 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -557,12 +557,46 @@ static inline bool can_skip_merge(struct zone *zone, int order)
> if (order)
> return false;
>
> + /*
> + * Clustered allocation is only disabled when high-order pages
> + * are needed, e.g. in compaction and CMA alloc, so we should
> + * also skip merging in that case.
> + */
> + if (zone->cluster.disable_depth)
> + return false;
> +
> return true;
> }
> +
> +static inline void zone_wait_cluster_alloc(struct zone *zone)
> +{
> + while (atomic_read(&zone->cluster.in_progress))
> + cpu_relax();
> +}
> +
> +static inline void zone_wait_and_disable_cluster_alloc(struct zone *zone)
> +{
> + unsigned long flags;
> + spin_lock_irqsave(&zone->lock, flags);
> + zone->cluster.disable_depth++;
> + zone_wait_cluster_alloc(zone);
> + spin_unlock_irqrestore(&zone->lock, flags);
> +}
> +
> +static inline void zone_enable_cluster_alloc(struct zone *zone)
> +{
> + unsigned long flags;
> + spin_lock_irqsave(&zone->lock, flags);
> + zone->cluster.disable_depth--;
> + spin_unlock_irqrestore(&zone->lock, flags);
> +}
> #else /* CONFIG_COMPACTION */
> static inline bool can_skip_merge(struct zone *zone, int order)
> {
> return false;
> }
> +static inline void zone_wait_cluster_alloc(struct zone *zone) {}
> +static inline void zone_wait_and_disable_cluster_alloc(struct zone *zone) {}
> +static inline void zone_enable_cluster_alloc(struct zone *zone) {}
> #endif /* CONFIG_COMPACTION */
> #endif /* __MM_INTERNAL_H */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eb78014dfbde..ac93833a2877 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -746,6 +746,82 @@ static inline void set_page_order(struct page *page, unsigned int order)
> __SetPageBuddy(page);
> }
>
> +static inline struct cluster *new_cluster(struct zone *zone, int nr,
> + struct page *tail)
> +{
> + struct order0_cluster *cluster = &zone->cluster;
> + int n = find_next_zero_bit(cluster->bitmap, cluster->len, cluster->zero_bit);
> + if (n == cluster->len) {
> + printk_ratelimited("node%d zone %s cluster used up\n",
> + zone->zone_pgdat->node_id, zone->name);
> + return NULL;
> + }
> + cluster->zero_bit = n;
> + set_bit(n, cluster->bitmap);
> + cluster->array[n].nr = nr;
> + cluster->array[n].tail = tail;
> + return &cluster->array[n];
> +}
> +
> +static inline struct cluster *add_to_cluster_common(struct page *page,
> + struct zone *zone, struct page *neighbor)
> +{
> + struct cluster *c;
> +
> + if (neighbor) {
> + int batch = this_cpu_ptr(zone->pageset)->pcp.batch;
> + c = neighbor->cluster;
> + if (c && c->nr < batch) {
> + page->cluster = c;
> + c->nr++;
> + return c;
> + }
> + }
> +
> + c = new_cluster(zone, 1, page);
> + if (unlikely(!c))
> + return NULL;
> +
> + page->cluster = c;
> + return c;
> +}
> +
> +/*
> + * Add this page to the cluster where the previous head page belongs.
> + * Called after page is added to free_list(and becoming the new head).
> + */
> +static inline void add_to_cluster_head(struct page *page, struct zone *zone,
> + int order, int mt)
> +{
> + struct page *neighbor;
> +
> + if (order || !zone->cluster.len)
> + return;
> +
> + neighbor = page->lru.next == &zone->free_area[0].free_list[mt] ?
> + NULL : list_entry(page->lru.next, struct page, lru);
> + add_to_cluster_common(page, zone, neighbor);
> +}
> +
> +/*
> + * Add this page to the cluster where the previous tail page belongs.
> + * Called after page is added to free_list(and becoming the new tail).
> + */
> +static inline void add_to_cluster_tail(struct page *page, struct zone *zone,
> + int order, int mt)
> +{
> + struct page *neighbor;
> + struct cluster *c;
> +
> + if (order || !zone->cluster.len)
> + return;
> +
> + neighbor = page->lru.prev == &zone->free_area[0].free_list[mt] ?
> + NULL : list_entry(page->lru.prev, struct page, lru);
> + c = add_to_cluster_common(page, zone, neighbor);
> + c->tail = page;
> +}
> +
> static inline void add_to_buddy_common(struct page *page, struct zone *zone,
> unsigned int order, int mt)
> {
> @@ -765,6 +841,7 @@ static inline void add_to_buddy_head(struct page *page, struct zone *zone,
> {
> add_to_buddy_common(page, zone, order, mt);
> list_add(&page->lru, &zone->free_area[order].free_list[mt]);
> + add_to_cluster_head(page, zone, order, mt);
> }
>
> static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
> @@ -772,6 +849,7 @@ static inline void add_to_buddy_tail(struct page *page, struct zone *zone,
> {
> add_to_buddy_common(page, zone, order, mt);
> list_add_tail(&page->lru, &zone->free_area[order].free_list[mt]);
> + add_to_cluster_tail(page, zone, order, mt);
> }
>
> static inline void rmv_page_order(struct page *page)
> @@ -780,9 +858,29 @@ static inline void rmv_page_order(struct page *page)
> set_page_private(page, 0);
> }
>
> +/* called before removed from free_list */
> +static inline void remove_from_cluster(struct page *page, struct zone *zone)
> +{
> + struct cluster *c = page->cluster;
> + if (!c)
> + return;
> +
> + page->cluster = NULL;
> + c->nr--;
> + if (!c->nr) {
> + int bit = c - zone->cluster.array;
> + c->tail = NULL;
> + clear_bit(bit, zone->cluster.bitmap);
> + if (bit < zone->cluster.zero_bit)
> + zone->cluster.zero_bit = bit;
> + } else if (page == c->tail)
> + c->tail = list_entry(page->lru.prev, struct page, lru);
> +}
> +
> static inline void remove_from_buddy(struct page *page, struct zone *zone,
> unsigned int order)
> {
> + remove_from_cluster(page, zone);
> list_del(&page->lru);
> zone->free_area[order].nr_free--;
> rmv_page_order(page);
> @@ -2025,6 +2123,17 @@ static int move_freepages(struct zone *zone,
> if (num_movable)
> *num_movable = 0;
>
> + /*
> + * Cluster alloced pages may have their PageBuddy flag unclear yet
> + * after dropping zone->lock in rmqueue_bulk() and steal here could
> + * move them back to free_list. So it's necessary to wait till all
> + * those pages have their flags properly cleared.
> + *
> + * We do not need to disable cluster alloc though since we already
> + * held zone->lock and no allocation could happen.
> + */
> + zone_wait_cluster_alloc(zone);
> +
> for (page = start_page; page <= end_page;) {
> if (!pfn_valid_within(page_to_pfn(page))) {
> page++;
> @@ -2049,8 +2158,10 @@ static int move_freepages(struct zone *zone,
> }
>
> order = page_order(page);
> + remove_from_cluster(page, zone);
> list_move(&page->lru,
> &zone->free_area[order].free_list[migratetype]);
> + add_to_cluster_head(page, zone, order, migratetype);
> page += 1 << order;
> pages_moved += 1 << order;
> }
> @@ -2199,7 +2310,9 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>
> single_page:
> area = &zone->free_area[current_order];
> + remove_from_cluster(page, zone);
> list_move(&page->lru, &area->free_list[start_type]);
> + add_to_cluster_head(page, zone, current_order, start_type);
> }
>
> /*
> @@ -2460,6 +2573,145 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype)
> return page;
> }
>
> +static int __init zone_order0_cluster_init(void)
> +{
> + struct zone *zone;
> +
> + for_each_zone(zone) {
> + int len, mt, batch;
> + unsigned long flags;
> + struct order0_cluster *cluster;
> +
> + if (!managed_zone(zone))
> + continue;
> +
> + /* no need to enable cluster allocation for batch<=1 zone */
> + preempt_disable();
> + batch = this_cpu_ptr(zone->pageset)->pcp.batch;
> + preempt_enable();
> + if (batch <= 1)
> + continue;
> +
> + cluster = &zone->cluster;
> + /* FIXME: possible overflow of int type */
> + len = DIV_ROUND_UP(zone->managed_pages, batch);
> + cluster->array = vzalloc(len * sizeof(struct cluster));
> + if (!cluster->array)
> + return -ENOMEM;
> + cluster->bitmap = vzalloc(DIV_ROUND_UP(len, BITS_PER_LONG) *
> + sizeof(unsigned long));
> + if (!cluster->bitmap)
> + return -ENOMEM;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> + cluster->len = len;
> + for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
> + struct page *page;
> + list_for_each_entry_reverse(page,
> + &zone->free_area[0].free_list[mt], lru)
> + add_to_cluster_head(page, zone, 0, mt);
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> + }
> +
> + return 0;
> +}
> +subsys_initcall(zone_order0_cluster_init);
> +
> +static inline int __rmqueue_bulk_cluster(struct zone *zone, unsigned long count,
> + struct list_head *list, int mt)
> +{
> + struct list_head *head = &zone->free_area[0].free_list[mt];
> + int nr = 0;
> +
> + while (nr < count) {
> + struct page *head_page;
> + struct list_head *tail, tmp_list;
> + struct cluster *c;
> + int bit;
> +
> + head_page = list_first_entry_or_null(head, struct page, lru);
> + if (!head_page || !head_page->cluster)
> + break;
> +
> + c = head_page->cluster;
> + tail = &c->tail->lru;
> +
> + /* drop the cluster off free_list and attach to list */
> + list_cut_position(&tmp_list, head, tail);
> + list_splice_tail(&tmp_list, list);
> +
> + nr += c->nr;
> + zone->free_area[0].nr_free -= c->nr;
> +
> + /* this cluster is empty now */
> + c->tail = NULL;
> + c->nr = 0;
> + bit = c - zone->cluster.array;
> + clear_bit(bit, zone->cluster.bitmap);
> + if (bit < zone->cluster.zero_bit)
> + zone->cluster.zero_bit = bit;
> + }
> +
> + return nr;
> +}
> +
> +static inline int rmqueue_bulk_cluster(struct zone *zone, unsigned int order,
> + unsigned long count, struct list_head *list,
> + int migratetype)
> +{
> + int alloced;
> + struct page *page;
> +
> + /*
> + * Cluster alloc races with merging so don't try cluster alloc when we
> + * can't skip merging. Note that can_skip_merge() keeps the same return
> + * value from here till all pages have their flags properly processed,
> + * i.e. the end of the function where in_progress is incremented, even
> + * we have dropped the lock in the middle because the only place that
> + * can change can_skip_merge()'s return value is compaction code and
> + * compaction needs to wait on in_progress.
> + */
> + if (!can_skip_merge(zone, 0))
> + return 0;
> +
> + /* Cluster alloc is disabled, mostly compaction is already in progress */
> + if (zone->cluster.disable_depth)
> + return 0;
> +
> + /* Cluster alloc is disabled for this zone */
> + if (unlikely(!zone->cluster.len))
> + return 0;
> +
> + alloced = __rmqueue_bulk_cluster(zone, count, list, migratetype);
> + if (!alloced)
> + return 0;
> +
> + /*
> + * Cache miss on page structure could slow things down
> + * dramatically so accessing these alloced pages without
> + * holding lock for better performance.
> + *
> + * Since these pages still have PageBuddy set, there is a race
> + * window between now and when PageBuddy is cleared for them
> + * below. Any operation that would scan a pageblock and check
> + * PageBuddy(page), e.g. compaction, will need to wait till all
> + * such pages are properly processed. in_progress is used for
> + * such purpose so increase it now before dropping the lock.
> + */
> + atomic_inc(&zone->cluster.in_progress);
> + spin_unlock(&zone->lock);
> +
> + list_for_each_entry(page, list, lru) {
> + rmv_page_order(page);
> + page->cluster = NULL;
> + set_pcppage_migratetype(page, migratetype);
> + }
> + atomic_dec(&zone->cluster.in_progress);
> +
> + return alloced;
> +}
> +
> /*
> * Obtain a specified number of elements from the buddy allocator, all under
> * a single hold of the lock, for efficiency. Add them to the supplied list.
> @@ -2469,17 +2721,23 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> unsigned long count, struct list_head *list,
> int migratetype)
> {
> - int i, alloced = 0;
> + int i, alloced;
> + struct page *page, *tmp;
>
> spin_lock(&zone->lock);
> - for (i = 0; i < count; ++i) {
> - struct page *page = __rmqueue(zone, order, migratetype);
> + alloced = rmqueue_bulk_cluster(zone, order, count, list, migratetype);
> + if (alloced > 0) {
> + if (alloced >= count)
> + goto out;
> + else
> + spin_lock(&zone->lock);
> + }
> +
> + for (; alloced < count; alloced++) {
> + page = __rmqueue(zone, order, migratetype);
> if (unlikely(page == NULL))
> break;
>
> - if (unlikely(check_pcp_refill(page)))
> - continue;
> -
> /*
> * Split buddy pages returned by expand() are received here in
> * physical page order. The page is added to the tail of
> @@ -2491,7 +2749,18 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> * pages are ordered properly.
> */
> list_add_tail(&page->lru, list);
> - alloced++;
> + }
> + spin_unlock(&zone->lock);
> +
> +out:
> + i = alloced;
> + list_for_each_entry_safe(page, tmp, list, lru) {
> + if (unlikely(check_pcp_refill(page))) {
> + list_del(&page->lru);
> + alloced--;
> + continue;
> + }
> +
> if (is_migrate_cma(get_pcppage_migratetype(page)))
> __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
> -(1 << order));
> @@ -2504,7 +2773,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> * pages added to the pcp list.
> */
> __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
> - spin_unlock(&zone->lock);
> return alloced;
> }
>
> @@ -7744,6 +8012,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> unsigned long outer_start, outer_end;
> unsigned int order;
> int ret = 0;
> + struct zone *zone = page_zone(pfn_to_page(start));
>
> struct compact_control cc = {
> .nr_migratepages = 0,
> @@ -7786,6 +8055,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> if (ret)
> return ret;
>
> + zone_wait_and_disable_cluster_alloc(zone);
> /*
> * In case of -EBUSY, we'd like to know which page causes problem.
> * So, just fall through. test_pages_isolated() has a tracepoint
> @@ -7868,6 +8138,8 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> done:
> undo_isolate_page_range(pfn_max_align_down(start),
> pfn_max_align_up(end), migratetype);
> +
> + zone_enable_cluster_alloc(zone);
> return ret;
> }
>
>
Hi Vlastimil,
Thanks for taking the time to review the patch, I appreciate that.
On Wed, Mar 21, 2018 at 01:55:01PM +0100, Vlastimil Babka wrote:
> On 03/20/2018 09:54 AM, Aaron Lu wrote:
> > Profile on Intel Skylake server shows the most time consuming part
> > under zone->lock on the allocation path is accessing the to-be-returned
> > pages' "struct page" on the free_list inside zone->lock. One explanation
> > is, different CPUs are releasing pages to the head of free_list and
> > those pages' 'struct page' may very well be cache cold for the allocating
> > CPU when it grabs them from free_list's head. The purpose here
> > is to avoid touching these pages one by one inside zone->lock.
> >
> > One idea is to take the requested number of pages off free_list with
> > something like list_cut_position() and adjust nr_free of the free_area
> > accordingly inside zone->lock, while other operations, like clearing
> > the PageBuddy flag of these pages, are done outside of zone->lock.
> >
> > list_cut_position() needs to know where to cut, and that's what the
> > new 'struct cluster' is meant to provide. All pages on order 0's
> > free_list belong to a cluster, so when a number of pages is needed, the
> > cluster to which the head page of free_list belongs is checked, and
> > then the tail page of that cluster can be found. With the tail page,
> > list_cut_position() can be used to drop the whole cluster off
> > free_list. The 'struct cluster' also has 'nr' to tell how many pages
> > the cluster holds, so nr_free of the free_area can be adjusted inside
> > the lock too.
> >
> > This opens a race window though: from the moment zone->lock is dropped
> > till these pages' PageBuddy flags get cleared, these pages are not in
> > buddy but still have the PageBuddy flag set.
> >
> > This doesn't cause problems for users that access buddy pages through
> > free_list. But there are other users, like move_freepages(), which is
> > used to move a pageblock's pages from one migratetype to another in the
> > fallback allocation path, that will test the PageBuddy flag of a page
> > derived from a PFN. The end result could be that pages in the race
> > window are moved back to the free_list of another migratetype. For this
> > reason, a synchronization function zone_wait_cluster_alloc() is
> > introduced to wait till all pages are in the correct state. This
> > function is meant to be called with zone->lock held, so after it
> > returns, we do not need to worry about new pages entering the racy
> > state.
> >
> > Another user is compaction, which scans a pageblock for migratable
> > candidates. In this process, pages derived from PFNs will be checked
> > for the PageBuddy flag to decide if a page is a merge-skipped page.
> > To avoid a racy page getting merged back into buddy, the
> > zone_wait_and_disable_cluster_alloc() function is introduced to:
> > 1 disable clustered allocation by increasing zone->cluster.disable_depth;
> > 2 wait till the race window passes by calling zone_wait_cluster_alloc().
> > This function is also meant to be called with zone->lock held, so after
> > it returns, all pages are in the correct state and no more cluster
> > allocations will be attempted till zone_enable_cluster_alloc() is called
> > to decrease zone->cluster.disable_depth.
>
> I'm sorry, but I feel the added complexity here is simply too large to
> justify the change. Especially if the motivation seems to be just the
> microbenchmark. It would be better if this was motivated by a real
> workload where zone lock contention was identified as the main issue,
> and we would see the improvements on the workload. We could also e.g.
> find out that the problem can be avoided at a different level.
One thing I'm aware of is that there is an app that consumes a ton of
memory, and when it misbehaves or crashes, it takes some 10-20 minutes
for it to exit (munmap() takes a long time to free all that consumed
memory).
THP could help a lot, but it's beyond my understanding why they didn't
use it.
>
> Besides complexity, it may also add overhead to the non-contended case,
> i.e. the atomic operations on in_progress. This goes against recent page
> allocation optimizations by Mel Gorman etc.
>
> Would perhaps prefetching the next page in freelist (in
> remove_from_buddy()) help here?
I'm afraid there isn't a big enough window for prefetch() to actually
make a difference, but I could give it a try.
We also considered prefetching the free_list before taking the lock, but
that prefetch could be useless (i.e. the prefetched page could be taken
by another CPU and gone after we actually acquired the lock), and
iterating the list outside the lock can be dangerous.
> > The two patches could eliminate zone->lock contention entirely but at
> > the same time, pgdat->lru_lock contention rose to 82%. Final performance
> > increased about 8.3%.
> >
> > Suggested-by: Ying Huang <[email protected]>
> > Signed-off-by: Aaron Lu <[email protected]>
> > ---
> > Documentation/vm/struct_page_field | 5 +
> > include/linux/mm_types.h | 2 +
> > include/linux/mmzone.h | 35 +++++
> > mm/compaction.c | 4 +
> > mm/internal.h | 34 +++++
> > mm/page_alloc.c | 288 +++++++++++++++++++++++++++++++++++--
> > 6 files changed, 360 insertions(+), 8 deletions(-)
On 03/20/2018 04:54 AM, Aaron Lu wrote:
...snip...
> reduced zone->lock contention on free path from 35% to 1.1%. Also, it
> shows good result on parallel free(*) workload by reducing zone->lock
> contention from 90% to almost zero(lru lock increased from almost 0 to
> 90% though).
Hi Aaron, I'm looking through your series now. Just wanted to mention that I'm seeing the same interaction between zone->lock and lru_lock in my own testing. IOW, it's not enough to fix just one or the other: both need attention to get good performance on a big system, at least in this microbenchmark we've both been using.
There's anti-scaling at high core counts where overall system page faults per second actually decrease with more CPUs added to the test. This happens when either zone->lock or lru_lock contention are completely removed, but the anti-scaling goes away when both locks are fixed.
Anyway, I'll post some actual data on this stuff soon.
On Wed, Mar 21, 2018 at 01:44:25PM -0400, Daniel Jordan wrote:
> On 03/20/2018 04:54 AM, Aaron Lu wrote:
> ...snip...
> > reduced zone->lock contention on free path from 35% to 1.1%. Also, it
> > shows good result on parallel free(*) workload by reducing zone->lock
> > contention from 90% to almost zero(lru lock increased from almost 0 to
> > 90% though).
>
> Hi Aaron, I'm looking through your series now. Just wanted to mention that I'm seeing the same interaction between zone->lock and lru_lock in my own testing. IOW, it's not enough to fix just one or the other: both need attention to get good performance on a big system, at least in this microbenchmark we've both been using.
Agree.
>
> There's anti-scaling at high core counts where overall system page faults per second actually decrease with more CPUs added to the test. This happens when either zone->lock or lru_lock contention are completely removed, but the anti-scaling goes away when both locks are fixed.
>
> Anyway, I'll post some actual data on this stuff soon.
Looking forward to that, thanks.
In the meantime, I'll also try your lru_lock optimization work on top of
this patchset to see if the lock contention shifts back to zone->lock.
On 03/21/2018 09:30 PM, Aaron Lu wrote:
> On Wed, Mar 21, 2018 at 01:44:25PM -0400, Daniel Jordan wrote:
>> On 03/20/2018 04:54 AM, Aaron Lu wrote:
>> ...snip...
>>> reduced zone->lock contention on free path from 35% to 1.1%. Also, it
>>> shows good result on parallel free(*) workload by reducing zone->lock
>>> contention from 90% to almost zero(lru lock increased from almost 0 to
>>> 90% though).
>>
>> Hi Aaron, I'm looking through your series now. Just wanted to mention that I'm seeing the same interaction between zone->lock and lru_lock in my own testing. IOW, it's not enough to fix just one or the other: both need attention to get good performance on a big system, at least in this microbenchmark we've both been using.
>
> Agree.
>
>>
>> There's anti-scaling at high core counts where overall system page faults per second actually decrease with more CPUs added to the test. This happens when either zone->lock or lru_lock contention are completely removed, but the anti-scaling goes away when both locks are fixed.
>>
>> Anyway, I'll post some actual data on this stuff soon.
>
> Looking forward to that, thanks.
>
> In the meantime, I'll also try your lru_lock optimization work on top of
> this patchset to see if the lock contention shifts back to zone->lock.
The lru_lock series I posted is pretty outdated by now, and I've got a
totally new approach I plan to post soon, so it might make sense to wait
for that.
On Tue, Mar 20, 2018 at 10:11:01PM +0800, Aaron Lu wrote:
> > > A new document file called "struct_page_field" is added to explain
> > > the newly reused field in "struct page".
> >
> > Sounds rather ad-hoc for a single field, I'd rather document it via
> > comments.
>
> Dave would like to have a document to explain all those "struct page"
> fields that are repurposed under different scenarios and this is the
> very start of the document :-)
>
> I probably should have explained the intent of the document more.
Dave and I are in agreement on "Shouldn't struct page be better documented".
I came up with this a few weeks ago; never quite got round to turning it
into a patch:
+---+-----------+-----------+--------------+----------+--------+--------------+
| B | slab | pagecache | tail 1 | anon | tail 1 | hugetlb |
+===+===========+===========+==============+==========+========+==============+
| 0 | flags |
+---+ |
| 4 | |
+---+-----------+-----------+--------------+----------+--------+--------------+
| 8 | s_mem | mapping | cmp_mapcount | anon_vma | defer | mapping |
+---+ | +--------------+ | list | |
|12 | | | | | | |
+---+-----------+-----------+--------------+----------+ +--------------+
|16 | freelist | index | | index |
+---+ | | | (shifted) |
|20 | | | | |
+---+-----------+-------------------------------------+--------+--------------+
|24 | counters | mapcount |
+---+ +-----------+--------------+----------+--------+--------------+
|28 | | refcount | | | | refcount |
+---+-----------+-----------+--------------+----------+--------+--------------+
|32 | next | lru | cmpd_head | | lru |
+---+ | | +-------------------+ +
|36 | | | | | |
+---+-----------+ +--------------+-------------------+ +
|40 | pages | | dtor / order | | |
+---+-----------+ +--------------+-------------------+ +
|44 | pobjects | | | | |
+---+-----------+-----------+--------------+----------------------------------+
|48 | slb_cache | private | | |
+---+ | +--------------+----------------------------------+
|52 | | | | |
+---+-----------+-----------+--------------+----------------------------------+
On 03/22/2018 01:15 PM, Matthew Wilcox wrote:
> On Tue, Mar 20, 2018 at 10:11:01PM +0800, Aaron Lu wrote:
>>>> A new document file called "struct_page_field" is added to explain
>>>> the newly reused field in "struct page".
>>>
>>> Sounds rather ad-hoc for a single field, I'd rather document it via
>>> comments.
>>
>> Dave would like to have a document to explain all those "struct page"
>> fields that are repurposed under different scenarios and this is the
>> very start of the document :-)
>>
>> I probably should have explained the intent of the document more.
>
> Dave and I are in agreement on "Shouldn't struct page be better documented".
> I came up with this a few weeks ago; never quite got round to turning it
> into a patch:
>
> +---+-----------+-----------+--------------+----------+--------+--------------+
> | B | slab | pagecache | tail 1 | anon | tail 1 | hugetlb |
> +===+===========+===========+==============+==========+========+==============+
> | 0 | flags |
> +---+ |
> | 4 | |
> +---+-----------+-----------+--------------+----------+--------+--------------+
> | 8 | s_mem | mapping | cmp_mapcount | anon_vma | defer | mapping |
> +---+ | +--------------+ | list | |
> |12 | | | | | | |
> +---+-----------+-----------+--------------+----------+ +--------------+
> |16 | freelist | index | | index |
> +---+ | | | (shifted) |
> |20 | | | | |
> +---+-----------+-------------------------------------+--------+--------------+
> |24 | counters | mapcount |
> +---+ +-----------+--------------+----------+--------+--------------+
> |28 | | refcount | | | | refcount |
> +---+-----------+-----------+--------------+----------+--------+--------------+
> |32 | next | lru | cmpd_head | | lru |
> +---+ | | +-------------------+ +
> |36 | | | | | |
> +---+-----------+ +--------------+-------------------+ +
> |40 | pages | | dtor / order | | |
> +---+-----------+ +--------------+-------------------+ +
> |44 | pobjects | | | | |
> +---+-----------+-----------+--------------+----------------------------------+
> |48 | slb_cache | private | | |
> +---+ | +--------------+----------------------------------+
> |52 | | | | |
> +---+-----------+-----------+--------------+----------------------------------+
Shouldn't the anon column also contain lru?
On Thu, Mar 22, 2018 at 02:39:17PM -0400, Daniel Jordan wrote:
> Shouldn't the anon column also contain lru?
Probably; I didn't finish investigating everything. There should probably
also be another column for swap pages. Maybe some other users use a
significant amount of the struct page ... ?
On 03/21/2018 11:01 AM, Aaron Lu wrote:
>> I'm sorry, but I feel the added complexity here is simply too large to
>> justify the change. Especially if the motivation seems to be just the
>> microbenchmark. It would be better if this was motivated by a real
>> workload where zone lock contention was identified as the main issue,
>> and we would see the improvements on the workload. We could also e.g.
>> find out that the problem can be avoided at a different level.
>
> One thing I'm aware of is that there is an app that consumes a ton of
> memory, and when it misbehaves or crashes, it takes some 10-20 minutes
> for it to exit (munmap() takes a long time to free all that consumed
> memory).
>
> THP could help a lot, but it's beyond my understanding why they didn't
> use it.
One of our apps has the same issue with taking a long time to exit. The
time is in the kernel's munmap/exit path.
Also, Vlastimil, to your point about real workloads, I've seen
zone->lock and lru_lock heavily contended in a decision support
benchmark. Setting the pcp list sizes artificially high with
percpu_pagelist_fraction didn't make it go any faster, but given that
Aaron and I have seen the contention shift to lru_lock in this case, I'm
curious what will happen to the benchmark when both locks are no longer
contended. Will report back once this experiment is done.
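For reference, the percpu_pagelist_fraction knob mentioned above can be exercised like this (a hypothetical experiment sketch; the resulting high/batch values depend on zone size and kernel version):

```shell
# Hypothetical sketch of the pcp-list sizing experiment.  With
# percpu_pagelist_fraction set, each per-cpu pagelist's high watermark
# becomes managed_pages / fraction; 8 is the smallest accepted fraction,
# i.e. the largest lists.  Applying it needs root:
#   sysctl -w vm.percpu_pagelist_fraction=8
# Inspect the per-cpu pageset high/batch values currently in effect:
awk '/pagesets|high:|batch:/' /proc/zoneinfo | head -12
```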
On 03/20/2018 04:54 AM, Aaron Lu wrote:
> This series is meant to improve zone->lock scalability for order 0 pages.
> With will-it-scale/page_fault1 workload, on a 2 sockets Intel Skylake
> server with 112 CPUs, CPU spend 80% of its time spinning on zone->lock.
> Perf profile shows the most time consuming part under zone->lock is the
> cache miss on "struct page", so here I'm trying to avoid those cache
> misses.
I ran page_fault1 comparing 4.16-rc5 to your recent work, these four
patches plus the three others from your github branch zone_lock_rfc_v2.
Out of curiosity I also threw in another 4.16-rc5 with the pcp batch
size adjusted so high (10922 pages) that we always stay in the pcp lists
and out of buddy completely. I used your patch[*] in this last kernel.
This was on a 2-socket, 20-core broadwell server.
There were some small regressions a bit outside the noise at low process
counts (2-5) but I'm not sure they're repeatable. Anyway, it does
improve the microbenchmark across the board.
[*] lkml.kernel.org/r/20170919072342.GB7263@intel.com
On Thu, Mar 29, 2018 at 03:19:46PM -0400, Daniel Jordan wrote:
> On 03/20/2018 04:54 AM, Aaron Lu wrote:
> > This series is meant to improve zone->lock scalability for order 0 pages.
> > With will-it-scale/page_fault1 workload, on a 2 sockets Intel Skylake
> > server with 112 CPUs, CPU spend 80% of its time spinning on zone->lock.
> > Perf profile shows the most time consuming part under zone->lock is the
> > cache miss on "struct page", so here I'm trying to avoid those cache
> > misses.
>
> I ran page_fault1 comparing 4.16-rc5 to your recent work, these four patches
> plus the three others from your github branch zone_lock_rfc_v2. Out of
> curiosity I also threw in another 4.16-rc5 with the pcp batch size adjusted
> so high (10922 pages) that we always stay in the pcp lists and out of buddy
> completely. I used your patch[*] in this last kernel.
>
> This was on a 2-socket, 20-core broadwell server.
>
> There were some small regressions a bit outside the noise at low process
> counts (2-5) but I'm not sure they're repeatable. Anyway, it does improve
> the microbenchmark across the board.
Thanks for the result.
The limited improvement is expected since the lock contention only
shifts and is not entirely gone. So what is interesting to see is how it
performs with
v4.16-rc5 + my_zone_lock_patchset + your_lru_lock_patchset
On 03/29/2018 09:42 PM, Aaron Lu wrote:
> On Thu, Mar 29, 2018 at 03:19:46PM -0400, Daniel Jordan wrote:
>> On 03/20/2018 04:54 AM, Aaron Lu wrote:
>>> This series is meant to improve zone->lock scalability for order 0 pages.
>>> With will-it-scale/page_fault1 workload, on a 2 sockets Intel Skylake
>>> server with 112 CPUs, CPU spend 80% of its time spinning on zone->lock.
>>> Perf profile shows the most time consuming part under zone->lock is the
>>> cache miss on "struct page", so here I'm trying to avoid those cache
>>> misses.
>>
>> I ran page_fault1 comparing 4.16-rc5 to your recent work, these four patches
>> plus the three others from your github branch zone_lock_rfc_v2. Out of
>> curiosity I also threw in another 4.16-rc5 with the pcp batch size adjusted
>> so high (10922 pages) that we always stay in the pcp lists and out of buddy
>> completely. I used your patch[*] in this last kernel.
>>
>> This was on a 2-socket, 20-core broadwell server.
>>
>> There were some small regressions a bit outside the noise at low process
>> counts (2-5) but I'm not sure they're repeatable. Anyway, it does improve
>> the microbenchmark across the board.
>
> Thanks for the result.
>
> The limited improvement is expected since the lock contention only
> shifts and is not entirely gone. So what is interesting to see is how it
> performs with
> v4.16-rc5 + my_zone_lock_patchset + your_lru_lock_patchset
Yep, that's 'coming soon.'