2020-10-05 12:17:22

by David Hildenbrand

Subject: [PATCH v2 0/5] mm: place pages to the freelist tail when onlining and undoing isolation

When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: if the last memory block cannot get
offlined+removed, neither can any of the memory blocks it depends on. We
directly have unmovable allocations all over the place.

This can be observed quite easily using virtio-mem, however, it can also
be observed when using DIMMs. The freshly onlined pages will usually be
placed to the head of the freelists, meaning they will be allocated next,
usually turning the just-added memory immediately un-removable. The
fresh pages are cold; preferring to allocate other pages (that might be
hot) also feels like the natural thing to do.

The same applies to the Hyper-V balloon, the Xen balloon, and ppc64
dlpar: when adding separate, successive memory blocks, each memory block
will have unmovable allocations on it - for example, gigantic pages will
fail to allocate.

While ZONE_NORMAL doesn't provide any guarantees that memory can get
offlined+removed again (any kind of fragmentation with unmovable
allocations is possible), there are many scenarios (hotplugging a lot of
memory, running a workload, hotunplugging some memory/as much as possible)
where we can offline+remove quite a lot with this patchset.

a) To visualize the problem, a very simple example:

Start a VM with 4GB of boot memory and 8GB of virtio-mem memory:

[root@localhost ~]# lsmem
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x00000000bfffffff 3G online yes 0-23
0x0000000100000000-0x000000033fffffff 9G online yes 32-103

Memory block size: 128M
Total online memory: 12G
Total offline memory: 0B

Then try to unplug as much as possible using virtio-mem. Observe which
memory blocks are still around. Without this patch set:

[root@localhost ~]# lsmem
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x00000000bfffffff 3G online yes 0-23
0x0000000100000000-0x000000013fffffff 1G online yes 32-39
0x0000000148000000-0x000000014fffffff 128M online yes 41
0x0000000158000000-0x000000015fffffff 128M online yes 43
0x0000000168000000-0x000000016fffffff 128M online yes 45
0x0000000178000000-0x000000017fffffff 128M online yes 47
0x0000000188000000-0x0000000197ffffff 256M online yes 49-50
0x00000001a0000000-0x00000001a7ffffff 128M online yes 52
0x00000001b0000000-0x00000001b7ffffff 128M online yes 54
0x00000001c0000000-0x00000001c7ffffff 128M online yes 56
0x00000001d0000000-0x00000001d7ffffff 128M online yes 58
0x00000001e0000000-0x00000001e7ffffff 128M online yes 60
0x00000001f0000000-0x00000001f7ffffff 128M online yes 62
0x0000000200000000-0x0000000207ffffff 128M online yes 64
0x0000000210000000-0x0000000217ffffff 128M online yes 66
0x0000000220000000-0x0000000227ffffff 128M online yes 68
0x0000000230000000-0x0000000237ffffff 128M online yes 70
0x0000000240000000-0x0000000247ffffff 128M online yes 72
0x0000000250000000-0x0000000257ffffff 128M online yes 74
0x0000000260000000-0x0000000267ffffff 128M online yes 76
0x0000000270000000-0x0000000277ffffff 128M online yes 78
0x0000000280000000-0x0000000287ffffff 128M online yes 80
0x0000000290000000-0x0000000297ffffff 128M online yes 82
0x00000002a0000000-0x00000002a7ffffff 128M online yes 84
0x00000002b0000000-0x00000002b7ffffff 128M online yes 86
0x00000002c0000000-0x00000002c7ffffff 128M online yes 88
0x00000002d0000000-0x00000002d7ffffff 128M online yes 90
0x00000002e0000000-0x00000002e7ffffff 128M online yes 92
0x00000002f0000000-0x00000002f7ffffff 128M online yes 94
0x0000000300000000-0x0000000307ffffff 128M online yes 96
0x0000000310000000-0x0000000317ffffff 128M online yes 98
0x0000000320000000-0x0000000327ffffff 128M online yes 100
0x0000000330000000-0x000000033fffffff 256M online yes 102-103

Memory block size: 128M
Total online memory: 8.1G
Total offline memory: 0B

With this patch set:

[root@localhost ~]# lsmem
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x00000000bfffffff 3G online yes 0-23
0x0000000100000000-0x000000013fffffff 1G online yes 32-39

Memory block size: 128M
Total online memory: 4G
Total offline memory: 0B

All memory can get unplugged and all memory blocks can get removed. Of
course, no workload ran and the system was basically idle, but it
highlights the issue - the fairly deterministic chain of unmovable
allocations. When a huge page for the 2MB memmap is needed, a
just-onlined 4MB page will be split. The remaining 2MB page will be used
for the memmap of the next memory block. So one memory block will hold
the memmap of the two following memory blocks. Finally, the pages of the
last-onlined memory block will get used for the next bigger allocations -
if any allocation is unmovable, all dependent memory blocks cannot get
unplugged and removed until that allocation is gone.

Note that with bigger memory blocks (e.g., 256MB), *all* memory
blocks are dependent and none can get unplugged again!
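
To make the memmap arithmetic above concrete, here is a minimal
userspace sketch. It assumes 4 KiB base pages and a 64-byte struct page
(typical for x86-64); both values are assumptions, not taken from this
thread:

#include <stdio.h>

int main(void)
{
	const unsigned long block_size  = 128UL << 20;	/* 128 MiB memory block */
	const unsigned long page_size   = 4096;		/* assumed 4 KiB base pages */
	const unsigned long struct_page = 64;		/* assumed sizeof(struct page) */

	/* 128 MiB / 4 KiB = 32768 pages; 32768 * 64 B = 2 MiB of memmap */
	printf("memmap per 128 MiB block: %lu MiB\n",
	       block_size / page_size * struct_page >> 20);
	return 0;
}

This is where the 2MB memmap (and thus the split of a just-onlined 4MB
page) in the explanation above comes from.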

b) Experiment with a memory-intensive workload

I performed an experiment with an older version of this patch set
(before we used undo_isolate_page_range() in online_pages()):
Hotplug 56GB to a VM with an initial 4GB, onlining all memory to
ZONE_NORMAL right from the kernel when adding it. I then ran various
memory-intensive workloads that consume most system memory for a total of
45 minutes. Once finished, I tried to unplug as much memory as possible.

With this change, I was able to remove, via virtio-mem (adding individual
128MB memory blocks), 413 out of 448 added memory blocks. Via individual
(256MB) DIMMs, 380 out of 448 added memory blocks. (I don't have any
numbers without this patchset, but looking at the above example, it's at
most half of the 448 memory blocks for virtio-mem, and most probably none
for DIMMs.)

Again, there are workloads that might behave very differently due to the
nature of ZONE_NORMAL.

This change also affects (besides memory onlining):
- Other users of undo_isolate_page_range(): Pages are always placed to the
tail.
-- When memory offlining fails
-- When memory isolation fails after having isolated some pageblocks
-- When alloc_contig_range() either succeeds or fails
- Other users of __putback_isolated_page(): Pages are always placed to the
tail.
-- Free page reporting
- Other users of __free_pages_core()
-- AFAIK, any memory that is getting exposed to the buddy during boot.
IIUC, we will now usually allocate memory from lower addresses within
a zone first (especially during boot).
- Other users of generic_online_page()
-- Hyper-V balloon

v1 -> v2:
- Avoid changing indentation/alignment of function parameters
- Minor spelling fixes
- "mm/page_alloc: convert "report" flag of __free_one_page() to a proper
flag"
-- fop_t -> fpi_t
-- Clarify/extend documentation of FPI_SKIP_REPORT_NOTIFY
- "mm/page_alloc: move pages to tail in move_to_free_list()"
-- Perform change for all move_to_free_list()/move_freepages_block() users
to simplify.
-- Adjust subject/description accordingly.
- "mm/page_alloc: place pages to tail in __free_pages_core()"
-- s/init_single_page/__init_single_page/

RFC -> v1:
- Tweak some patch descriptions
- "mm/page_alloc: place pages to tail in __putback_isolated_page()"
-- FOP_TO_TAIL now has higher precedence than page shuffling
-- Add a note that nothing should rely on FOP_TO_TAIL for correctness
- "mm/page_alloc: always move pages to the tail of the freelist in
unset_migratetype_isolate()"
-- Use "bool" parameter for move_freepages_block() as requested
- "mm/page_alloc: place pages to tail in __free_pages_core()"
-- Eliminate set_page_refcounted() + page_ref_dec() and add a comment
- "mm/memory_hotplug: update comment regarding zone shuffling"
-- Added

David Hildenbrand (5):
mm/page_alloc: convert "report" flag of __free_one_page() to a proper
flag
mm/page_alloc: place pages to tail in __putback_isolated_page()
mm/page_alloc: move pages to tail in move_to_free_list()
mm/page_alloc: place pages to tail in __free_pages_core()
mm/memory_hotplug: update comment regarding zone shuffling

mm/memory_hotplug.c | 11 +++---
mm/page_alloc.c | 84 +++++++++++++++++++++++++++++++++++----------
mm/page_isolation.c | 5 +++
3 files changed, 75 insertions(+), 25 deletions(-)

--
2.26.2


2020-10-05 12:17:26

by David Hildenbrand

Subject: [PATCH v2 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag

Let's prepare for additional flags and avoid long parameter lists of bools.
Follow-up patches will also make use of the flags in __free_pages_ok().
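
As a hedged userspace illustration of the flag pattern this introduces
(BIT() and the sparse __bitwise annotation are kernel-isms; plain macros
stand in below, and the demo function is ours, not the kernel's):

#include <stdio.h>

typedef int fpi_t;

#define FPI_NONE		((fpi_t)0)
#define FPI_SKIP_REPORT_NOTIFY	((fpi_t)(1 << 0))

static void free_one(int order, fpi_t fpi_flags)
{
	/* A named flag documents intent at the call site; a bare
	 * "true"/"false" parameter does not, and doesn't scale. */
	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
		printf("notify free page reporting, order %d\n", order);
}

int main(void)
{
	free_one(0, FPI_NONE);			/* notifies */
	free_one(9, FPI_SKIP_REPORT_NOTIFY);	/* notification skipped */
	return 0;
}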

Reviewed-by: Alexander Duyck <[email protected]>
Reviewed-by: Vlastimil Babka <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Reviewed-by: Pankaj Gupta <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Mike Rapoport <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
mm/page_alloc.c | 27 ++++++++++++++++++++++-----
1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7012d67a302d..2bf235b1953f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -78,6 +78,22 @@
#include "shuffle.h"
#include "page_reporting.h"

+/* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fpi_t;
+
+/* No special request */
+#define FPI_NONE ((__force fpi_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page.
+ * This does not hinder free page reporting from grabbing the page,
+ * reporting it and marking it "reported" - it only skips notifying
+ * the free page reporting infrastructure about a newly freed page. For
+ * example, used when temporarily pulling a page from a freelist and
+ * putting it back unmodified.
+ */
+#define FPI_SKIP_REPORT_NOTIFY ((__force fpi_t)BIT(0))
+
/* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_FRACTION (8)
@@ -952,7 +968,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
static inline void __free_one_page(struct page *page,
unsigned long pfn,
struct zone *zone, unsigned int order,
- int migratetype, bool report)
+ int migratetype, fpi_t fpi_flags)
{
struct capture_control *capc = task_capc(zone);
unsigned long buddy_pfn;
@@ -1039,7 +1055,7 @@ static inline void __free_one_page(struct page *page,
add_to_free_list(page, zone, order, migratetype);

/* Notify page reporting subsystem of freed page */
- if (report)
+ if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
page_reporting_notify_free(order);
}

@@ -1380,7 +1396,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
if (unlikely(isolated_pageblocks))
mt = get_pageblock_migratetype(page);

- __free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+ __free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
trace_mm_page_pcpu_drain(page, 0, mt);
}
spin_unlock(&zone->lock);
@@ -1396,7 +1412,7 @@ static void free_one_page(struct zone *zone,
is_migrate_isolate(migratetype))) {
migratetype = get_pfnblock_migratetype(page, pfn);
}
- __free_one_page(page, pfn, zone, order, migratetype, true);
+ __free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
spin_unlock(&zone->lock);
}

@@ -3289,7 +3305,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
lockdep_assert_held(&zone->lock);

/* Return isolated page to tail of freelist. */
- __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+ __free_one_page(page, page_to_pfn(page), zone, order, mt,
+ FPI_SKIP_REPORT_NOTIFY);
}

/*
--
2.26.2

2020-10-05 12:18:31

by David Hildenbrand

Subject: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()

Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:
1. Page isolation doesn't actually touch the pages, it simply isolates
pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
When undoing isolation, we move the pages back to the target list.
2. Page stealing (steal_suitable_fallback()) moves free pages directly
between lists without touching them.
3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
free pages directly between freelists without touching them.

We already place pages to the tail of the freelists when undoing isolation
via __putback_isolated_page(), let's do it in any case (e.g., if order <=
pageblock_order) and document the behavior. To simplify, let's move the
pages to the tail for all move_to_free_list()/move_freepages_block() users.

In 2., the target list is empty, so there should be no change. In 3.,
we might observe a change, however, highatomic is more concerned about
allocations succeeding than cache hotness - if we ever realize this
change degrades a workload, we can special-case this instance and add a
proper comment.

This change results in all pages onlined via online_pages() being placed
to the tail of the freelist.
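
For reference, the whole patch hinges on swapping one list primitive.
Below is a self-contained sketch mirroring the (simplified, poison-free)
semantics of list_move()/list_move_tail() from include/linux/list.h:

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void __list_add(struct list_head *new, struct list_head *prev,
		       struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

static void __list_del_entry(struct list_head *entry)
{
	entry->next->prev = entry->prev;
	entry->prev->next = entry->next;
}

/* Unlink and re-insert right after the list head. */
static void list_move(struct list_head *list, struct list_head *head)
{
	__list_del_entry(list);
	__list_add(list, head, head->next);
}

/* Unlink and re-insert right before the list head, i.e., at the tail. */
static void list_move_tail(struct list_head *list, struct list_head *head)
{
	__list_del_entry(list);
	__list_add(list, head->prev, head);
}

int main(void)
{
	struct list_head freelist = { &freelist, &freelist };
	struct list_head page = { &page, &page };

	__list_add(&page, &freelist, freelist.next);	/* insert at head */
	list_move_tail(&page, &freelist);	/* the patch: move to tail */
	printf("page at tail: %d\n", freelist.prev == &page);
	list_move(&page, &freelist);		/* old behavior: back to head */
	printf("page at head: %d\n", freelist.next == &page);
	return 0;
}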

Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Pankaj Gupta <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Scott Cheloha <[email protected]>
Cc: Michael Ellerman <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
mm/page_alloc.c | 10 +++++++---
mm/page_isolation.c | 5 +++++
2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df5ff0cd6df1..b187e46cf640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
area->nr_free++;
}

-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
static inline void move_to_free_list(struct page *page, struct zone *zone,
unsigned int order, int migratetype)
{
struct free_area *area = &zone->free_area[order];

- list_move(&page->lru, &area->free_list[migratetype]);
+ list_move_tail(&page->lru, &area->free_list[migratetype]);
}

static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
#endif

/*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
* Note that start_page and end_pages are not aligned on a pageblock
* boundary. If alignment is required, use move_freepages_block()
*/
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..83692b937784 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
* If we isolate freepage with more than pageblock_order, there
* should be no freepage in the range, so we could avoid costly
* pageblock scanning for freepage moving.
+ *
+ * We didn't actually touch any of the isolated pages, so place them
+ * to the tail of the freelist. This is an optimization for memory
+ * onlining - just onlined memory won't immediately be considered for
+ * allocation.
*/
if (!isolated_page) {
nr_pages = move_freepages_block(zone, page, migratetype, NULL);
--
2.26.2

2020-10-05 12:18:54

by David Hildenbrand

Subject: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()

__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (Hyper-V
balloon and virtio-mem), when parts of a section are kept
fake-offline to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In the case of
virtio-mem, where we currently always online MAX_ORDER - 1 pages, the
pages will be placed to the HEAD of the freelist - undesirable. While the
Hyper-V balloon calls generic_online_page() with single pages, usually it
will call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelist and avoid
the PCP. In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization and
the refcount.

Note: In 2, we currently don't shuffle. If this ever becomes relevant
(page shuffling is usually of limited use in virtualized environments),
we might want to shuffle after a sequence of generic_online_page() calls
in the relevant callers.
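
For scale: assuming x86-64 defaults (MAX_ORDER == 11 and 4 KiB base
pages - the thread doesn't state an architecture), "MAX_ORDER - 1 pages"
means order-10 chunks, i.e., 4 MiB per generic_online_page() call:

#include <stdio.h>

int main(void)
{
	const int max_order = 11;		/* assumed x86-64 default */
	const unsigned long page_size = 4096;	/* assumed 4 KiB base pages */

	/* An order-(MAX_ORDER - 1) page covers 2^10 base pages. */
	printf("%lu MiB per generic_online_page() call\n",
	       (page_size << (max_order - 1)) >> 20);
	return 0;
}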

Reviewed-by: Vlastimil Babka <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: Pankaj Gupta <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: "K. Y. Srinivasan" <[email protected]>
Cc: Haiyang Zhang <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Wei Liu <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
mm/page_alloc.c | 33 +++++++++++++++++++++++----------
1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b187e46cf640..3dadcc6d4009 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -275,7 +275,8 @@ bool pm_suspended_storage(void)
unsigned int pageblock_order __read_mostly;
#endif

-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+ fpi_t fpi_flags);

/*
* results with 256, 32 in the lowmem_reserve sysctl:
@@ -687,7 +688,7 @@ static void bad_page(struct page *page, const char *reason)
void free_compound_page(struct page *page)
{
mem_cgroup_uncharge(page);
- __free_pages_ok(page, compound_order(page));
+ __free_pages_ok(page, compound_order(page), FPI_NONE);
}

void prep_compound_page(struct page *page, unsigned int order)
@@ -1423,14 +1424,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
static void free_one_page(struct zone *zone,
struct page *page, unsigned long pfn,
unsigned int order,
- int migratetype)
+ int migratetype, fpi_t fpi_flags)
{
spin_lock(&zone->lock);
if (unlikely(has_isolate_pageblock(zone) ||
is_migrate_isolate(migratetype))) {
migratetype = get_pfnblock_migratetype(page, pfn);
}
- __free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
+ __free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
spin_unlock(&zone->lock);
}

@@ -1508,7 +1509,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
}
}

-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+ fpi_t fpi_flags)
{
unsigned long flags;
int migratetype;
@@ -1520,7 +1522,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
migratetype = get_pfnblock_migratetype(page, pfn);
local_irq_save(flags);
__count_vm_events(PGFREE, 1 << order);
- free_one_page(page_zone(page), page, pfn, order, migratetype);
+ free_one_page(page_zone(page), page, pfn, order, migratetype,
+ fpi_flags);
local_irq_restore(flags);
}

@@ -1530,6 +1533,11 @@ void __free_pages_core(struct page *page, unsigned int order)
struct page *p = page;
unsigned int loop;

+ /*
+ * When initializing the memmap, __init_single_page() sets the refcount
+ * of all pages to 1 ("allocated"/"not free"). We have to set the
+ * refcount of all involved pages to 0.
+ */
prefetchw(p);
for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
prefetchw(p + 1);
@@ -1540,8 +1548,12 @@ void __free_pages_core(struct page *page, unsigned int order)
set_page_count(p, 0);

atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
- set_page_refcounted(page);
- __free_pages(page, order);
+
+ /*
+ * Bypass PCP and place fresh pages right to the tail, primarily
+ * relevant for memory onlining.
+ */
+ __free_pages_ok(page, order, FPI_TO_TAIL);
}

#ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3168,7 +3180,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
*/
if (migratetype >= MIGRATE_PCPTYPES) {
if (unlikely(is_migrate_isolate(migratetype))) {
- free_one_page(zone, page, pfn, 0, migratetype);
+ free_one_page(zone, page, pfn, 0, migratetype,
+ FPI_NONE);
return;
}
migratetype = MIGRATE_MOVABLE;
@@ -4991,7 +5004,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
if (order == 0) /* Via pcp? */
free_unref_page(page);
else
- __free_pages_ok(page, order);
+ __free_pages_ok(page, order, FPI_NONE);
}

void __free_pages(struct page *page, unsigned int order)
--
2.26.2

2020-10-05 12:18:56

by David Hildenbrand

Subject: [PATCH v2 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - an order range
that should cover all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately, so exactly the two users that don't actually read/write page
content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range()
in online_pages(). Right now, we always place them to the head of the
freelist, resulting in undesirable behavior: Assume we add
individual memory chunks via add_memory() and online them right away to
the NORMAL zone. We create a dependency chain of unmovable allocations
e.g., via the memmap. The memmap of the next chunk will be placed onto
previous chunks - if the last block cannot get offlined+removed, all
dependent ones cannot get offlined+removed. While this can already be
observed with individual DIMMs, it's more of an issue for virtio-mem
(and I suspect also ppc DLPAR).

Document that this should only be used for optimizations, and no code
should rely on this behavior for correctness (if the order of the
freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. Free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of
shuffling. If this ever becomes relevant, we should shuffle the whole
zone when undoing isolation of larger ranges, and after
free_contig_range().
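
A toy model of why tail placement matters, under the simplifying
assumption that allocation always pops the head of the freelist (as the
buddy's list walk does); all names below are ours, not the kernel's:

#include <stdio.h>

struct tpage {
	int pfn;
	struct tpage *prev, *next;
};

static struct tpage freelist = { -1, &freelist, &freelist };

static void add_head(struct tpage *p)
{
	p->next = freelist.next;
	p->prev = &freelist;
	freelist.next->prev = p;
	freelist.next = p;
}

static void add_tail(struct tpage *p)
{
	p->prev = freelist.prev;
	p->next = &freelist;
	freelist.prev->next = p;
	freelist.prev = p;
}

/* The allocator pops the head, so head-placed pages are reused first. */
static struct tpage *alloc_one(void)
{
	struct tpage *p = freelist.next;

	if (p == &freelist)
		return NULL;
	p->prev->next = p->next;
	p->next->prev = p->prev;
	return p;
}

int main(void)
{
	struct tpage hot = { 1, NULL, NULL }, fresh = { 2, NULL, NULL };

	add_head(&hot);		/* long-free, possibly cache-hot page */
	add_tail(&fresh);	/* FPI_TO_TAIL: the fresh page is reused last */
	printf("allocated pfn %d first\n", alloc_one()->pfn);	/* -> 1 */
	return 0;
}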

Reviewed-by: Alexander Duyck <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Reviewed-by: Wei Yang <[email protected]>
Reviewed-by: Pankaj Gupta <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Scott Cheloha <[email protected]>
Cc: Michael Ellerman <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
mm/page_alloc.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2bf235b1953f..df5ff0cd6df1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -94,6 +94,18 @@ typedef int __bitwise fpi_t;
*/
#define FPI_SKIP_REPORT_NOTIFY ((__force fpi_t)BIT(0))

+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ * to allow for optimizations when handing back either fresh pages
+ * (memory onlining) or untouched pages (page isolation, free page
+ * reporting).
+ */
+#define FPI_TO_TAIL ((__force fpi_t)BIT(1))
+
/* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_FRACTION (8)
@@ -1044,7 +1056,9 @@ static inline void __free_one_page(struct page *page,
done_merging:
set_page_order(page, order);

- if (is_shuffle_order(order))
+ if (fpi_flags & FPI_TO_TAIL)
+ to_tail = true;
+ else if (is_shuffle_order(order))
to_tail = shuffle_pick_tail();
else
to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3306,7 +3320,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)

/* Return isolated page to tail of freelist. */
__free_one_page(page, page_to_pfn(page), zone, order, mt,
- FPI_SKIP_REPORT_NOTIFY);
+ FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
}

/*
--
2.26.2

2020-10-05 12:20:40

by David Hildenbrand

Subject: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling

As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment.

We now effectively shuffle only once (properly) when onlining new
memory.

Reviewed-by: Wei Yang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Mike Rapoport <[email protected]>
Cc: Pankaj Gupta <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
mm/memory_hotplug.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 03a00cb68bf7..b44d4c7ba73b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);

/*
- * When exposing larger, physically contiguous memory areas to the
- * buddy, shuffling in the buddy (when freeing onlined pages, putting
- * them either to the head or the tail of the freelist) is only helpful
- * for maintaining the shuffle, but not for creating the initial
- * shuffle. Shuffle the whole zone to make sure the just onlined pages
- * are properly distributed across the whole freelist. Make sure to
- * shuffle once pageblocks are no longer isolated.
+ * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+ * the tail of the freelist when undoing isolation). Shuffle the whole
+ * zone to make sure the just onlined pages are properly distributed
+ * across the whole freelist - to create an initial shuffle.
*/
shuffle_zone(zone);

--
2.26.2

2020-10-06 12:14:21

by Michal Hocko

Subject: Re: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()

On Mon 05-10-20 14:15:32, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
> between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
> free pages directly between freelists without touching them.
>
> We already place pages to the tail of the freelists when undoing isolation
> via __putback_isolated_page(), let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
>
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change, however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
>
> This change results in all pages onlined via online_pages() being placed
> to the tail of the freelist.
>
> Reviewed-by: Oscar Salvador <[email protected]>
> Acked-by: Pankaj Gupta <[email protected]>
> Reviewed-by: Wei Yang <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Wei Yang <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Scott Cheloha <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>

Much simpler!
Acked-by: Michal Hocko <[email protected]>

Thanks!

> ---
> mm/page_alloc.c | 10 +++++++---
> mm/page_isolation.c | 5 +++++
> 2 files changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df5ff0cd6df1..b187e46cf640 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
> area->nr_free++;
> }
>
> -/* Used for pages which are on another list */
> +/*
> + * Used for pages which are on another list. Move the pages to the tail
> + * of the list - so the moved pages won't immediately be considered for
> + * allocation again (e.g., optimization for memory onlining).
> + */
> static inline void move_to_free_list(struct page *page, struct zone *zone,
> unsigned int order, int migratetype)
> {
> struct free_area *area = &zone->free_area[order];
>
> - list_move(&page->lru, &area->free_list[migratetype]);
> + list_move_tail(&page->lru, &area->free_list[migratetype]);
> }
>
> static inline void del_page_from_free_list(struct page *page, struct zone *zone,
> @@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
> #endif
>
> /*
> - * Move the free pages in a range to the free lists of the requested type.
> + * Move the free pages in a range to the freelist tail of the requested type.
> * Note that start_page and end_pages are not aligned on a pageblock
> * boundary. If alignment is required, use move_freepages_block()
> */
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..83692b937784 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
> * If we isolate freepage with more than pageblock_order, there
> * should be no freepage in the range, so we could avoid costly
> * pageblock scanning for freepage moving.
> + *
> + * We didn't actually touch any of the isolated pages, so place them
> + * to the tail of the freelist. This is an optimization for memory
> + * onlining - just onlined memory won't immediately be considered for
> + * allocation.
> */
> if (!isolated_page) {
> nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> --
> 2.26.2

--
Michal Hocko
SUSE Labs

2020-10-21 06:55:40

by Vlastimil Babka

Subject: Re: [PATCH v2 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - an order range
> that should cover all existing users.
>
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
>
> This behavior is desirable for pages that haven't really been touched
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
>
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> freelist, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
>
> Document that this should only be used for optimizations, and no code
> should rely on this behavior for correctness (if the order of the
> freelists ever changes).
>
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. Free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this ever becomes relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
>
> Reviewed-by: Alexander Duyck <[email protected]>
> Reviewed-by: Oscar Salvador <[email protected]>
> Reviewed-by: Wei Yang <[email protected]>
> Reviewed-by: Pankaj Gupta <[email protected]>
> Acked-by: Michal Hocko <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

2020-10-21 16:05:58

by Vlastimil Babka

Subject: Re: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
> between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
> free pages directly between freelists without touching them.
>
> We already place pages to the tail of the freelists when undoing isolation
> via __putback_isolated_page(), let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
>
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change, however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
>
> This change results in all pages onlined via online_pages() being placed
> to the tail of the freelist.
>
> Reviewed-by: Oscar Salvador <[email protected]>
> Acked-by: Pankaj Gupta <[email protected]>
> Reviewed-by: Wei Yang <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

2020-10-21 16:14:18

by Vlastimil Babka

Subject: Re: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
>
> We now effectively shuffle only once (properly) when onlining new
> memory.
>
> Reviewed-by: Wei Yang <[email protected]>
> Acked-by: Michal Hocko <[email protected]>

Acked-by: Vlastimil Babka <[email protected]>

> Cc: Andrew Morton <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Wei Yang <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Pankaj Gupta <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
> mm/memory_hotplug.c | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 03a00cb68bf7..b44d4c7ba73b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
> undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>
> /*
> - * When exposing larger, physically contiguous memory areas to the
> - * buddy, shuffling in the buddy (when freeing onlined pages, putting
> - * them either to the head or the tail of the freelist) is only helpful
> - * for maintaining the shuffle, but not for creating the initial
> - * shuffle. Shuffle the whole zone to make sure the just onlined pages
> - * are properly distributed across the whole freelist. Make sure to
> - * shuffle once pageblocks are no longer isolated.
> + * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> + * the tail of the freelist when undoing isolation). Shuffle the whole
> + * zone to make sure the just onlined pages are properly distributed
> + * across the whole freelist - to create an initial shuffle.
> */
> shuffle_zone(zone);
>
>

2020-10-21 19:21:23

by Pankaj Gupta

Subject: Re: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling

> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
>
> We now effectively shuffle only once (properly) when onlining new
> memory.
>
> Reviewed-by: Wei Yang <[email protected]>
> Acked-by: Michal Hocko <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Wei Yang <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Pankaj Gupta <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
> mm/memory_hotplug.c | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 03a00cb68bf7..b44d4c7ba73b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
> undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>
> /*
> - * When exposing larger, physically contiguous memory areas to the
> - * buddy, shuffling in the buddy (when freeing onlined pages, putting
> - * them either to the head or the tail of the freelist) is only helpful
> - * for maintaining the shuffle, but not for creating the initial
> - * shuffle. Shuffle the whole zone to make sure the just onlined pages
> - * are properly distributed across the whole freelist. Make sure to
> - * shuffle once pageblocks are no longer isolated.
> + * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> + * the tail of the freelist when undoing isolation). Shuffle the whole
> + * zone to make sure the just onlined pages are properly distributed
> + * across the whole freelist - to create an initial shuffle.
> */
> shuffle_zone(zone);
>

Acked-by: Pankaj Gupta <[email protected]>

2021-09-07 22:44:12

by Sean Anderson

Subject: Re: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()

Hi David,

This patch breaks booting on my custom Xilinx ZynqMP board. Booting
fails just after/during GIC initialization:

[ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[ 0.000000] Linux version 5.14.0 (sean@plantagenet) (aarch64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #251 SMP Tue Sep 7 18:11:50 EDT 2021
[ 0.000000] Machine model: xlnx,zynqmp
[ 0.000000] earlycon: cdns0 at MMIO 0x00000000ff010000 (options '115200n8')
[ 0.000000] printk: bootconsole [cdns0] enabled
[ 0.000000] efi: UEFI not found.
[ 0.000000] Zone ranges:
[ 0.000000] DMA32 [mem 0x0000000000000000-0x00000000ffffffff]
[ 0.000000] Normal [mem 0x0000000100000000-0x000000087fffffff]
[ 0.000000] Movable zone start for each node
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000000000000-0x000000007fefffff]
[ 0.000000] node 0: [mem 0x0000000800000000-0x000000087fffffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000087fffffff]
[ 0.000000] On node 0, zone Normal: 256 pages in unavailable ranges
[ 0.000000] cma: Reserved 1000 MiB at 0x0000000041400000
[ 0.000000] psci: probing for conduit method from DT.
[ 0.000000] psci: PSCIv1.1 detected in firmware.
[ 0.000000] psci: Using standard PSCI v0.2 function IDs
[ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
[ 0.000000] psci: SMC Calling Convention v1.1
[ 0.000000] percpu: Embedded 19 pages/cpu s46752 r0 d31072 u77824
[ 0.000000] Detected VIPT I-cache on CPU0
[ 0.000000] CPU features: detected: ARM erratum 845719
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 1033987
[ 0.000000] Kernel command line: earlycon clk_ignore_unused root=/dev/mmcblk0p2 rootwait rw cma=1000M
[ 0.000000] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[ 0.000000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[ 0.000000] software IO TLB: mapped [mem 0x000000003d400000-0x0000000041400000] (64MB)
[ 0.000000] Memory: 3023384K/4193280K available (4288K kernel code, 514K rwdata, 1200K rodata, 896K init, 187K bss, 145896K reserved, 1024000K cma-reserved)
[ 0.000000] rcu: Hierarchical RCU implementation.
[ 0.000000] rcu: RCU event tracing is enabled.
[ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[ 0.000000] GIC: Adjusting CPU interface base to 0x00000000f902f000
[ 0.000000] Root IRQ handler: gic_handle_irq
[ 0.000000] GIC: Using split EOI/Deactivate mode

and I bisected it to this patch. Applying the following patch (for 5.14)
fixes booting again:

---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eeb3a9cb36bb..d4317392cadb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1688,7 +1688,7 @@ void __free_pages_core(struct page *page, unsigned int order)
* Bypass PCP and place fresh pages right to the tail, primarily
* relevant for memory onlining.
*/
- __free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
+ __free_pages_ok(page, order, FPI_NONE);
}

#ifdef CONFIG_NUMA
--
2.25.1

I have attached my config; note that it lacks any storage drivers, to
make bisecting easier.

--Sean


Attachments:
.config (82.59 kB)

2021-09-08 07:47:39

by David Hildenbrand

Subject: Re: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()

On 08.09.21 00:40, Sean Anderson wrote:
> Hi David,
>
> This patch breaks booting on my custom Xilinx ZynqMP board. Booting
> fails just after/during GIC initialization:
>
> [ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> [ 0.000000] Linux version 5.14.0 (sean@plantagenet) (aarch64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #251 SMP Tue Sep 7 18:11:50 EDT 2021
> [ 0.000000] Machine model: xlnx,zynqmp
> [ 0.000000] earlycon: cdns0 at MMIO 0x00000000ff010000 (options '115200n8')
> [ 0.000000] printk: bootconsole [cdns0] enabled
> [ 0.000000] efi: UEFI not found.
> [ 0.000000] Zone ranges:
> [ 0.000000] DMA32 [mem 0x0000000000000000-0x00000000ffffffff]
> [ 0.000000] Normal [mem 0x0000000100000000-0x000000087fffffff]
> [ 0.000000] Movable zone start for each node
> [ 0.000000] Early memory node ranges
> [ 0.000000] node 0: [mem 0x0000000000000000-0x000000007fefffff]
> [ 0.000000] node 0: [mem 0x0000000800000000-0x000000087fffffff]
> [ 0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000087fffffff]
> [ 0.000000] On node 0, zone Normal: 256 pages in unavailable ranges
> [ 0.000000] cma: Reserved 1000 MiB at 0x0000000041400000
> [ 0.000000] psci: probing for conduit method from DT.
> [ 0.000000] psci: PSCIv1.1 detected in firmware.
> [ 0.000000] psci: Using standard PSCI v0.2 function IDs
> [ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
> [ 0.000000] psci: SMC Calling Convention v1.1
> [ 0.000000] percpu: Embedded 19 pages/cpu s46752 r0 d31072 u77824
> [ 0.000000] Detected VIPT I-cache on CPU0
> [ 0.000000] CPU features: detected: ARM erratum 845719
> [ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 1033987
> [ 0.000000] Kernel command line: earlycon clk_ignore_unused root=/dev/mmcblk0p2 rootwait rw cma=1000M
> [ 0.000000] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
> [ 0.000000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> [ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
> [ 0.000000] software IO TLB: mapped [mem 0x000000003d400000-0x0000000041400000] (64MB)
> [ 0.000000] Memory: 3023384K/4193280K available (4288K kernel code, 514K rwdata, 1200K rodata, 896K init, 187K bss, 145896K reserved, 1024000K cma-reserved)
> [ 0.000000] rcu: Hierarchical RCU implementation.
> [ 0.000000] rcu: RCU event tracing is enabled.
> [ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> [ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [ 0.000000] GIC: Adjusting CPU interface base to 0x00000000f902f000
> [ 0.000000] Root IRQ handler: gic_handle_irq
> [ 0.000000] GIC: Using split EOI/Deactivate mode
>
> and I bisected it to this patch. Applying the following patch (for 5.14)
> fixes booting again:

Hi Sean,

unfortunately that patch most likely (with 99.9999% confidence) revealed
another latent BUG in your setup.

Some memory that shouldn't be handed to the buddy as free memory is now
getting allocated earlier rather than later, resulting in that issue.


I have had all kinds of different reports, but they were mostly:

a) Firmware bugs that result in uncached memory getting exposed to the
buddy, resulting in severe performance degradation such that the system
will no longer boot. [3]

I wrote kstream [1] to be run under the old kernel, to identify these.

b) BUGs that result in unsuitable memory getting exposed to either the
buddy or devices, resulting in errors during device initialization. [6]

c) Use after free BUGs.

Exposing memory, such as used for ACPI tables, to the buddy as free
memory although it's still in use. [4]

d) Hypervisor BUGs

The last report (heavy performance degradation) was due to a BUG in
dpdk. [2]


What are the exact symptoms you're experiencing? Really slow boot/stall?
Then it could be a) and kstream might help.


[1] https://github.com/davidhildenbrand/kstream
[2]
https://lore.kernel.org/dpdk-dev/[email protected]/T/#u
[3]
https://lore.kernel.org/r/MW3PR12MB4537C3C6EFD9CA3A4B32084DF36B9@MW3PR12MB4537.namprd12.prod.outlook.com
[4] https://lkml.kernel.org/r/4650320.31r3eYUQgx@kreacher
[5] https://lkml.kernel.org/r/[email protected]
[6]
https://lore.kernel.org/r/[email protected]


--
Thanks,

David / dhildenb

2021-09-10 23:10:42

by Sean Anderson

Subject: Re: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()



On 9/8/21 2:21 AM, David Hildenbrand wrote:
> On 08.09.21 00:40, Sean Anderson wrote:
>> Hi David,
>>
>> This patch breaks booting on my custom Xilinx ZynqMP board. Booting
>> fails just after/during GIC initialization:
>>
>> [ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>> [ 0.000000] Linux version 5.14.0 (sean@plantagenet) (aarch64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #251 SMP Tue Sep 7 18:11:50 EDT 2021
>> [ 0.000000] Machine model: xlnx,zynqmp
>> [ 0.000000] earlycon: cdns0 at MMIO 0x00000000ff010000 (options '115200n8')
>> [ 0.000000] printk: bootconsole [cdns0] enabled
>> [ 0.000000] efi: UEFI not found.
>> [ 0.000000] Zone ranges:
>> [ 0.000000] DMA32 [mem 0x0000000000000000-0x00000000ffffffff]
>> [ 0.000000] Normal [mem 0x0000000100000000-0x000000087fffffff]
>> [ 0.000000] Movable zone start for each node
>> [ 0.000000] Early memory node ranges
>> [ 0.000000] node 0: [mem 0x0000000000000000-0x000000007fefffff]
>> [ 0.000000] node 0: [mem 0x0000000800000000-0x000000087fffffff]
>> [ 0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000087fffffff]
>> [ 0.000000] On node 0, zone Normal: 256 pages in unavailable ranges
>> [ 0.000000] cma: Reserved 1000 MiB at 0x0000000041400000
>> [ 0.000000] psci: probing for conduit method from DT.
>> [ 0.000000] psci: PSCIv1.1 detected in firmware.
>> [ 0.000000] psci: Using standard PSCI v0.2 function IDs
>> [ 0.000000] psci: MIGRATE_INFO_TYPE not supported.
>> [ 0.000000] psci: SMC Calling Convention v1.1
>> [ 0.000000] percpu: Embedded 19 pages/cpu s46752 r0 d31072 u77824
>> [ 0.000000] Detected VIPT I-cache on CPU0
>> [ 0.000000] CPU features: detected: ARM erratum 845719
>> [ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 1033987
>> [ 0.000000] Kernel command line: earlycon clk_ignore_unused root=/dev/mmcblk0p2 rootwait rw cma=1000M
>> [ 0.000000] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
>> [ 0.000000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>> [ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
>> [ 0.000000] software IO TLB: mapped [mem 0x000000003d400000-0x0000000041400000] (64MB)
>> [ 0.000000] Memory: 3023384K/4193280K available (4288K kernel code, 514K rwdata, 1200K rodata, 896K init, 187K bss, 145896K reserved, 1024000K cma-reserved)
>> [ 0.000000] rcu: Hierarchical RCU implementation.
>> [ 0.000000] rcu: RCU event tracing is enabled.
>> [ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>> [ 0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>> [ 0.000000] GIC: Adjusting CPU interface base to 0x00000000f902f000
>> [ 0.000000] Root IRQ handler: gic_handle_irq
>> [ 0.000000] GIC: Using split EOI/Deactivate mode
>>
>> and I bisected it to this patch. Applying the following patch (for 5.14)
>> fixes booting again:
>
> Hi Sean,
>
> unfortunately that patch most likely (with 99.9999% confidence) revealed another latent BUG in your setup.

I suspected as much; however even after inspecting this patch I was
unsure what I should investigate further.

>
> Some memory that shouldn't be handed to the buddy as free memory is now getting allocated earlier rather than later, resulting in that issue.
>
>
> I have had all kinds of different reports, but they were mostly:
>
> a) Firmware bugs that result in uncached memory getting exposed to the buddy, resulting in severe performance degradation such that the system will no longer boot. [3]
>
> I wrote kstream [1] to be run under the old kernel, to identify these.
>
> b) BUGs that result in unsuitable memory getting exposed to either the buddy or devices, resulting in errors during device initialization. [6]
>
> c) Use after free BUGs.
>
> Exposing memory, such as used for ACPI tables, to the buddy as free memory although it's still in use. [4]
>
> d) Hypervisor BUGs
>
> The last report (heavy performance degradation) was due to a BUG in dpdk. [2]
>
>
> What are the exact symptoms you're experiencing? Really slow boot/stall? Then it could be a) and kstream might help.

Well, as it turns out DDR chips of half the correct size were installed.
This caused the upper half of memory to alias to the lower half. As it
happened, due to some lucky circumstances this didn't initially cause
problems. Sorry for the noise.

--Sean