2021-07-21 06:43:54

by Huang, Ying

Subject: [PATCH -V11 1/9] mm/numa: automatically generate node migration order

From: Dave Hansen <[email protected]>

This patchset is generated via

- `git format-patch` the V10 patchset in the mmots tree of 2021-07-15

The changes are as follows,

- Revise the patch description of [1/9] based on the previous discussion

- Rename can_demote_anon_pages() to can_demote() to reflect the fact
that the function is for anon and file pages.

--

Patch series "Migrate Pages in lieu of discard", v11.

We're starting to see systems with more and more kinds of memory such as
Intel's implementation of persistent memory.

Let's say you have a system with some DRAM and some persistent memory.
Today, once DRAM fills up, reclaim will start and some of the DRAM
contents will be thrown out. Allocations will, at some point, start
falling over to the slower persistent memory.

That has two nasty properties. First, the newer allocations can end up in
the slower persistent memory. Second, reclaimed data in DRAM is just
discarded even if there are gobs of space in persistent memory that could
be used.

This patchset implements a solution to these problems. At the end of the
reclaim process in shrink_page_list() just before the last page refcount
is dropped, the page is migrated to persistent memory instead of being
dropped.

While I've talked about a DRAM/PMEM pairing, this approach would function
in any environment where memory tiers exist.

This is not perfect. It "strands" pages in slower memory and never brings
them back to fast DRAM. Huang Ying has follow-on work which repurposes
NUMA balancing to promote hot pages back to DRAM.

This is also all based on an upstream mechanism that allows persistent
memory to be onlined and used as if it were volatile:

http://lkml.kernel.org/r/[email protected]

With that, the DRAM and PMEM in each socket will be represented as 2
separate NUMA nodes, with the CPUs sitting in the DRAM node. The
general inter-NUMA demotion mechanism introduced in this patchset can
then migrate the cold DRAM pages to the PMEM node.

We have tested the patchset with postgresql and pgbench. On a 2-socket
server machine with DRAM and PMEM, the kernel with the patchset improves
the pgbench score by up to 22.1% compared with the DRAM-only + disk
case. This comes from the reduced disk read throughput (down by up to
70.8%).

== Open Issues ==

* Memory policies and cpusets may, for instance, restrict allocations
to DRAM, yet pages can still be demoted to PMEM whenever the system
opts in to this new mechanism. A cgroup-level API to opt in to or
opt out of these migrations will likely be required as a follow-on.
* Could be more aggressive about where anon LRU scanning occurs
since it no longer necessarily involves I/O. get_scan_count()
for instance says: "If we have no swap space, do not bother
scanning anon pages"


This patch (of 9):

Prepare for the kernel to auto-migrate pages to other memory nodes with a
node migration table. This allows creating a single migration target for
each NUMA node to enable the kernel to do NUMA page migrations instead of
simply discarding colder pages. A node with no target is a "terminal
node", so reclaim acts normally there. The migration target does not
fundamentally _need_ to be a single node, but this implementation starts
there to limit complexity.

When memory fills up on a node, memory contents can be automatically
migrated to another node. The biggest problems are knowing when to
migrate and to where the migration should be targeted.

The most straightforward way to generate the "to where" list would be to
follow the page allocator fallback lists. Those lists already tell us,
if memory is full, where to look next. It would also be logical to move
memory in that order.

But, the allocator fallback lists have a fatal flaw: most nodes appear in
all the lists. This would potentially lead to migration cycles (A->B,
B->A, A->B, ...).
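
For example, on a two-node system, node 0's fallback list is {1} and
node 1's fallback list is {0}. Following the fallback lists directly
would make node 0 demote to node 1 and node 1 demote back to node 0, so
a page could bounce between the two nodes forever instead of ever being
reclaimed.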

Instead of using the allocator fallback lists directly, keep a separate
node migration ordering. But, reuse the same data used to generate page
allocator fallback in the first place: find_next_best_node().

This means that the firmware data used to populate node distances
essentially dictates the ordering for now. It should also be
architecture-neutral since all NUMA architectures have a working
find_next_best_node().

RCU is used to allow lock-less reads of node_demotion[] and to prevent
demotion cycles from being observed. If multiple reads of node_demotion[] are
performed, a single rcu_read_lock() must be held over all reads to ensure
no cycles are observed. Details are as follows.

=== What does RCU provide? ===

Imagine a simple loop which walks down the demotion path looking
for the last node:

	terminal_node = start_node;
	while (node_demotion[terminal_node] != NUMA_NO_NODE) {
		terminal_node = node_demotion[terminal_node];
	}

The initial values are:

	node_demotion[0] = 1;
	node_demotion[1] = NUMA_NO_NODE;

and are updated to:

	node_demotion[0] = NUMA_NO_NODE;
	node_demotion[1] = 0;

What guarantees that the cycle is not observed:

	node_demotion[0] = 1;
	node_demotion[1] = 0;

and would loop forever?

With RCU, an rcu_read_lock()/rcu_read_unlock() pair can be placed around
the loop. Since
the write side does a synchronize_rcu(), the loop that observed the old
contents is known to be complete before the synchronize_rcu() has
completed.

RCU, combined with disable_all_migrate_targets(), ensures that the old
migration state is not visible by the time __set_migration_target_nodes()
is called.

=== What does READ_ONCE() provide? ===

READ_ONCE() forbids the compiler from merging or reordering successive
reads of node_demotion[]. This ensures that any updates are *eventually*
observed.

Consider the above loop again. The compiler could theoretically read the
entirety of node_demotion[] into local storage (registers) and never go
back to memory, and *permanently* observe bad values for node_demotion[].

Note: RCU does not provide any universal compiler-ordering
guarantees:

https://lore.kernel.org/lkml/[email protected]/
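
Putting the two together, a minimal read-side sketch (illustration
only, not part of this patch; last_demotion_node() is a hypothetical
helper) looks like:

	/*
	 * Hold rcu_read_lock() across the whole walk so that either
	 * the old or the new node_demotion[] contents are observed,
	 * never a mix of the two that could contain a cycle.
	 */
	static int last_demotion_node(int start_node)
	{
		int node = start_node, next;

		rcu_read_lock();
		while ((next = READ_ONCE(node_demotion[node])) != NUMA_NO_NODE)
			node = next;
		rcu_read_unlock();

		return node;
	}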

This code is unused for now. It will be called later in the
series.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/internal.h | 5 ++
mm/migrate.c | 216 ++++++++++++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 2 +-
3 files changed, 222 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 57e28261a3b1..cf3cb933eba3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -543,12 +543,17 @@ static inline void mminit_validate_memmodel_limits(unsigned long *start_pfn,

#ifdef CONFIG_NUMA
extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int);
+extern int find_next_best_node(int node, nodemask_t *used_node_mask);
#else
static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
unsigned int order)
{
return NODE_RECLAIM_NOSCAN;
}
+static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
+{
+ return NUMA_NO_NODE;
+}
#endif

extern int hwpoison_filter(struct page *p);
diff --git a/mm/migrate.c b/mm/migrate.c
index 34a9ad3e0a4f..b7a40ab47648 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1099,6 +1099,80 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
return rc;
}

+
+/*
+ * node_demotion[] example:
+ *
+ * Consider a system with two sockets. Each socket has
+ * three classes of memory attached: fast, medium and slow.
+ * Each memory class is placed in its own NUMA node. The
+ * CPUs are placed in the node with the "fast" memory. The
+ * 6 NUMA nodes (0-5) might be split among the sockets like
+ * this:
+ *
+ * Socket A: 0, 1, 2
+ * Socket B: 3, 4, 5
+ *
+ * When Node 0 fills up, its memory should be migrated to
+ * Node 1. When Node 1 fills up, it should be migrated to
+ * Node 2. The migration path starts on the nodes with the
+ * processors (since allocations default to this node) and
+ * fast memory, progresses through medium and ends with the
+ * slow memory:
+ *
+ * 0 -> 1 -> 2 -> stop
+ * 3 -> 4 -> 5 -> stop
+ *
+ * This is represented in the node_demotion[] like this:
+ *
+ * { 1, // Node 0 migrates to 1
+ * 2, // Node 1 migrates to 2
+ * -1, // Node 2 does not migrate
+ * 4, // Node 3 migrates to 4
+ * 5, // Node 4 migrates to 5
+ * -1} // Node 5 does not migrate
+ */
+
+/*
+ * Writes to this array occur without locking. Cycles are
+ * not allowed: Node X demotes to Y which demotes to X...
+ *
+ * If multiple reads are performed, a single rcu_read_lock()
+ * must be held over all reads to ensure that no cycles are
+ * observed.
+ */
+static int node_demotion[MAX_NUMNODES] __read_mostly =
+ {[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};
+
+/**
+ * next_demotion_node() - Get the next node in the demotion path
+ * @node: The starting node to lookup the next node
+ *
+ * @returns: node id for next memory node in the demotion path hierarchy
+ * from @node; NUMA_NO_NODE if @node is terminal. This does not keep
+ * @node online or guarantee that it *continues* to be the next demotion
+ * target.
+ */
+int next_demotion_node(int node)
+{
+ int target;
+
+ /*
+ * node_demotion[] is updated without excluding this
+ * function from running. RCU doesn't provide any
+ * compiler barriers, so the READ_ONCE() is required
+ * to avoid compiler reordering or read merging.
+ *
+ * Make sure to use RCU over entire code blocks if
+ * node_demotion[] reads need to be consistent.
+ */
+ rcu_read_lock();
+ target = READ_ONCE(node_demotion[node]);
+ rcu_read_unlock();
+
+ return target;
+}
+
/*
* Obtain the lock on page, remove all ptes and migrate the page
* to the newly allocated page in newpage.
@@ -2982,3 +3056,145 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
}
EXPORT_SYMBOL(migrate_vma_finalize);
#endif /* CONFIG_DEVICE_PRIVATE */
+
+/* Disable reclaim-based migration. */
+static void __disable_all_migrate_targets(void)
+{
+ int node;
+
+ for_each_online_node(node)
+ node_demotion[node] = NUMA_NO_NODE;
+}
+
+static void disable_all_migrate_targets(void)
+{
+ __disable_all_migrate_targets();
+
+ /*
+ * Ensure that the "disable" is visible across the system.
+ * Readers will see either a combination of before+disable
+ * state or disable+after. They will never see before and
+ * after state together.
+ *
+ * The before+after state together might have cycles and
+ * could cause readers to do things like loop until this
+ * function finishes. This ensures they can only see a
+ * single "bad" read and would, for instance, only loop
+ * once.
+ */
+ synchronize_rcu();
+}
+
+/*
+ * Find an automatic demotion target for 'node'.
+ * Failing here is OK. It might just indicate
+ * being at the end of a chain.
+ */
+static int establish_migrate_target(int node, nodemask_t *used)
+{
+ int migration_target;
+
+ /*
+ * Can not set a migration target on a
+ * node with it already set.
+ *
+ * No need for READ_ONCE() here since this
+ * is in the write path for node_demotion[].
+ * This should be the only thread writing.
+ */
+ if (node_demotion[node] != NUMA_NO_NODE)
+ return NUMA_NO_NODE;
+
+ migration_target = find_next_best_node(node, used);
+ if (migration_target == NUMA_NO_NODE)
+ return NUMA_NO_NODE;
+
+ node_demotion[node] = migration_target;
+
+ return migration_target;
+}
+
+/*
+ * When memory fills up on a node, memory contents can be
+ * automatically migrated to another node instead of
+ * discarded at reclaim.
+ *
+ * Establish a "migration path" which will start at nodes
+ * with CPUs and will follow the priorities used to build the
+ * page allocator zonelists.
+ *
+ * The difference here is that cycles must be avoided. If
+ * node0 migrates to node1, then neither node1, nor anything
+ * node1 migrates to can migrate to node0.
+ *
+ * This function can run simultaneously with readers of
+ * node_demotion[]. However, it can not run simultaneously
+ * with itself. Exclusion is provided by memory hotplug events
+ * being single-threaded.
+ */
+static void __set_migration_target_nodes(void)
+{
+ nodemask_t next_pass = NODE_MASK_NONE;
+ nodemask_t this_pass = NODE_MASK_NONE;
+ nodemask_t used_targets = NODE_MASK_NONE;
+ int node;
+
+ /*
+ * Avoid any oddities like cycles that could occur
+ * from changes in the topology. This will leave
+ * a momentary gap when migration is disabled.
+ */
+ disable_all_migrate_targets();
+
+ /*
+ * Allocations go close to CPUs, first. Assume that
+ * the migration path starts at the nodes with CPUs.
+ */
+ next_pass = node_states[N_CPU];
+again:
+ this_pass = next_pass;
+ next_pass = NODE_MASK_NONE;
+ /*
+ * To avoid cycles in the migration "graph", ensure
+ * that migration sources are not future targets by
+ * setting them in 'used_targets'. Do this only
+ * once per pass so that multiple source nodes can
+ * share a target node.
+ *
+ * 'used_targets' will become unavailable in future
+ * passes. This limits some opportunities for
+ * multiple source nodes to share a destination.
+ */
+ nodes_or(used_targets, used_targets, this_pass);
+ for_each_node_mask(node, this_pass) {
+ int target_node = establish_migrate_target(node, &used_targets);
+
+ if (target_node == NUMA_NO_NODE)
+ continue;
+
+ /*
+ * Visit targets from this pass in the next pass.
+ * Eventually, every node will have been part of
+ * a pass, and will become set in 'used_targets'.
+ */
+ node_set(target_node, next_pass);
+ }
+ /*
+ * 'next_pass' contains nodes which became migration
+ * targets in this pass. Make additional passes until
+ * no more migrations targets are available.
+ */
+ if (!nodes_empty(next_pass))
+ goto again;
+}
+
+/*
+ * For callers that do not hold get_online_mems() already.
+ */
+__maybe_unused // <- temporary to prevent warnings during bisects
+static void set_migration_target_nodes(void)
+{
+ get_online_mems();
+ __set_migration_target_nodes();
+ put_online_mems();
+}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29f41d095002..942417c78a8a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6139,7 +6139,7 @@ static int node_load[MAX_NUMNODES];
*
* Return: node id of the found node or %NUMA_NO_NODE if no node is found.
*/
-static int find_next_best_node(int node, nodemask_t *used_node_mask)
+int find_next_best_node(int node, nodemask_t *used_node_mask)
{
int n, val;
int min_val = INT_MAX;
--
2.30.2


2021-07-21 06:44:07

by Huang, Ying

Subject: [PATCH -V11 3/9] mm/migrate: enable returning precise migrate_pages() success count

From: Yang Shi <[email protected]>

Under normal circumstances, migrate_pages() returns the number of pages
migrated. In error conditions, it returns an error code. When returning
an error code, there is no way to know how many pages were migrated or not
migrated.

Make migrate_pages() return how many pages are demoted successfully for
all cases, including when encountering errors. Page reclaim behavior will
depend on this in subsequent patches.
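
As an illustrative sketch (not part of this patch; it mirrors the
mempolicy.c caller updated below and assumes a populated @pagelist and
struct migration_target_control @mtc), a caller that needs the precise
count passes a non-NULL pointer for the new argument:

	unsigned int nr_succeeded = 0;
	int err;

	/*
	 * Even when migrate_pages() returns an error code,
	 * nr_succeeded still reports how many pages actually moved.
	 */
	err = migrate_pages(&pagelist, alloc_migration_target, NULL,
			    (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
			    &nr_succeeded);

Callers that do not care about the count continue to pass NULL.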

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Suggested-by: Oscar Salvador <[email protected]> [optional parameter]
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
include/linux/migrate.h | 5 +++--
mm/compaction.c | 2 +-
mm/gup.c | 2 +-
mm/memory-failure.c | 2 +-
mm/memory_hotplug.c | 2 +-
mm/mempolicy.c | 4 ++--
mm/migrate.c | 11 ++++++++---
mm/page_alloc.c | 2 +-
8 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 23dadf7aeba8..8ab88d46318e 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -41,7 +41,8 @@ extern int migrate_page(struct address_space *mapping,
struct page *newpage, struct page *page,
enum migrate_mode mode);
extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
- unsigned long private, enum migrate_mode mode, int reason);
+ unsigned long private, enum migrate_mode mode, int reason,
+ unsigned int *ret_succeeded);
extern struct page *alloc_migration_target(struct page *page, unsigned long private);
extern int isolate_movable_page(struct page *page, isolate_mode_t mode);

@@ -56,7 +57,7 @@ extern int migrate_page_move_mapping(struct address_space *mapping,
static inline void putback_movable_pages(struct list_head *l) {}
static inline int migrate_pages(struct list_head *l, new_page_t new,
free_page_t free, unsigned long private, enum migrate_mode mode,
- int reason)
+ int reason, unsigned int *ret_succeeded)
{ return -ENOSYS; }
static inline struct page *alloc_migration_target(struct page *page,
unsigned long private)
diff --git a/mm/compaction.c b/mm/compaction.c
index ed37e1cb4369..79aaf21058da 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2392,7 +2392,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)

err = migrate_pages(&cc->migratepages, compaction_alloc,
compaction_free, (unsigned long)cc, cc->mode,
- MR_COMPACTION);
+ MR_COMPACTION, NULL);

trace_mm_compaction_migratepages(cc->nr_migratepages, err,
&cc->migratepages);
diff --git a/mm/gup.c b/mm/gup.c
index 42b8b1fa6521..c4441fc4cfba 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1772,7 +1772,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
if (!list_empty(&movable_page_list)) {
ret = migrate_pages(&movable_page_list, alloc_migration_target,
NULL, (unsigned long)&mtc, MIGRATE_SYNC,
- MR_LONGTERM_PIN);
+ MR_LONGTERM_PIN, NULL);
if (ret && !list_empty(&movable_page_list))
putback_movable_pages(&movable_page_list);
}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index eefd823deb67..3eed65e56f93 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2093,7 +2093,7 @@ static int __soft_offline_page(struct page *page)

if (isolate_page(hpage, &pagelist)) {
ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE);
+ (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE, NULL);
if (!ret) {
bool release = !huge;

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0bb73fd1035a..d45c69d78b83 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1462,7 +1462,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
if (nodes_empty(nmask))
node_set(mtc.nid, nmask);
ret = migrate_pages(&source, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+ (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG, NULL);
if (ret) {
list_for_each_entry(page, &source, lru) {
if (__ratelimit(&migrate_rs)) {
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e5ce5a7e8d92..f58c38ea1e83 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1084,7 +1084,7 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,

if (!list_empty(&pagelist)) {
err = migrate_pages(&pagelist, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+ (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL);
if (err)
putback_movable_pages(&pagelist);
}
@@ -1338,7 +1338,7 @@ static long do_mbind(unsigned long start, unsigned long len,
if (!list_empty(&pagelist)) {
WARN_ON_ONCE(flags & MPOL_MF_LAZY);
nr_failed = migrate_pages(&pagelist, new_page, NULL,
- start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
+ start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
if (nr_failed)
putback_movable_pages(&pagelist);
}
diff --git a/mm/migrate.c b/mm/migrate.c
index a40c391f9ca7..35d34ef837ed 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1429,6 +1429,8 @@ static inline int try_split_thp(struct page *page, struct page **page2,
* @mode: The migration mode that specifies the constraints for
* page migration, if any.
* @reason: The reason for page migration.
+ * @ret_succeeded: Set to the number of pages migrated successfully if
+ * the caller passes a non-NULL pointer.
*
* The function returns after 10 attempts or if no pages are movable any more
* because the list has become empty or no retryable pages exist any more.
@@ -1439,7 +1441,7 @@ static inline int try_split_thp(struct page *page, struct page **page2,
*/
int migrate_pages(struct list_head *from, new_page_t get_new_page,
free_page_t put_new_page, unsigned long private,
- enum migrate_mode mode, int reason)
+ enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
{
int retry = 1;
int thp_retry = 1;
@@ -1594,6 +1596,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
if (!swapwrite)
current->flags &= ~PF_SWAPWRITE;

+ if (ret_succeeded)
+ *ret_succeeded = nr_succeeded;
+
return rc;
}

@@ -1663,7 +1668,7 @@ static int do_move_pages_to_node(struct mm_struct *mm,
};

err = migrate_pages(pagelist, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+ (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL);
if (err)
putback_movable_pages(pagelist);
return err;
@@ -2178,7 +2183,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,

list_add(&page->lru, &migratepages);
nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
- MIGRATE_ASYNC, MR_NUMA_MISPLACED);
+ MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
if (nr_remaining) {
if (!list_empty(&migratepages)) {
list_del(&page->lru);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 942417c78a8a..62dc229c1dd1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8969,7 +8969,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
cc->nr_migratepages -= nr_reclaimed;

ret = migrate_pages(&cc->migratepages, alloc_migration_target,
- NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
+ NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE, NULL);

/*
* On -ENOMEM, migrate_pages() bails out right away. It is pointless
--
2.30.2

2021-07-21 06:44:38

by Huang, Ying

Subject: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

From: Dave Hansen <[email protected]>

Reclaim-based migration is attempting to optimize data placement in memory
based on the system topology. If the system changes, so must the
migration ordering.

The implementation is conceptually simple and entirely unoptimized. On
any memory or CPU hotplug events, assume that a node was added or removed
and recalculate all migration targets. This ensures that the
node_demotion[] array is always ready to be used in case the new reclaim
mode is enabled.

This recalculation is far from optimal, most glaringly in that it does
not even attempt to figure out whether the hotplug event would have some
*actual* effect on the demotion order. But, given the expected paucity
of hotplug events, this should be fine.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/migrate.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 89 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b7a40ab47648..a40c391f9ca7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -49,6 +49,7 @@
#include <linux/sched/mm.h>
#include <linux/ptrace.h>
#include <linux/oom.h>
+#include <linux/memory.h>

#include <asm/tlbflush.h>

@@ -3057,6 +3058,7 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
EXPORT_SYMBOL(migrate_vma_finalize);
#endif /* CONFIG_DEVICE_PRIVATE */

+#if defined(CONFIG_MEMORY_HOTPLUG)
/* Disable reclaim-based migration. */
static void __disable_all_migrate_targets(void)
{
@@ -3191,10 +3193,96 @@ static void __set_migration_target_nodes(void)
/*
* For callers that do not hold get_online_mems() already.
*/
-__maybe_unused // <- temporary to prevent warnings during bisects
static void set_migration_target_nodes(void)
{
get_online_mems();
__set_migration_target_nodes();
put_online_mems();
}
+
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs. That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
+/*
+ * This leaves migrate-on-reclaim transiently disabled between
+ * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs
+ * whether reclaim-based migration is enabled or not, which
+ * ensures that the user can turn reclaim-based migration on at
+ * any time without needing to recalculate migration targets.
+ *
+ * These callbacks already hold get_online_mems(). That is why
+ * __set_migration_target_nodes() can be used as opposed to
+ * set_migration_target_nodes().
+ */
+static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+ unsigned long action, void *arg)
+{
+ switch (action) {
+ case MEM_GOING_OFFLINE:
+ /*
+ * Make sure there are not transient states where
+ * an offline node is a migration target. This
+ * will leave migration disabled until the offline
+ * completes and the MEM_OFFLINE case below runs.
+ */
+ disable_all_migrate_targets();
+ break;
+ case MEM_OFFLINE:
+ case MEM_ONLINE:
+ /*
+ * Recalculate the target nodes once the node
+ * reaches its final state (online or offline).
+ */
+ __set_migration_target_nodes();
+ break;
+ case MEM_CANCEL_OFFLINE:
+ /*
+ * MEM_GOING_OFFLINE disabled all the migration
+ * targets. Reenable them.
+ */
+ __set_migration_target_nodes();
+ break;
+ case MEM_GOING_ONLINE:
+ case MEM_CANCEL_ONLINE:
+ break;
+ }
+
+ return notifier_from_errno(0);
+}
+
+static int __init migrate_on_reclaim_init(void)
+{
+ int ret;
+
+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
+ migration_online_cpu,
+ migration_offline_cpu);
+ /*
+ * In the unlikely case that this fails, the automatic
+ * migration targets may become suboptimal for nodes
+ * where N_CPU changes. With such a small impact in a
+ * rare case, do not bother trying to do anything special.
+ */
+ WARN_ON(ret < 0);
+
+ hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+ return 0;
+}
+late_initcall(migrate_on_reclaim_init);
+#endif /* CONFIG_MEMORY_HOTPLUG */
--
2.30.2

2021-07-21 06:45:06

by Huang, Ying

Subject: [PATCH -V11 8/9] mm/vmscan: never demote for memcg reclaim

From: Dave Hansen <[email protected]>

Global reclaim aims to reduce the amount of memory used on a given node or
set of nodes. Migrating pages to another node serves this purpose.

memcg reclaim is different. Its goal is to reduce the total memory
consumption of the entire memcg, across all nodes. Migration does not
assist memcg reclaim because it just moves page contents between nodes
rather than actually reducing memory consumption.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Suggested-by: Yang Shi <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/vmscan.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 67a320c6571d..60179903ed9e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -521,8 +521,13 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker,

static bool can_demote(int nid, struct scan_control *sc)
{
- if (sc && sc->no_demotion)
- return false;
+ if (sc) {
+ if (sc->no_demotion)
+ return false;
+ /* It is pointless to do demotion in memcg reclaim */
+ if (cgroup_reclaim(sc))
+ return false;
+ }
if (next_demotion_node(nid) == NUMA_NO_NODE)
return false;

--
2.30.2

2021-07-21 06:45:13

by Huang, Ying

Subject: [PATCH -V11 5/9] mm/vmscan: add page demotion counter

From: Yang Shi <[email protected]>

Account the number of demoted pages.

Add pgdemote_kswapd and pgdemote_direct VM counters shown in
/proc/vmstat.

[ daveh:
- adjusted the __count_vm_events() calls a bit, and made them look at the THP
size directly rather than getting data from migrate_pages()
]
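
After this patch, the new counters can be read from /proc/vmstat, e.g.
(hypothetical values):

	$ grep pgdemote /proc/vmstat
	pgdemote_kswapd 8012
	pgdemote_direct 1324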

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Wei Xu <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
include/linux/vm_event_item.h | 2 ++
mm/vmscan.c | 5 +++++
mm/vmstat.c | 2 ++
3 files changed, 9 insertions(+)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index ae0dd1948c2b..a185cc75ff52 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -33,6 +33,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
PGREUSE,
PGSTEAL_KSWAPD,
PGSTEAL_DIRECT,
+ PGDEMOTE_KSWAPD,
+ PGDEMOTE_DIRECT,
PGSCAN_KSWAPD,
PGSCAN_DIRECT,
PGSCAN_DIRECT_THROTTLE,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49d03b5e3c18..90fa026cfa29 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1322,6 +1322,11 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
target_nid, MIGRATE_ASYNC, MR_DEMOTION,
&nr_succeeded);

+ if (current_is_kswapd())
+ __count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
+ else
+ __count_vm_events(PGDEMOTE_DIRECT, nr_succeeded);
+
return nr_succeeded;
}

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 6246bab9fae2..13ff25d0d96a 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1217,6 +1217,8 @@ const char * const vmstat_text[] = {
"pgreuse",
"pgsteal_kswapd",
"pgsteal_direct",
+ "pgdemote_kswapd",
+ "pgdemote_direct",
"pgscan_kswapd",
"pgscan_direct",
"pgscan_direct_throttle",
--
2.30.2

2021-07-21 06:45:13

by Huang, Ying

Subject: [PATCH -V11 4/9] mm/migrate: demote pages during reclaim

From: Dave Hansen <[email protected]>

This is mostly derived from a patch from Yang Shi:

https://lore.kernel.org/linux-mm/[email protected]/

Add code to the reclaim path (shrink_page_list()) to "demote" data to
another NUMA node instead of discarding the data. This always avoids the
cost of I/O needed to read the page back in and sometimes avoids the
writeout cost when the page is dirty.

A second pass through shrink_page_list() will be made if any demotions
fail. This essentially falls back to normal reclaim behavior in the case
that demotions fail. Previous versions of this patch may have simply
failed to reclaim pages which were eligible for demotion but were unable
to be demoted in practice.

In some cases, for example MADV_PAGEOUT, the pages are always discarded
instead of demoted, to follow the kernel API definition: MADV_PAGEOUT is
defined as freeing the specified pages regardless of which tier they are
in.

Note: This just adds the start of infrastructure for migration. It is
actually disabled next to the FIXME in can_demote().

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Wei Xu <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
include/linux/migrate.h | 9 ++++
include/trace/events/migrate.h | 3 +-
mm/vmscan.c | 85 ++++++++++++++++++++++++++++++++++
3 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 8ab88d46318e..326250996b4e 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -28,6 +28,7 @@ enum migrate_reason {
MR_NUMA_MISPLACED,
MR_CONTIG_RANGE,
MR_LONGTERM_PIN,
+ MR_DEMOTION,
MR_TYPES
};

@@ -167,6 +168,14 @@ struct migrate_vma {
int migrate_vma_setup(struct migrate_vma *args);
void migrate_vma_pages(struct migrate_vma *migrate);
void migrate_vma_finalize(struct migrate_vma *migrate);
+int next_demotion_node(int node);
+
+#else /* CONFIG_MIGRATION disabled: */
+
+static inline int next_demotion_node(int node)
+{
+ return NUMA_NO_NODE;
+}

#endif /* CONFIG_MIGRATION */

diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 9fb2a3bbcdfb..779f3fad9ecd 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -21,7 +21,8 @@
EM( MR_MEMPOLICY_MBIND, "mempolicy_mbind") \
EM( MR_NUMA_MISPLACED, "numa_misplaced") \
EM( MR_CONTIG_RANGE, "contig_range") \
- EMe(MR_LONGTERM_PIN, "longterm_pin")
+ EM( MR_LONGTERM_PIN, "longterm_pin") \
+ EMe(MR_DEMOTION, "demotion")

/*
* First define the enums in the above macros to be exported to userspace
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e1d66c81e6f..49d03b5e3c18 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -41,6 +41,7 @@
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <linux/memcontrol.h>
+#include <linux/migrate.h>
#include <linux/delayacct.h>
#include <linux/sysctl.h>
#include <linux/oom.h>
@@ -118,6 +119,9 @@ struct scan_control {
/* The file pages on the current node are dangerously low */
unsigned int file_is_tiny:1;

+ /* Always discard instead of demoting to lower tier memory */
+ unsigned int no_demotion:1;
+
/* Allocation order */
s8 order;

@@ -515,6 +519,17 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker,
return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
}

+static bool can_demote(int nid, struct scan_control *sc)
+{
+ if (sc->no_demotion)
+ return false;
+ if (next_demotion_node(nid) == NUMA_NO_NODE)
+ return false;
+
+ // FIXME: actually enable this later in the series
+ return false;
+}
+
/*
* This misses isolated pages which are not accounted for to save counters.
* As the data only determines if reclaim or compaction continues, it is
@@ -1267,6 +1282,49 @@ static void page_check_dirty_writeback(struct page *page,
mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
}

+static struct page *alloc_demote_page(struct page *page, unsigned long node)
+{
+ struct migration_target_control mtc = {
+ /*
+ * Allocate from 'node', or fail quickly and quietly.
+ * When this happens, 'page' will likely just be discarded
+ * instead of migrated.
+ */
+ .gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
+ __GFP_THISNODE | __GFP_NOWARN |
+ __GFP_NOMEMALLOC | GFP_NOWAIT,
+ .nid = node
+ };
+
+ return alloc_migration_target(page, (unsigned long)&mtc);
+}
+
+/*
+ * Take pages on @demote_list and attempt to demote them to
+ * another node. Pages which are not demoted are left on
+ * @demote_pages.
+ */
+static unsigned int demote_page_list(struct list_head *demote_pages,
+ struct pglist_data *pgdat)
+{
+ int target_nid = next_demotion_node(pgdat->node_id);
+ unsigned int nr_succeeded;
+ int err;
+
+ if (list_empty(demote_pages))
+ return 0;
+
+ if (target_nid == NUMA_NO_NODE)
+ return 0;
+
+ /* Demotion ignores all cpuset and mempolicy settings */
+ err = migrate_pages(demote_pages, alloc_demote_page, NULL,
+ target_nid, MIGRATE_ASYNC, MR_DEMOTION,
+ &nr_succeeded);
+
+ return nr_succeeded;
+}
+
/*
* shrink_page_list() returns the number of reclaimed pages
*/
@@ -1278,12 +1336,16 @@ static unsigned int shrink_page_list(struct list_head *page_list,
{
LIST_HEAD(ret_pages);
LIST_HEAD(free_pages);
+ LIST_HEAD(demote_pages);
unsigned int nr_reclaimed = 0;
unsigned int pgactivate = 0;
+ bool do_demote_pass;

memset(stat, 0, sizeof(*stat));
cond_resched();
+ do_demote_pass = can_demote(pgdat->node_id, sc);

+retry:
while (!list_empty(page_list)) {
struct address_space *mapping;
struct page *page;
@@ -1432,6 +1494,17 @@ static unsigned int shrink_page_list(struct list_head *page_list,
; /* try to reclaim the page below */
}

+ /*
+ * Before reclaiming the page, try to relocate
+ * its contents to another node.
+ */
+ if (do_demote_pass &&
+ (thp_migration_supported() || !PageTransHuge(page))) {
+ list_add(&page->lru, &demote_pages);
+ unlock_page(page);
+ continue;
+ }
+
/*
* Anonymous process memory has backing store?
* Try to allocate it some swap space here.
@@ -1684,6 +1757,17 @@ static unsigned int shrink_page_list(struct list_head *page_list,
list_add(&page->lru, &ret_pages);
VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
}
+ /* 'page_list' is always empty here */
+
+ /* Migrate pages selected for demotion */
+ nr_reclaimed += demote_page_list(&demote_pages, pgdat);
+ /* Pages that could not be demoted are still in @demote_pages */
+ if (!list_empty(&demote_pages)) {
+ /* Pages which failed to be demoted go back on @page_list for retry: */
+ list_splice_init(&demote_pages, page_list);
+ do_demote_pass = false;
+ goto retry;
+ }

pgactivate = stat->nr_activate[0] + stat->nr_activate[1];

@@ -2329,6 +2413,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
.may_writepage = 1,
.may_unmap = 1,
.may_swap = 1,
+ .no_demotion = 1,
};

noreclaim_flag = memalloc_noreclaim_save();
--
2.30.2

2021-07-21 06:46:58

by Huang, Ying

Subject: [PATCH -V11 7/9] mm/vmscan: Consider anonymous pages without swap

From: Keith Busch <[email protected]>

Reclaim anonymous pages if a migration path is available now that demotion
provides a non-swap recourse for reclaiming anon pages.

Note that this check is subtly different from the can_age_anon_pages()
checks. This mechanism checks whether a specific page in a specific
context can actually be reclaimed, given current swap space and cgroup
limits.

can_age_anon_pages() is a much simpler and more preliminary check which
just says whether there is a possibility of future reclaim.

Link: https://lkml.kernel.org/r/[email protected]
Cc: Keith Busch <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/vmscan.c | 34 ++++++++++++++++++++++++++++++----
1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d79bf91700de..67a320c6571d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -521,7 +521,7 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker,

static bool can_demote(int nid, struct scan_control *sc)
{
- if (sc->no_demotion)
+ if (sc && sc->no_demotion)
return false;
if (next_demotion_node(nid) == NUMA_NO_NODE)
return false;
@@ -530,6 +530,31 @@ static bool can_demote(int nid, struct scan_control *sc)
return false;
}

+static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
+ int nid,
+ struct scan_control *sc)
+{
+ if (memcg == NULL) {
+ /*
+ * For non-memcg reclaim, is there
+ * space in any swap device?
+ */
+ if (get_nr_swap_pages() > 0)
+ return true;
+ } else {
+ /* Is the memcg below its swap limit? */
+ if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
+ return true;
+ }
+
+ /*
+ * The page can not be swapped.
+ *
+ * Can it be reclaimed from this node via demotion?
+ */
+ return can_demote(nid, sc);
+}
+
/*
* This misses isolated pages which are not accounted for to save counters.
* As the data only determines if reclaim or compaction continues, it is
@@ -541,7 +566,7 @@ unsigned long zone_reclaimable_pages(struct zone *zone)

nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
- if (get_nr_swap_pages() > 0)
+ if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);

@@ -2544,6 +2569,7 @@ enum scan_balance {
static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
unsigned long *nr)
{
+ struct pglist_data *pgdat = lruvec_pgdat(lruvec);
struct mem_cgroup *memcg = lruvec_memcg(lruvec);
unsigned long anon_cost, file_cost, total_cost;
int swappiness = mem_cgroup_swappiness(memcg);
@@ -2554,7 +2580,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
enum lru_list lru;

/* If we have no swap space, do not bother scanning anon pages. */
- if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
+ if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
scan_balance = SCAN_FILE;
goto out;
}
@@ -2924,7 +2950,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
*/
pages_for_compaction = compact_gap(sc->order);
inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
- if (get_nr_swap_pages() > 0)
+ if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);

return inactive_lru_pages > pages_for_compaction;
--
2.30.2

2021-07-21 06:49:35

by Huang, Ying

[permalink] [raw]
Subject: [PATCH -V11 6/9] mm/vmscan: add helper for querying ability to age anonymous pages

From: Dave Hansen <[email protected]>

Anonymous pages are kept on their own LRU(s). These lists could
theoretically always be scanned and maintained. But, without swap, there
is currently nothing the kernel can *do* with the results of a scanned,
sorted LRU for anonymous pages.

A check for '!total_swap_pages' currently serves as a valid check as to
whether anonymous LRUs should be maintained. However, another method will
be added shortly: page demotion.

Abstract out the 'total_swap_pages' checks into a helper, give it a
logically significant name, and check for the possibility of page
demotion.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: "Huang, Ying" <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Reviewed-by: Greg Thelen <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
mm/vmscan.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 90fa026cfa29..d79bf91700de 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2729,6 +2729,21 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
}
}

+/*
+ * Anonymous LRU management is a waste if there is
+ * ultimately no way to reclaim the memory.
+ */
+static bool can_age_anon_pages(struct pglist_data *pgdat,
+ struct scan_control *sc)
+{
+ /* Aging the anon LRU is valuable if swap is present: */
+ if (total_swap_pages > 0)
+ return true;
+
+ /* Also valuable if anon pages can be demoted: */
+ return can_demote(pgdat->node_id, sc);
+}
+
static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
{
unsigned long nr[NR_LRU_LISTS];
@@ -2838,7 +2853,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
* Even if we did not try to evict anon pages at all, we want to
* rebalance the anon lru active/inactive ratio.
*/
- if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+ if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+ inactive_is_low(lruvec, LRU_INACTIVE_ANON))
shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
sc, LRU_ACTIVE_ANON);
}
@@ -3669,7 +3685,7 @@ static void age_active_anon(struct pglist_data *pgdat,
struct mem_cgroup *memcg;
struct lruvec *lruvec;

- if (!total_swap_pages)
+ if (!can_age_anon_pages(pgdat, sc))
return;

lruvec = mem_cgroup_lruvec(NULL, pgdat);
--
2.30.2

2021-07-21 06:50:45

by Huang, Ying

Subject: [PATCH -V11 9/9] mm/migrate: add sysfs interface to enable reclaim migration

Some method is obviously needed to enable reclaim-based migration.

Just like traditional autonuma, there will be some workloads that
benefit, like workloads with more "static" configurations where hot pages
stay hot and cold pages stay cold. If pages come and go from the hot and
cold sets, the benefits of this approach will be more limited.

The benefits are truly workload-based and *not* hardware-based. We do not
believe that there is a viable threshold where certain hardware
configurations should have this mechanism enabled while others do not.

To be conservative, earlier work defaulted to disabling reclaim-based
migration and did not include a mechanism to enable it. This patch
proposes adding a new sysfs file

/sys/kernel/mm/numa/demotion_enabled

as a method to enable it.
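
The file accepts "true"/"1" to enable demotion and "false"/"0" to
disable it (see numa_demotion_enabled_store() below); for example:

	echo true > /sys/kernel/mm/numa/demotion_enabled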

We are open to any alternative that allows end users to enable this
mechanism or disable it if workload harm is detected (just like
traditional autonuma).

Once this is enabled, page demotion may move data to a NUMA node that does
not fall into the cpuset of the allocating process. This could be
construed to violate the guarantees of cpusets. However, since this is an
opt-in mechanism, the assumption is that anyone enabling it is content to
relax the guarantees.

Originally-by: Dave Hansen <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Huang Ying <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
.../ABI/testing/sysfs-kernel-mm-numa | 24 ++++++++
include/linux/mempolicy.h | 4 ++
mm/mempolicy.c | 61 +++++++++++++++++++
mm/vmscan.c | 5 +-
4 files changed, 92 insertions(+), 2 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-kernel-mm-numa

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-numa b/Documentation/ABI/testing/sysfs-kernel-mm-numa
new file mode 100644
index 000000000000..77e559d4ed80
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-numa
@@ -0,0 +1,24 @@
+What: /sys/kernel/mm/numa/
+Date: June 2021
+Contact: Linux memory management mailing list <[email protected]>
+Description: Interface for NUMA
+
+What: /sys/kernel/mm/numa/demotion_enabled
+Date: June 2021
+Contact: Linux memory management mailing list <[email protected]>
+Description: Enable/disable demoting pages during reclaim
+
+ Page migration during reclaim is intended for systems
+ with tiered memory configurations. These systems have
+ multiple types of memory with varied performance
+ characteristics instead of plain NUMA systems where
+ the same kind of memory is found at varied distances.
+ Allowing page migration during reclaim enables these
+ systems to migrate pages from fast tiers to slow tiers
+ when the fast tier is under pressure. This migration
+ is performed before swap. It may move data to a NUMA
+ node that does not fall into the cpuset of the
+ allocating process which might be construed to violate
+ the guarantees of cpusets. This should not be enabled
+ on systems which need strict cpuset location
+ guarantees.
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 0aaf91b496e2..4ca025e2a77e 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -184,6 +184,8 @@ extern bool vma_migratable(struct vm_area_struct *vma);
extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
extern void mpol_put_task_policy(struct task_struct *);

+extern bool numa_demotion_enabled;
+
#else

struct mempolicy {};
@@ -292,5 +294,7 @@ static inline nodemask_t *policy_nodemask_current(gfp_t gfp)
{
return NULL;
}
+
+#define numa_demotion_enabled false
#endif /* CONFIG_NUMA */
#endif
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f58c38ea1e83..a0535b73697f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -3062,3 +3062,64 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
p += scnprintf(p, buffer + maxlen - p, ":%*pbl",
nodemask_pr_args(&nodes));
}
+
+bool numa_demotion_enabled = false;
+
+#ifdef CONFIG_SYSFS
+static ssize_t numa_demotion_enabled_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%s\n",
+ numa_demotion_enabled ? "true" : "false");
+}
+
+static ssize_t numa_demotion_enabled_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ if (!strncmp(buf, "true", 4) || !strncmp(buf, "1", 1))
+ numa_demotion_enabled = true;
+ else if (!strncmp(buf, "false", 5) || !strncmp(buf, "0", 1))
+ numa_demotion_enabled = false;
+ else
+ return -EINVAL;
+
+ return count;
+}
+
+static struct kobj_attribute numa_demotion_enabled_attr =
+ __ATTR(demotion_enabled, 0644, numa_demotion_enabled_show,
+ numa_demotion_enabled_store);
+
+static struct attribute *numa_attrs[] = {
+ &numa_demotion_enabled_attr.attr,
+ NULL,
+};
+
+static const struct attribute_group numa_attr_group = {
+ .attrs = numa_attrs,
+};
+
+static int __init numa_init_sysfs(void)
+{
+ int err;
+ struct kobject *numa_kobj;
+
+ numa_kobj = kobject_create_and_add("numa", mm_kobj);
+ if (!numa_kobj) {
+ pr_err("failed to create numa kobject\n");
+ return -ENOMEM;
+ }
+ err = sysfs_create_group(numa_kobj, &numa_attr_group);
+ if (err) {
+ pr_err("failed to register numa group\n");
+ goto delete_obj;
+ }
+ return 0;
+
+delete_obj:
+ kobject_put(numa_kobj);
+ return err;
+}
+subsys_initcall(numa_init_sysfs);
+#endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 60179903ed9e..fa59b1344e36 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -521,6 +521,8 @@ static long add_nr_deferred(long nr, struct shrinker *shrinker,

static bool can_demote(int nid, struct scan_control *sc)
{
+ if (!numa_demotion_enabled)
+ return false;
if (sc) {
if (sc->no_demotion)
return false;
@@ -531,8 +533,7 @@ static bool can_demote(int nid, struct scan_control *sc)
if (next_demotion_node(nid) == NUMA_NO_NODE)
return false;

- // FIXME: actually enable this later in the series
- return false;
+ return true;
}

static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
--
2.30.2

2021-07-21 21:12:51

by Zi Yan

Subject: Re: [PATCH -V11 4/9] mm/migrate: demote pages during reclaim

On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Dave Hansen <[email protected]>
>
> This is mostly derived from a patch from Yang Shi:
>
> https://lore.kernel.org/linux-mm/[email protected]/
>
> Add code to the reclaim path (shrink_page_list()) to "demote" data to
> another NUMA node instead of discarding the data. This always avoids the
> cost of I/O needed to read the page back in and sometimes avoids the
> writeout cost when the page is dirty.
>
> A second pass through shrink_page_list() will be made if any demotions
> fail. This essentially falls back to normal reclaim behavior in the case
> that demotions fail. Previous versions of this patch may have simply
> failed to reclaim pages which were eligible for demotion but were unable
> to be demoted in practice.
>
> For some cases, for example, MADV_PAGEOUT, the pages are always discarded
> instead of demoted to follow the kernel API definition. Because
> MADV_PAGEOUT is defined as freeing specified pages regardless in which
> tier they are.
>
> Note: This just adds the start of infrastructure for migration. It is
> actually disabled next to the FIXME in can_demote().
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Reviewed-by: Wei Xu <[email protected]>
> Reviewed-by: Oscar Salvador <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Thelen <[email protected]>
> Cc: Keith Busch <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> include/linux/migrate.h | 9 ++++
> include/trace/events/migrate.h | 3 +-
> mm/vmscan.c | 85 ++++++++++++++++++++++++++++++++++
> 3 files changed, 96 insertions(+), 1 deletion(-)
>

LGTM. Reviewed-by: Zi Yan <[email protected]>


Best Regards,
Yan, Zi



2021-07-21 21:14:57

by Zi Yan

Subject: Re: [PATCH -V11 5/9] mm/vmscan: add page demotion counter

On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Yang Shi <[email protected]>
>
> Account the number of demoted pages.
>
> Add pgdemote_kswapd and pgdemote_direct VM counters showed in
> /proc/vmstat.
>
> [ daveh:
> - __count_vm_events() a bit, and made them look at the THP
> size directly rather than getting data from migrate_pages()
> ]
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Yang Shi <[email protected]>
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Reviewed-by: Wei Xu <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Greg Thelen <[email protected]>
> Cc: Keith Busch <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> include/linux/vm_event_item.h | 2 ++
> mm/vmscan.c | 5 +++++
> mm/vmstat.c | 2 ++
> 3 files changed, 9 insertions(+)
>

LGTM. Reviewed-by: Zi Yan <[email protected]>



Best Regards,
Yan, Zi



2021-07-21 21:17:27

by Zi Yan

Subject: Re: [PATCH -V11 6/9] mm/vmscan: add helper for querying ability to age anonymous pages

On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Dave Hansen <[email protected]>
>
> Anonymous pages are kept on their own LRU(s). These lists could
> theoretically always be scanned and maintained. But, without swap, there
> is currently nothing the kernel can *do* with the results of a scanned,
> sorted LRU for anonymous pages.
>
> A check for '!total_swap_pages' currently serves as a valid check as to
> whether anonymous LRUs should be maintained. However, another method will
> be added shortly: page demotion.
>
> Abstract out the 'total_swap_pages' checks into a helper, give it a
> logically significant name, and check for the possibility of page
> demotion.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Reviewed-by: Greg Thelen <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Keith Busch <[email protected]>
> Cc: Yang Shi <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> mm/vmscan.c | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
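
Based on that description, the helper is a thin wrapper over the old
swap check plus the new demotion test. A sketch of the assumed shape
(names follow the patch text; the exact signature may differ):

/*
 * Sketch: anon LRUs are worth scanning and aging only if the kernel
 * can later do something with the result, i.e. swap the pages out or
 * demote them to a slower memory tier.
 */
static inline bool can_age_anon_pages(struct pglist_data *pgdat,
				      struct scan_control *sc)
{
	/* The old check: aging is useful if we can swap. */
	if (total_swap_pages > 0)
		return true;

	/* The new possibility: page demotion. */
	return can_demote(pgdat->node_id, sc);
}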

LGTM. Reviewed-by: Zi Yan <[email protected]>


Best Regards,
Yan, Zi



2021-07-21 21:23:43

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH -V11 7/9] mm/vmscan: Consider anonymous pages without swap

On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Keith Busch <[email protected]>
>
> Reclaim anonymous pages if a migration path is available now that demotion
> provides a non-swap recourse for reclaiming anon pages.
>
> Note that this check is subtly different from the can_age_anon_pages()
> checks. This mechanism checks whether a specific page in a specific
> context can actually be reclaimed, given current swap space and cgroup
> limits.
>
> can_age_anon_pages() is a much simpler and more preliminary check which
> just says whether there is a possibility of future reclaim.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Cc: Keith Busch <[email protected]>
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Thelen <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Yang Shi <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> mm/vmscan.c | 34 ++++++++++++++++++++++++++++++----
> 1 file changed, 30 insertions(+), 4 deletions(-)
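
Concretely, the per-context check described here ends up with roughly
this shape (a sketch reconstructed from the description; the swap and
memcg helpers shown are assumptions, not quotes from the patch):

/*
 * Sketch: can this anon page be reclaimed right now, given the
 * current swap space, cgroup limits, and demotion paths?
 */
static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
					  int nid, struct scan_control *sc)
{
	if (memcg == NULL) {
		/* Global reclaim: any free swap at all will do. */
		if (get_nr_swap_pages() > 0)
			return true;
	} else {
		/* memcg reclaim: swap must be usable by this cgroup. */
		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
			return true;
	}

	/* No swap, but demotion can still reclaim this node's anon pages. */
	return can_demote(nid, sc);
}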

LGTM. Reviewed-by: Zi Yan <[email protected]>


Best Regards,
Yan, Zi



2021-07-21 21:40:16

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH -V11 8/9] mm/vmscan: never demote for memcg reclaim

On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Dave Hansen <[email protected]>
>
> Global reclaim aims to reduce the amount of memory used on a given node or
> set of nodes. Migrating pages to another node serves this purpose.
>
> memcg reclaim is different. Its goal is to reduce the total memory
> consumption of the entire memcg, across all nodes. Migration does not
> assist memcg reclaim because it just moves page contents between nodes
> rather than actually reducing memory consumption.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Suggested-by: Yang Shi <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Thelen <[email protected]>
> Cc: Keith Busch <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> mm/vmscan.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
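
The change amounts to one extra gate in the demotion predicate. A
sketch of the idea (cgroup_reclaim() stands in for however the reclaim
context is classified; simplified, not the literal diff):

/* Sketch: demotion relieves node-local pressure, not memcg limits. */
static bool can_demote(int nid, struct scan_control *sc)
{
	if (!numa_demotion_enabled)
		return false;

	/*
	 * Demotion only moves page contents to another node; it cannot
	 * reduce a memcg's total consumption, so it is pointless for
	 * memcg reclaim.
	 */
	if (sc && cgroup_reclaim(sc))
		return false;

	return next_demotion_node(nid) != NUMA_NO_NODE;
}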

LGTM. Reviewed-by: Zi Yan <[email protected]>

Should this be folded into Patch 4 when can_demote() is introduced?



Best Regards,
Yan, Zi



2021-07-21 21:59:41

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH -V11 8/9] mm/vmscan: never demote for memcg reclaim

On 7/21/21 2:38 PM, Zi Yan wrote:
> On 21 Jul 2021, at 2:39, Huang Ying wrote:
>> From: Dave Hansen <[email protected]>
>>
>> Global reclaim aims to reduce the amount of memory used on a
>> given node or set of nodes. Migrating pages to another node
>> serves this purpose.
>>
>> memcg reclaim is different. Its goal is to reduce the total
>> memory consumption of the entire memcg, across all nodes.
>> Migration does not assist memcg reclaim because it just moves
>> page contents between nodes rather than actually reducing memory
>> consumption.
...
> Should this be folded into Patch 4 when can_demote() is
> introduced?

I guess it could be. But, it's logically separate since it has its
own justification which is rather discrete.

I think it's best to keep it separate.

2021-07-21 22:02:03

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH -V11 8/9] mm/vmscan: never demote for memcg reclaim

On 21 Jul 2021, at 17:58, Dave Hansen wrote:

> On 7/21/21 2:38 PM, Zi Yan wrote:
>> On 21 Jul 2021, at 2:39, Huang Ying wrote:
>>> From: Dave Hansen <[email protected]>
>>>
>>> Global reclaim aims to reduce the amount of memory used on a
>>> given node or set of nodes. Migrating pages to another node
>>> serves this purpose.
>>>
>>> memcg reclaim is different. Its goal is to reduce the total
>>> memory consumption of the entire memcg, across all nodes.
>>> Migration does not assist memcg reclaim because it just moves
>>> page contents between nodes rather than actually reducing memory
>>> consumption.
> ...
>> Should this be folded into Patch 4 when can_demote() is
>> introduced?
>
> I guess it could be. But, it's logically separate since it has its
> own justification which is rather discrete.
>
> I think it's best to keep it separate.

Sure. I am OK with it.


Best Regards,
Yan, Zi



2021-07-21 22:07:59

by Yang Shi

[permalink] [raw]
Subject: Re: [PATCH -V11 8/9] mm/vmscan: never demote for memcg reclaim

On Wed, Jul 21, 2021 at 2:58 PM Dave Hansen <[email protected]> wrote:
>
> On 7/21/21 2:38 PM, Zi Yan wrote:
> > On 21 Jul 2021, at 2:39, Huang Ying wrote:
> >> From: Dave Hansen <[email protected]>
> >>
> >> Global reclaim aims to reduce the amount of memory used on a
> >> given node or set of nodes. Migrating pages to another node
> >> serves this purpose.
> >>
> >> memcg reclaim is different. Its goal is to reduce the total
> >> memory consumption of the entire memcg, across all nodes.
> >> Migration does not assist memcg reclaim because it just moves
> >> page contents between nodes rather than actually reducing memory
> >> consumption.
> ...
> > Should this be folded into Patch 4 when can_demote() is
> > introduced?
>
> I guess it could be. But, it's logically separate since it has its
> own justification which is rather discrete.
>
> I think it's best to keep it separate.

Yes, I agree.

2022-02-24 00:44:59

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

On 2/23/22 15:02, Abhishek Goel wrote:
> If needed, I will provide experiment results and traces that were used
> to conclude this.

It would be great if you can provide some more info. Even just a CPU
time profile would be helpful.

It would also be great to understand more about what "hotplug on power
systems" actually means. Is this a synthetic benchmark, or are actual
end-users running into this issue? Are entire nodes of CPUs going
offline? Or is this just doing an offline/online of CPU 22 in a 100-CPU
NUMA node?

2022-02-24 01:54:26

by Abhishek Goel

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

Hi Dave,


> From: Dave Hansen <[email protected]>
>
> Reclaim-based migration is attempting to optimize data placement in memory
> based on the system topology. If the system changes, so must the
> migration ordering.
>
> The implementation is conceptually simple and entirely unoptimized. On
> any memory or CPU hotplug events, assume that a node was added or removed
> and recalculate all migration targets. This ensures that the
> node_demotion[] array is always ready to be used in case the new reclaim
> mode is enabled.
>
> This recalculation is far from optimal, most glaringly in that it does
> not even attempt to figure out whether the hotplug event would have
> some *actual* effect on the demotion order. But, given the expected
> paucity of hotplug events, this should be fine.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Dave Hansen <[email protected]>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Reviewed-by: Zi Yan <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Thelen <[email protected]>
> Cc: Keith Busch <[email protected]>
> Cc: Yang Shi <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
> mm/migrate.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 89 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b7a40ab47648..a40c391f9ca7 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -49,6 +49,7 @@
> #include <linux/sched/mm.h>
> #include <linux/ptrace.h>
> #include <linux/oom.h>
> +#include <linux/memory.h>
>
> #include <asm/tlbflush.h>
>
> @@ -3057,6 +3058,7 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
> EXPORT_SYMBOL(migrate_vma_finalize);
> #endif /* CONFIG_DEVICE_PRIVATE */
>
> +#if defined(CONFIG_MEMORY_HOTPLUG)
> /* Disable reclaim-based migration. */
> static void __disable_all_migrate_targets(void)
> {
> @@ -3191,10 +3193,96 @@ static void __set_migration_target_nodes(void)
> /*
> * For callers that do not hold get_online_mems() already.
> */
> -__maybe_unused // <- temporary to prevent warnings during bisects
> static void set_migration_target_nodes(void)
> {
> get_online_mems();
> __set_migration_target_nodes();
> put_online_mems();
> }
> +
> +/*
> + * React to hotplug events that might affect the migration targets
> + * like events that online or offline NUMA nodes.
> + *
> + * The ordering is also currently dependent on which nodes have
> + * CPUs. That means we need CPU on/offline notification too.
> + */
> +static int migration_online_cpu(unsigned int cpu)
> +{
> + set_migration_target_nodes();
> + return 0;
> +}
> +
> +static int migration_offline_cpu(unsigned int cpu)
> +{
> + set_migration_target_nodes();
> + return 0;
> +}
> +
> +/*
> + * This leaves migrate-on-reclaim transiently disabled between
> + * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs
> + * whether reclaim-based migration is enabled or not, which
> + * ensures that the user can turn reclaim-based migration on or off at
> + * any time without needing to recalculate migration targets.
> + *
> + * These callbacks already hold get_online_mems(). That is why
> + * __set_migration_target_nodes() can be used as opposed to
> + * set_migration_target_nodes().
> + */
> +static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
> + unsigned long action, void *arg)
> +{
> + switch (action) {
> + case MEM_GOING_OFFLINE:
> + /*
> + * Make sure there are not transient states where
> + * an offline node is a migration target. This
> + * will leave migration disabled until the offline
> + * completes and the MEM_OFFLINE case below runs.
> + */
> + disable_all_migrate_targets();
> + break;
> + case MEM_OFFLINE:
> + case MEM_ONLINE:
> + /*
> + * Recalculate the target nodes once the node
> + * reaches its final state (online or offline).
> + */
> + __set_migration_target_nodes();
> + break;
> + case MEM_CANCEL_OFFLINE:
> + /*
> + * MEM_GOING_OFFLINE disabled all the migration
> + * targets. Reenable them.
> + */
> + __set_migration_target_nodes();
> + break;
> + case MEM_GOING_ONLINE:
> + case MEM_CANCEL_ONLINE:
> + break;
> + }
> +
> + return notifier_from_errno(0);
> +}
> +
> +static int __init migrate_on_reclaim_init(void)
> +{
> + int ret;
> +
> + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
> + migration_online_cpu,
> + migration_offline_cpu);
> + /*
> + * In the unlikely case that this fails, the automatic
> + * migration targets may become suboptimal for nodes
> + * where N_CPU changes. With such a small impact in a
> + * rare case, do not bother trying to do anything special.
> + */
> + WARN_ON(ret < 0);
> +
> + hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
> + return 0;
> +}
> +late_initcall(migrate_on_reclaim_init);
> +#endif /* CONFIG_MEMORY_HOTPLUG */

I intend to report an issue that is caused by this patch.
From the 5.14 to the 5.15 kernel, hotplug on power systems was observed
to take double the expected time. A git bisect between these two
kernels points to this patch as the one causing the issue.
I have verified from the cpu-hotplug callback trace that we are in fact
spending a lot of time in the migration_offline_cpu code path.
I see that there have been subsequent patches to optimize this node
demotion order update code, but those patches are already in the 5.15
kernel and the regression is still observed even with those
optimizations.
I have also recreated and observed the issue across systems with
different configs, and across different kernels containing this patch.
If needed, I will provide experiment results and traces that were used
to conclude this.

Regards,

- Abhishek

2022-02-25 00:48:02

by Abhishek Goel

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events


On 24/02/22 05:35, Dave Hansen wrote:
> On 2/23/22 15:02, Abhishek Goel wrote:
>> If needed, I will provide experiment results and traces that were used
>> to conclude this.
> It would be great if you can provide some more info. Even just a CPU
> time profile would be helpful.

Average total time taken for SMT=8 to SMT=1 in v5.14 : 20s

Average total time taken for SMT=8 to SMT=1 in v5.15 : 36s

(Observed on a system with 150+ CPUs)

>
> It would also be great to understand more about what "hotplug on power
> systems" actually means. Is this a synthetic benchmark, or are actual
> end-users running into this issue? Are entire nodes of CPUs going
> offline? Or is this just doing an offline/online of CPU 22 in a 100-CPU
> NUMA node?
No, this is not a synthetic benchmark. It can be recreated with
entire nodes of CPUs going offline, with the online/offline operations
performed by simple scripts. The times observed can also be verified
(for an individual CPU or for the entire system) from the CPU-hotplug
trace, which gives results consistent with what the scripts report.

2022-02-25 07:09:14

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

Hi, Abhishek,

Abhishek Goel <[email protected]> writes:

> On 24/02/22 05:35, Dave Hansen wrote:
>> On 2/23/22 15:02, Abhishek Goel wrote:
>>> If needed, I will provide experiment results and traces that were used
>>> to conclude this.
>> It would be great if you can provide some more info. Even just a CPU
>> time profile would be helpful.
>
> Average total time taken for SMT=8 to SMT=1 in v5.14 : 20s
>
> Average total time taken for SMT=8 to SMT=1 in v5.15 : 36s
>
> (Observed on a system with 150+ CPUs)

We have run into a memory hotplug regression before. Let's check
whether the problem is similar. Can you try the below debug patch?

Best Regards,
Huang, Ying

----------------------------8<------------------------------------------
From 500c0b53436b7a697ed5d77241abbc0d5d3cfc07 Mon Sep 17 00:00:00 2001
From: Huang Ying <[email protected]>
Date: Wed, 29 Sep 2021 10:57:19 +0800
Subject: [PATCH] mm/migrate: Debug CPU hotplug regression

Signed-off-by: "Huang, Ying" <[email protected]>
---
mm/migrate.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..c4805f15e616 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -3261,15 +3261,17 @@ static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
* The ordering is also currently dependent on which nodes have
* CPUs. That means we need CPU on/offline notification too.
*/
-static int migration_online_cpu(unsigned int cpu)
+static int migration_cpu_hotplug(unsigned int cpu)
{
- set_migration_target_nodes();
- return 0;
-}
+ static int nr_cpu_node_saved;
+ int nr_cpu_node;
+
+ nr_cpu_node = num_node_state(N_CPU);
+ if (nr_cpu_node != nr_cpu_node_saved) {
+ set_migration_target_nodes();
+ nr_cpu_node_saved = nr_cpu_node;
+ }

-static int migration_offline_cpu(unsigned int cpu)
-{
- set_migration_target_nodes();
return 0;
}

@@ -3283,7 +3285,7 @@ static int __init migrate_on_reclaim_init(void)
WARN_ON(!node_demotion);

ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
- NULL, migration_offline_cpu);
+ NULL, migration_cpu_hotplug);
/*
* In the unlikely case that this fails, the automatic
* migration targets may become suboptimal for nodes
@@ -3292,7 +3294,7 @@ static int __init migrate_on_reclaim_init(void)
*/
WARN_ON(ret < 0);
ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
- migration_online_cpu, NULL);
+ migration_cpu_hotplug, NULL);
WARN_ON(ret < 0);

hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
--
2.30.2

2022-02-25 09:46:05

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

On 2/24/22 15:37, Abhishek Goel wrote:
>
> On 24/02/22 05:35, Dave Hansen wrote:
>> On 2/23/22 15:02, Abhishek Goel wrote:
>>> If needed, I will provide experiment results and traces that were used
>>> to conclude this.
>> It would be great if you can provide some more info.  Even just a CPU
>> time profile would be helpful.
>
> Average total time taken for SMT=8 to SMT=1 in v5.14 : 20s
>
> Average total time taken for SMT=8 to SMT=1 in v5.15 : 36s
>
> (Observed on a system with 150+ CPUs)

I was kinda thinking of:

perf record / perf report

output. Not wall clock time. We need to know what the kernel is doing
during those extra 16 seconds.

2022-02-26 01:55:20

by Abhishek Goel

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

Hi Huang,

On 25/02/22 08:02, Huang, Ying wrote:
>
> We have run into a memory hotplug regression before. Let's check
> whether the problem is similar. Can you try the below debug patch?
>
> Best Regards,
> Huang, Ying
>
> ----------------------------8<------------------------------------------
> From 500c0b53436b7a697ed5d77241abbc0d5d3cfc07 Mon Sep 17 00:00:00 2001
> From: Huang Ying <[email protected]>
> Date: Wed, 29 Sep 2021 10:57:19 +0800
> Subject: [PATCH] mm/migrate: Debug CPU hotplug regression
>
> Signed-off-by: "Huang, Ying" <[email protected]>
> ---
> mm/migrate.c | 20 +++++++++++---------
> 1 file changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c7da064b4781..c4805f15e616 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -3261,15 +3261,17 @@ static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
> * The ordering is also currently dependent on which nodes have
> * CPUs. That means we need CPU on/offline notification too.
> */
> -static int migration_online_cpu(unsigned int cpu)
> +static int migration_cpu_hotplug(unsigned int cpu)
> {
> - set_migration_target_nodes();
> - return 0;
> -}
> + static int nr_cpu_node_saved;
> + int nr_cpu_node;
> +
> + nr_cpu_node = num_node_state(N_CPU);
> + if (nr_cpu_node != nr_cpu_node_saved) {
> + set_migration_target_nodes();
> + nr_cpu_node_saved = nr_cpu_node;
> + }
>
> -static int migration_offline_cpu(unsigned int cpu)
> -{
> - set_migration_target_nodes();
> return 0;
> }
>
> @@ -3283,7 +3285,7 @@ static int __init migrate_on_reclaim_init(void)
> WARN_ON(!node_demotion);
>
> ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
> - NULL, migration_offline_cpu);
> + NULL, migration_cpu_hotplug);
> /*
> * In the unlikely case that this fails, the automatic
> * migration targets may become suboptimal for nodes
> @@ -3292,7 +3294,7 @@ static int __init migrate_on_reclaim_init(void)
> */
> WARN_ON(ret < 0);
> ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
> - migration_online_cpu, NULL);
> + migration_cpu_hotplug, NULL);
> WARN_ON(ret < 0);
>
> hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
This works. I applied it on the 5.15 kernel and see no regression
compared to the 5.14 kernel.
So, have you posted this patch yet? Are there any plans to include this
or a similar patch?


2022-03-08 16:24:50

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

On Fri, Feb 25, 2022 at 10:32:20AM +0800, Huang, Ying wrote:
> -static int migration_online_cpu(unsigned int cpu)
> +static int migration_cpu_hotplug(unsigned int cpu)
> {
> - set_migration_target_nodes();
> - return 0;
> -}
> + static int nr_cpu_node_saved;
> + int nr_cpu_node;
> +
> + nr_cpu_node = num_node_state(N_CPU);
> + if (nr_cpu_node != nr_cpu_node_saved) {
> + set_migration_target_nodes();
> + nr_cpu_node_saved = nr_cpu_node;
> + }
>
> -static int migration_offline_cpu(unsigned int cpu)
> -{
> - set_migration_target_nodes();
> return 0;
> }

These callbacks feel like re-inventing the wheel.
We already have two functions that get called during cpu
online/offline and that set/clear N_CPU on the node properly.
That is exactly what we want, so what about the following (only
compile-tested):

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index db96e10eb8da..031af2bb71dc 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -48,6 +48,7 @@ int folio_migrate_mapping(struct address_space *mapping,
struct folio *newfolio, struct folio *folio, int extra_count);

extern bool numa_demotion_enabled;
+extern void set_migration_target_nodes(void);
#else

static inline void putback_movable_pages(struct list_head *l) {}
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..7847e4de01d7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -3190,7 +3190,7 @@ static void __set_migration_target_nodes(void)
/*
* For callers that do not hold get_online_mems() already.
*/
-static void set_migration_target_nodes(void)
+void set_migration_target_nodes(void)
{
get_online_mems();
__set_migration_target_nodes();
@@ -3254,47 +3254,13 @@ static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
return notifier_from_errno(0);
}

-/*
- * React to hotplug events that might affect the migration targets
- * like events that online or offline NUMA nodes.
- *
- * The ordering is also currently dependent on which nodes have
- * CPUs. That means we need CPU on/offline notification too.
- */
-static int migration_online_cpu(unsigned int cpu)
-{
- set_migration_target_nodes();
- return 0;
-}
-
-static int migration_offline_cpu(unsigned int cpu)
-{
- set_migration_target_nodes();
- return 0;
-}
-
static int __init migrate_on_reclaim_init(void)
{
- int ret;
-
node_demotion = kmalloc_array(nr_node_ids,
sizeof(struct demotion_nodes),
GFP_KERNEL);
WARN_ON(!node_demotion);

- ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
- NULL, migration_offline_cpu);
- /*
- * In the unlikely case that this fails, the automatic
- * migration targets may become suboptimal for nodes
- * where N_CPU changes. With such a small impact in a
- * rare case, do not bother trying to do anything special.
- */
- WARN_ON(ret < 0);
- ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
- migration_online_cpu, NULL);
- WARN_ON(ret < 0);
-
hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
return 0;
}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4057372745d0..0529a83c8f89 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -28,6 +28,7 @@
#include <linux/mm_inline.h>
#include <linux/page_ext.h>
#include <linux/page_owner.h>
+#include <linux/migrate.h>

#include "internal.h"

@@ -2043,7 +2044,12 @@ static void __init init_cpu_node_state(void)
static int vmstat_cpu_online(unsigned int cpu)
{
refresh_zone_stat_thresholds();
- node_set_state(cpu_to_node(cpu), N_CPU);
+
+ if (!node_state(cpu_to_node(cpu), N_CPU)) {
+ node_set_state(cpu_to_node(cpu), N_CPU);
+ set_migration_target_nodes();
+ }
+
return 0;
}

@@ -2066,6 +2072,8 @@ static int vmstat_cpu_dead(unsigned int cpu)
return 0;

node_clear_state(node, N_CPU);
+ set_migration_target_nodes();
+
return 0;
}

I think this is just easier and exactly meets the goal.

We could go even further and move the work left in
migrate_on_reclaim_init() to init_mm_internals().
(I __think__ we should be fine because there is no dependency
there, e.g. the notifier being set up somewhere later, after
init_mm_internals() has been called.)

--
Oscar Salvador
SUSE Labs

2022-03-08 18:49:59

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

On 3/8/22 02:27, Oscar Salvador wrote:
> @@ -2043,7 +2044,12 @@ static void __init init_cpu_node_state(void)
> static int vmstat_cpu_online(unsigned int cpu)
> {
> refresh_zone_stat_thresholds();
> - node_set_state(cpu_to_node(cpu), N_CPU);
> +
> + if (!node_state(cpu_to_node(cpu), N_CPU)) {
> + node_set_state(cpu_to_node(cpu), N_CPU);
> + set_migration_target_nodes();
> + }
> +
> return 0;
> }
>
> @@ -2066,6 +2072,8 @@ static int vmstat_cpu_dead(unsigned int cpu)
> return 0;
>
> node_clear_state(node, N_CPU);
> + set_migration_target_nodes();
> +
> return 0;
> }

Yeah, those callbacks do look like they're reinventing the wheel. This
is a much more direct way of doing it.

2022-03-08 19:39:00

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

On Tue, Mar 08, 2022 at 09:07:20AM -0800, Dave Hansen wrote:
> On 3/8/22 02:27, Oscar Salvador wrote:
> > @@ -2043,7 +2044,12 @@ static void __init init_cpu_node_state(void)
> > static int vmstat_cpu_online(unsigned int cpu)
> > {
> > refresh_zone_stat_thresholds();
> > - node_set_state(cpu_to_node(cpu), N_CPU);
> > +
> > + if (!node_state(cpu_to_node(cpu), N_CPU)) {
> > + node_set_state(cpu_to_node(cpu), N_CPU);
> > + set_migration_target_nodes();
> > + }
> > +
> > return 0;
> > }
> >
> > @@ -2066,6 +2072,8 @@ static int vmstat_cpu_dead(unsigned int cpu)
> > return 0;
> >
> > node_clear_state(node, N_CPU);
> > + set_migration_target_nodes();
> > +
> > return 0;
> > }
>
> Yeah, those callbacks do look like they're reinventing the wheel. This
> is a much more direct way of doing it.

Then let me play a bit more with it and I can cook a patch unless
someone feels strong against it.

Thanks

--
Oscar Salvador
SUSE Labs

2022-03-09 02:15:52

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -V11 2/9] mm/migrate: update node demotion order on hotplug events

Oscar Salvador <[email protected]> writes:

> On Tue, Mar 08, 2022 at 09:07:20AM -0800, Dave Hansen wrote:
>> On 3/8/22 02:27, Oscar Salvador wrote:
>> > @@ -2043,7 +2044,12 @@ static void __init init_cpu_node_state(void)
>> > static int vmstat_cpu_online(unsigned int cpu)
>> > {
>> > refresh_zone_stat_thresholds();
>> > - node_set_state(cpu_to_node(cpu), N_CPU);
>> > +
>> > + if (!node_state(cpu_to_node(cpu), N_CPU)) {
>> > + node_set_state(cpu_to_node(cpu), N_CPU);
>> > + set_migration_target_nodes();
>> > + }
>> > +
>> > return 0;
>> > }
>> >
>> > @@ -2066,6 +2072,8 @@ static int vmstat_cpu_dead(unsigned int cpu)
>> > return 0;
>> >
>> > node_clear_state(node, N_CPU);
>> > + set_migration_target_nodes();
>> > +
>> > return 0;
>> > }
>>
>> Yeah, those callbacks do look like they're reinventing the wheel. This
>> is a much more direct way of doing it.
>
> Then let me play a bit more with it and I can cook a patch unless
> someone feels strong against it.

This looks good to me, Thanks!

Best Regards,
Huang, Ying