2021-04-01 18:57:38

by Dave Hansen

[permalink] [raw]
Subject: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

I'm resending this because I forgot to cc the mailing lists on the
post yesterday. Sorry for the noise. Please reply to this series.

The full series is also available here:

https://github.com/hansendc/linux/tree/automigrate-20210331

which also includes some vm.zone_reclaim_mode sysctl ABI fixup
prerequisites:

https://github.com/hansendc/linux/commit/18daad8f0181a2da57cb43e595303c2ef5bd7b6e
https://github.com/hansendc/linux/commit/a873f3b6f250581072ab36f2735a3aa341e36705

There are no major changes since the last post.

--

We're starting to see systems with more and more kinds of memory such
as Intel's implementation of persistent memory.

Let's say you have a system with some DRAM and some persistent memory.
Today, once DRAM fills up, reclaim will start and some of the DRAM
contents will be thrown out. Allocations will, at some point, start
falling over to the slower persistent memory.

That has two nasty properties. First, the newer allocations can end
up in the slower persistent memory. Second, reclaimed data in DRAM
are just discarded even if there are gobs of space in persistent
memory that could be used.

This set implements a solution to these problems. At the end of the
reclaim process in shrink_page_list() just before the last page
refcount is dropped, the page is migrated to persistent memory instead
of being dropped.
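
Conceptually, the extra step looks something like the sketch below.
This is a simplified illustration written against the stock
migrate_pages() API rather than the literal code in the series; the
real demote_page_list() also splices pages that fail to migrate back
onto the list so they go through normal reclaim:

static int demote_page_list_sketch(struct list_head *demote_pages,
                                   struct pglist_data *pgdat)
{
        /* Where do pages reclaimed from this node get demoted to? */
        int target_nid = next_demotion_node(pgdat->node_id);

        if (list_empty(demote_pages) || target_nid == NUMA_NO_NODE)
                return 0;

        /* Move the about-to-be-freed pages to the slower tier. */
        return migrate_pages(demote_pages, alloc_demote_page, NULL,
                             target_nid, MIGRATE_ASYNC, MR_DEMOTION);
}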

While I've talked about a DRAM/PMEM pairing, this approach would
function in any environment where memory tiers exist.

This is not perfect. It "strands" pages in slower memory and never
brings them back to fast DRAM. Huang Ying has follow-on work which
repurposes autonuma to promote hot pages back to DRAM.

This is also all based on an upstream mechanism that allows
persistent memory to be onlined and used as if it were volatile:

http://lkml.kernel.org/r/[email protected]

== Open Issues ==

* Pages under memory policies and cpusets that, for instance, restrict
allocations to DRAM can still be demoted to PMEM whenever the system
opts in to this new mechanism. A cgroup-level API to opt in to or opt
out of these migrations will likely be required as a follow-on.
* Could be more aggressive about where anon LRU scanning occurs
since it no longer necessarily involves I/O. get_scan_count()
for instance says: "If we have no swap space, do not bother
scanning anon pages"

--

Documentation/admin-guide/sysctl/vm.rst | 12 +
include/linux/migrate.h | 14 +-
include/linux/swap.h | 3 +-
include/linux/vm_event_item.h | 2 +
include/trace/events/migrate.h | 3 +-
include/uapi/linux/mempolicy.h | 1 +
mm/compaction.c | 3 +-
mm/gup.c | 3 +-
mm/internal.h | 5 +
mm/memory-failure.c | 4 +-
mm/memory_hotplug.c | 4 +-
mm/mempolicy.c | 8 +-
mm/migrate.c | 315 +++++++++++++++++++++++-
mm/page_alloc.c | 11 +-
mm/vmscan.c | 158 +++++++++++-
mm/vmstat.c | 2 +
16 files changed, 520 insertions(+), 28 deletions(-)

--

Changes since (automigrate-20210304):
* Add ack/review tags
* Remove duplicate synchronize_rcu() call

Changes since (automigrate-20210122):
* move from GFP_HIGHUSER -> GFP_HIGHUSER_MOVABLE since pages *are*
movable.
* Separate out helpers that check for being able to reclaim anonymous
pages versus being able to meaningfully scan the anon LRU.

Changes since (automigrate-20200818):
* Fall back to normal reclaim when demotion fails
* Fix some compile issues when page migration and NUMA are off

Changes since (automigrate-20201007):
* separate out checks for "can scan anon LRU" from "can actually
swap anon pages right now". Previous series conflated them
and may have been overly aggressive scanning LRU
* add MR_DEMOTION to tracepoint header
* remove unnecessary hugetlb page check

Changes since (https://lwn.net/Articles/824830/):
* Use higher-level migrate_pages() API approach from Yang Shi's
earlier patches.
* made sure to actually check node_reclaim_mode's new bit
* disabled migration entirely before introducing RECLAIM_MIGRATE
* Replace GFP_NOWAIT with explicit __GFP_KSWAPD_RECLAIM and
comment why we want that.
* Comment on the effects of the logic that keeps multiple source nodes
from sharing target nodes

Cc: Yang Shi <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: osalvador <[email protected]>
Cc: Wei Xu <[email protected]>


2021-04-01 18:58:32

by Dave Hansen

[permalink] [raw]
Subject: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events


From: Dave Hansen <[email protected]>

Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology. If the system changes, so must
the migration ordering.

The implementation is conceptually simple and entirely unoptimized.
On any memory or CPU hotplug events, assume that a node was added or
removed and recalculate all migration targets. This ensures that the
node_demotion[] array is always ready to be used in case the new
reclaim mode is enabled.

This recalculation is far from optimal, most glaringly in that it does
not even attempt to figure out whether the hotplug event would have any
*actual* effect on the demotion order. But, given the expected
paucity of hotplug events, this should be fine.

=== What does RCU provide? ===

Imagine a simple loop which walks down the demotion path looking
for the last node:

terminal_node = start_node;
while (node_demotion[terminal_node] != NUMA_NO_NODE) {
terminal_node = node_demotion[terminal_node];
}

The initial values are:

node_demotion[0] = 1;
node_demotion[1] = NUMA_NO_NODE;

and are updated to:

node_demotion[0] = NUMA_NO_NODE;
node_demotion[1] = 0;

What guarantees that the loop did not observe:

node_demotion[0] = 1;
node_demotion[1] = 0;

and would loop forever?

With RCU, an rcu_read_lock()/rcu_read_unlock() pair can be placed around
the loop. Since the write side does a synchronize_rcu(), any loop
that observed the old contents is known to be complete after the
synchronize_rcu() has completed.

RCU, combined with disable_all_migrate_targets(), ensures that
the old migration state is not visible by the time
__set_migration_target_nodes() is called.

=== What does READ_ONCE() provide? ===

READ_ONCE() forbids the compiler from merging or reordering
successive reads of node_demotion[]. This ensures that any
updates are *eventually* observed.

Consider the above loop again. The compiler could theoretically
read the entirety of node_demotion[] into local storage
(registers) and never go back to memory, and *permanently*
observe bad values for node_demotion[].
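
Putting the two together, the reader side of the example loop above
would look roughly like this (illustrative only, not a verbatim
excerpt from the patch):

int node = start_node, next;

rcu_read_lock();
while ((next = READ_ONCE(node_demotion[node])) != NUMA_NO_NODE)
        node = next;
rcu_read_unlock();
/* 'node' now holds the terminal node of the demotion path. */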

Note: RCU does not provide any universal compiler-ordering
guarantees:

https://lore.kernel.org/lkml/[email protected]/

Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Yang Shi <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: osalvador <[email protected]>

--

Changes since 20210302:
* remove duplicate synchronize_rcu()
---

b/mm/migrate.c | 152 ++++++++++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 129 insertions(+), 23 deletions(-)

diff -puN mm/migrate.c~enable-numa-demotion mm/migrate.c
--- a/mm/migrate.c~enable-numa-demotion 2021-03-31 15:17:13.056000258 -0700
+++ b/mm/migrate.c 2021-03-31 15:17:13.062000258 -0700
@@ -49,6 +49,7 @@
#include <linux/sched/mm.h>
#include <linux/ptrace.h>
#include <linux/oom.h>
+#include <linux/memory.h>

#include <asm/tlbflush.h>

@@ -1198,8 +1199,12 @@ out:
*/

/*
- * Writes to this array occur without locking. READ_ONCE()
- * is recommended for readers to ensure consistent reads.
+ * Writes to this array occur without locking. Cycles are
+ * not allowed: Node X demotes to Y which demotes to X...
+ *
+ * If multiple reads are performed, a single rcu_read_lock()
+ * must be held over all reads to ensure that no cycles are
+ * observed.
*/
static int node_demotion[MAX_NUMNODES] __read_mostly =
{[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};
@@ -1215,13 +1220,22 @@ static int node_demotion[MAX_NUMNODES] _
*/
int next_demotion_node(int node)
{
+ int target;
+
/*
- * node_demotion[] is updated without excluding
- * this function from running. READ_ONCE() avoids
- * reading multiple, inconsistent 'node' values
- * during an update.
+ * node_demotion[] is updated without excluding this
+ * function from running. RCU doesn't provide any
+ * compiler barriers, so the READ_ONCE() is required
+ * to avoid compiler reordering or read merging.
+ *
+ * Make sure to use RCU over entire code blocks if
+ * node_demotion[] reads need to be consistent.
*/
- return READ_ONCE(node_demotion[node]);
+ rcu_read_lock();
+ target = READ_ONCE(node_demotion[node]);
+ rcu_read_unlock();
+
+ return target;
}

/*
@@ -3226,8 +3240,9 @@ void migrate_vma_finalize(struct migrate
EXPORT_SYMBOL(migrate_vma_finalize);
#endif /* CONFIG_DEVICE_PRIVATE */

+#if defined(CONFIG_MEMORY_HOTPLUG)
/* Disable reclaim-based migration. */
-static void disable_all_migrate_targets(void)
+static void __disable_all_migrate_targets(void)
{
int node;

@@ -3235,6 +3250,25 @@ static void disable_all_migrate_targets(
node_demotion[node] = NUMA_NO_NODE;
}

+static void disable_all_migrate_targets(void)
+{
+ __disable_all_migrate_targets();
+
+ /*
+ * Ensure that the "disable" is visible across the system.
+ * Readers will see either a combination of before+disable
+ * state or disable+after. They will never see before and
+ * after state together.
+ *
+ * The before+after state together might have cycles and
+ * could cause readers to do things like loop until this
+ * function finishes. This ensures they can only see a
+ * single "bad" read and would, for instance, only loop
+ * once.
+ */
+ synchronize_rcu();
+}
+
/*
* Find an automatic demotion target for 'node'.
* Failing here is OK. It might just indicate
@@ -3297,20 +3331,6 @@ static void __set_migration_target_nodes
disable_all_migrate_targets();

/*
- * Ensure that the "disable" is visible across the system.
- * Readers will see either a combination of before+disable
- * state or disable+after. They will never see before and
- * after state together.
- *
- * The before+after state together might have cycles and
- * could cause readers to do things like loop until this
- * function finishes. This ensures they can only see a
- * single "bad" read and would, for instance, only loop
- * once.
- */
- smp_wmb();
-
- /*
* Allocations go close to CPUs, first. Assume that
* the migration path starts at the nodes with CPUs.
*/
@@ -3347,10 +3367,96 @@ again:
/*
* For callers that do not hold get_online_mems() already.
*/
-__maybe_unused // <- temporay to prevent warnings during bisects
static void set_migration_target_nodes(void)
{
get_online_mems();
__set_migration_target_nodes();
put_online_mems();
}
+
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs. That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+ set_migration_target_nodes();
+ return 0;
+}
+
+/*
+ * This leaves migrate-on-reclaim transiently disabled between
+ * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs
+ * whether reclaim-based migration is enabled or not, which
+ * ensures that the user can turn reclaim-based migration on at
+ * any time without needing to recalculate migration targets.
+ *
+ * These callbacks already hold get_online_mems(). That is why
+ * __set_migration_target_nodes() can be used as opposed to
+ * set_migration_target_nodes().
+ */
+static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+ unsigned long action, void *arg)
+{
+ switch (action) {
+ case MEM_GOING_OFFLINE:
+ /*
+ * Make sure there are not transient states where
+ * an offline node is a migration target. This
+ * will leave migration disabled until the offline
+ * completes and the MEM_OFFLINE case below runs.
+ */
+ disable_all_migrate_targets();
+ break;
+ case MEM_OFFLINE:
+ case MEM_ONLINE:
+ /*
+ * Recalculate the target nodes once the node
+ * reaches its final state (online or offline).
+ */
+ __set_migration_target_nodes();
+ break;
+ case MEM_CANCEL_OFFLINE:
+ /*
+ * MEM_GOING_OFFLINE disabled all the migration
+ * targets. Reenable them.
+ */
+ __set_migration_target_nodes();
+ break;
+ case MEM_GOING_ONLINE:
+ case MEM_CANCEL_ONLINE:
+ break;
+ }
+
+ return notifier_from_errno(0);
+}
+
+static int __init migrate_on_reclaim_init(void)
+{
+ int ret;
+
+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
+ migration_online_cpu,
+ migration_offline_cpu);
+ /*
+ * In the unlikely case that this fails, the automatic
+ * migration targets may become suboptimal for nodes
+ * where N_CPU changes. With such a small impact in a
+ * rare case, do not bother trying to do anything special.
+ */
+ WARN_ON(ret < 0);
+
+ hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+ return 0;
+}
+late_initcall(migrate_on_reclaim_init);
+#endif /* CONFIG_MEMORY_HOTPLUG */
_

2021-04-08 09:54:52

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On Thu, Apr 01, 2021 at 11:32:21AM -0700, Dave Hansen wrote:
>
> From: Dave Hansen <[email protected]>
>
> Reclaim-based migration is attempting to optimize data placement in
> memory based on the system topology. If the system changes, so must
> the migration ordering.
>
> The implementation is conceptually simple and entirely unoptimized.
> On any memory or CPU hotplug events, assume that a node was added or
> removed and recalculate all migration targets. This ensures that the
> node_demotion[] array is always ready to be used in case the new
> reclaim mode is enabled.
>
> This recalculation is far from optimal, most glaringly in that it does
> not even attempt to figure out whether the hotplug event would have any
> *actual* effect on the demotion order. But, given the expected
> paucity of hotplug events, this should be fine.
>
> === What does RCU provide? ===
>
> Imagine a simple loop which walks down the demotion path looking
> for the last node:
>
> terminal_node = start_node;
> while (node_demotion[terminal_node] != NUMA_NO_NODE) {
> terminal_node = node_demotion[terminal_node];
> }
>
> The initial values are:
>
> node_demotion[0] = 1;
> node_demotion[1] = NUMA_NO_NODE;
>
> and are updated to:
>
> node_demotion[0] = NUMA_NO_NODE;
> node_demotion[1] = 0;
>
> What guarantees that the loop did not observe:
>
> node_demotion[0] = 1;
> node_demotion[1] = 0;
>
> and would loop forever?
>
> With RCU, a rcu_read_lock/unlock() can be placed around the
> loop. Since the write side does a synchronize_rcu(), the loop
> that observed the old contents is known to be complete after the
> synchronize_rcu() has completed.
>
> RCU, combined with disable_all_migrate_targets(), ensures that
> the old migration state is not visible by the time
> __set_migration_target_nodes() is called.
>
> === What does READ_ONCE() provide? ===
>
> READ_ONCE() forbids the compiler from merging or reordering
> successive reads of node_demotion[]. This ensures that any
> updates are *eventually* observed.
>
> Consider the above loop again. The compiler could theoretically
> read the entirety of node_demotion[] into local storage
> (registers) and never go back to memory, and *permanently*
> observe bad values for node_demotion[].
>
> Note: RCU does not provide any universal compiler-ordering
> guarantees:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> Signed-off-by: Dave Hansen <[email protected]>
> Reviewed-by: Yang Shi <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: osalvador <[email protected]>
>

...

> +#if defined(CONFIG_MEMORY_HOTPLUG)

I am not really into PMEM, and I do not know whether we need
CONFIG_MEMORY_HOTPLUG in order to have such memory on the system.
If so, the following can be partly ignored.

I think that you either want to check CONFIG_MEMORY_HOTPLUG +
CONFIG_HOTPLUG_CPU, or just not put it under any config dependency.

The thing is that migrate_on_reclaim_init() will only be called if
we have CONFIG_MEMORY_HOTPLUG, and when we do not have that (but we do have
CONFIG_HOTPLUG_CPU) the calls to set_migration_target_nodes() won't be
made when the system brings up the CPUs during the boot phase,
which means the node_demotion[] list won't be initialized.

But this brings me to the next point.

From a conceptual point of view, I think you want to build the
node_demotion[] list regardless of whether we support CPU or
MEMORY hotplug.

Now, in case we support CPU or MEMORY hotplug, we do want to be able to rebuild
the list, e.g., in case NUMA nodes become cpu-less or memory-less.

On x86_64, HOTPLUG_CPU is enabled by default if SMP, the same for
MEMORY_HOTPLUG, but I am not sure about other archs.
Can we have !HOTPLUG_CPU && MEMORY_HOTPLUG, or !MEMORY_HOTPLUG &&
HOTPLUG_CPU? I do not really know, but I think you should be careful
about that.

If this were my call, I would:

- Not place the burden of initializing the node_demotion[] list in the CPU
hotplug boot phase (or, if so, be careful, because if I disable
MEMORY_HOTPLUG I end up with no node_demotion[] list)
- Differentiate between migration_{online,offline}_cpu and
migrate_on_reclaim_callback() and place them under their respective
config dependencies, as in the sketch below.
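
Something along these lines, just to illustrate what I mean (a rough,
untested sketch, not a proposed patch):

#ifdef CONFIG_HOTPLUG_CPU
static int migration_online_cpu(unsigned int cpu)
{
        set_migration_target_nodes();
        return 0;
}
#endif /* CONFIG_HOTPLUG_CPU */

#ifdef CONFIG_MEMORY_HOTPLUG
static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
                                                 unsigned long action, void *arg)
{
        /* memory online/offline handling as in the patch */
        return notifier_from_errno(0);
}
#endif /* CONFIG_MEMORY_HOTPLUG */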

But I might be missing some details so I might be off somewhere.

Another thing that caught my eye is that we are calling
set_migration_target_nodes() for every CPU the system brings up at boot
phase. I know systems with *lots* of CPUs.
I am not sure whether we have a mechanism to delay that until all CPUs
that are meant to be online are online (after boot)?
That's probably happening in wonderland, but I was just thinking out loud.

(Of course the same happens with memory_hotplug ACPI operations.
All it takes is some qemu handling.)

--
Oscar Salvador
SUSE L3

2021-04-09 10:17:29

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On Thu, Apr 08, 2021 at 11:52:51AM +0200, Oscar Salvador wrote:
> I am not really into PMEM, and I do not know whether we need
> CONFIG_MEMORY_HOTPLUG in order to have such memory on the system.
> If so, the following can be partly ignored.

Ok, I refreshed my memory with [1].
From that, it seems that in order to use PMEM as RAM we need CONFIG_MEMORY_HOTPLUG.
But is that always the case? Can it happen that in some scenario PMEM comes ready
to use and we do not need the hotplug trick?

Anyway, I would still like to clarify the state of HOTPLUG_CPU.
On x86_64, HOTPLUG_CPU and MEMORY_HOTPLUG are tied together by SMP, but on arm64
one can have MEMORY_HOTPLUG without having picked HOTPLUG_CPU.

My point is that we might want to put the callback functions and the callback
registration for CPU hotplug under their own HOTPLUG_CPU guard, instead of guarding
them in the same MEMORY_HOTPLUG block, to make it clearer?

--
Oscar Salvador
SUSE L3

2021-04-09 10:17:40

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On Fri, Apr 09, 2021 at 12:14:04PM +0200, Oscar Salvador wrote:
> On Thu, Apr 08, 2021 at 11:52:51AM +0200, Oscar Salvador wrote:
> > I am not really into PMEM, and I do not know whether we need
> > CONFIG_MEMORY_HOTPLUG in order to have such memory on the system.
> > If so, the following can be partly ignored.
>
> Ok, I refreshed my memory with [1].

Bleh, [1] being https://lore.kernel.org/linux-mm/[email protected]/

--
Oscar Salvador
SUSE L3

2021-04-09 19:00:57

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On 09.04.21 12:14, Oscar Salvador wrote:
> On Thu, Apr 08, 2021 at 11:52:51AM +0200, Oscar Salvador wrote:
>> I am not really into PMEM, and I do not know whether we need
>> CONFIG_MEMORY_HOTPLUG in order to have such memory on the system.
>> If so, the following can be partly ignored.
>
> Ok, I refreshed my memory with [1].
> From that, it seems that in order to use PMEM as RAM we need CONFIG_MEMORY_HOTPLUG.
> But is that always the case? Can it happen that in some scenario PMEM comes ready
> to use and we do not need the hotplug trick?

The only way to add more System RAM is via add_memory() and friends like
add_memory_driver_managed(). These all require CONFIG_MEMORY_HOTPLUG.

Memory ballooning is a different case, but there we're only adjusting
the managed page counters.

--
Thanks,

David / dhildenb

2021-04-12 07:25:36

by Oscar Salvador

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On Fri, Apr 09, 2021 at 08:59:21PM +0200, David Hildenbrand wrote:

> The only way to add more System RAM is via add_memory() and friends like
> add_memory_driver_managed(). These all require CONFIG_MEMORY_HOTPLUG.

Yeah, my point was more about whether PMEM can come in a way such that it does
not have to be hotplugged, but is functional by default (as RAM).
But after having read all the papers out there, I do not think that that is possible.

--
Oscar Salvador
SUSE L3

2021-04-12 09:47:32

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH 03/10] mm/migrate: update node demotion order during hotplug events

On 12.04.21 09:19, Oscar Salvador wrote:
> On Fri, Apr 09, 2021 at 08:59:21PM +0200, David Hildenbrand wrote:
>
>> The only way to add more System RAM is via add_memory() and friends like
>> add_memory_driver_managed(). These all require CONFIG_MEMORY_HOTPLUG.
>
> Yeah, my point was more about whether PMEM can come in a way such that it does
> not have to be hotplugged, but is functional by default (as RAM).
> But after having read all the papers out there, I do not think that that is possible.
>

You mean e.g., configuring in the BIOS/firmware how an NVDIMM will get
exposed to the OS (pmem vs. RAM). I once heard something about that, not
sure if it's real. But from Linux' perspective, it would simply be
System RAM and it would get treated like that.

--
Thanks,

David / dhildenb

2021-04-16 12:52:04

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

Hi,
I am really sorry to jump into this train sooo late. I have quickly
glanced through the series and I have some questions/concerns. Let me
express them here rather than in specific patches.

First of all I do think that demotion is a useful way to balance the
memory in general. And that is not really bound to PMEM equipped
systems. There are larger NUMA machines which are not trivial to
partition and our existing NUMA APIs are far from ideal to help with
that.

I do appreciate that the whole thing is an opt in because this might
break workloads which are careful with the placement. I am not sure
there is a way to handle constraints in an optimal way, if that is
possible at all in some cases (e.g. do we have a way to track a page to
its cpuset resp. task mempolicy in all cases?).

The cover letter focuses on usecases but it doesn't really provide a
high-level view of the design, so let me try to lay it down here (let's
see whether I missed something important).
- The demotion order defines a very simple fallback to a single node
based on proximity, but cycles are not allowed in the fallback
mask.
I have to confess that I haven't grasped the initialization
completely. There is a nice comment explaining a 2 socket system with
3 different NUMA nodes attached to it with one node being terminal.
This is OK if the terminal node is PMEM, but how does that fit into usual
NUMA setups? E.g.:
4 nodes each with its set of CPUs
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10
Do I get it right that Node 3 would be terminal?
- The demotion is controlled by node_reclaim_mode but unlike other modes
it applies to both direct and kswapd reclaims.
I do not see that explained anywhere though.
- The demotion is implemented at shrink_page_list level which migrates
pages in the first round and then falls back to the regular reclaim
when migration fails. This means that the reclaim context
(PF_MEMALLOC) will allocate memory so it has access to full memory
reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
mask, which looks like a bug rather than an intention. Btw. using
GFP_NOWAIT in the allocation callback would make things clearer
IMO.
- Memcg reclaim is excluded from all this because it is not NUMA aware
which makes sense to me.
- Anonymous pages are a bit tricky because they can be demoted even when
they cannot be reclaimed due to no (or no available) swap storage.
Unless I have missed something, the second round will try to reclaim
them even if the latter is true, and I am not sure this is completely OK.

I hope I've captured all important parts. There are some more details
but they do not seem that important.

I am still trying to digest the whole thing but at least jamming
node_reclaim logic into kswapd seems strange to me. Need to think more
about that though.

Btw. do you have any numbers from running this with some real world
workload?
--
Michal Hocko
SUSE Labs

2021-04-16 15:07:43

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

On Fri 16-04-21 07:26:43, Dave Hansen wrote:
> On 4/16/21 5:35 AM, Michal Hocko wrote:
> > I have to confess that I haven't grasped the initialization
> > completely. There is a nice comment explaining a 2 socket system with
> > 3 different NUMA nodes attached to it with one node being terminal.
> > This is OK if the terminal node is PMEM but how that fits into usual
> > NUMA setups. E.g.
> > 4 nodes each with its set of CPUs
> > node distances:
> > node 0 1 2 3
> > 0: 10 20 20 20
> > 1: 20 10 20 20
> > 2: 20 20 10 20
> > 3: 20 20 20 10
> > Do I get it right that Node 3 would be terminal?
>
> Yes, I think Node 3 would end up being the terminal node in that setup.
>
> That said, I'm not sure how much I expect folks to use this on
> traditional, non-tiered setups. It's also hard to argue what the
> migration order *should* be when all the nodes are uniform.

Well, they are not really uniform. The access latencies differ and I can
imagine that spreading page cache to a distant node might just be much
better than the IO involved in the refault.

On the other hand I do understand that restricting the feature to CPU
less NUMA setups is quite sane to give us a better understanding of how
this can be used and improve on top. Maybe we will learn that we want to
have the demotion path admin controlable (on the system level or maybe
even more fine grained on the memcg/cpuset).

[...]
> > - The demotion is implemented at shrink_page_list level which migrates
> > pages in the first round and then falls back to the regular reclaim
> > when migration fails. This means that the reclaim context
> > (PF_MEMALLOC) will allocate memory so it has access to full memory
> > reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
> > mask, which looks like a bug rather than an intention. Btw. using
> > GFP_NOWAIT in the allocation callback would make more things clear
> > IMO.
>
> Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.
>
> GFP_NOWAIT _seems_ like it will work. I'll give it a shot.

Let me clarify a bit. The slow path does involve __gfp_pfmemalloc_flags
before bailing out for non sleeping allocation. So you would need both.
Unless you want to involve reclaim on the target node while you are
reclaiming the origin node.

> > - Memcg reclaim is excluded from all this because it is not NUMA aware
> > which makes sense to me.
> > - Anonymous pages are a bit tricky because they can be demoted even when
> > they cannot be reclaimed due to no (or no available) swap storage.
> > Unless I have missed something, the second round will try to reclaim
> > them even if the latter is true, and I am not sure this is completely OK.
>
> What we want is something like this:
>
> Swap Space / Demotion OK -> Can Reclaim
> Swap Space / Demotion Off -> Can Reclaim
> Swap Full / Demotion OK -> Can Reclaim
> Swap Full / Demotion Off -> No Reclaim
>
> I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm
> misunderstanding what you are referring to, though. By "second round"
> did you mean when we do reclaim on a node which is a terminal node?

No, I mean the migration failure case where you splice back to the page
list to reclaim. In that round you do not demote and want to reclaim.
But a lack of swap space will make that page unreclaimable. I suspect
this would just work out fine but I am not sure off the top of my head.

> > I am still trying to digest the whole thing but at least jamming
> > node_reclaim logic into kswapd seems strange to me. Need to think more
> > about that though.
>
> I'm entirely open to other ways to do the opt-in. It seemed sane at the
> time, but I also understand the kswapd concern.
>
> > Btw. do you have any numbers from running this with some real work
> > workload?
>
> Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
> to come at this in two different ways:
>
> Some want to know how much DRAM they can replace by buying some PMEM.
> They tend to care about how much adding the (cheaper) PMEM slows them
> down versus (expensive) DRAM. They're making a cost-benefit call
>
> Others want to repurpose some PMEM they already have. They want to know
> how much using PMEM in this way will speed them up. They will basically
> take any speedup they can get.
>
> I ask because as a kernel developer with PMEM in my systems, I find the
> "I'll take what I can get" case more personally appealing. But, the
> business folks are much more keen on the "DRAM replacement" use. Do you
> have any thoughts on what you would like to see?

I was thinking about typical large in memory processing (e.g. in memory
databases) where the hot part of the working set is only a portion and
spilling over to a slower memory can still be beneficial because IO +
data preprocessing on cold data is much slower.
--
Michal Hocko
SUSE Labs

2021-04-16 15:56:49

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

On 4/16/21 5:35 AM, Michal Hocko wrote:
> I have to confess that I haven't grasped the initialization
> completely. There is a nice comment explaining a 2 socket system with
> 3 different NUMA nodes attached to it with one node being terminal.
> This is OK if the terminal node is PMEM but how that fits into usual
> NUMA setups. E.g.
> 4 nodes each with its set of CPUs
> node distances:
> node 0 1 2 3
> 0: 10 20 20 20
> 1: 20 10 20 20
> 2: 20 20 10 20
> 3: 20 20 20 10
> Do I get it right that Node 3 would be terminal?

Yes, I think Node 3 would end up being the terminal node in that setup.

That said, I'm not sure how much I expect folks to use this on
traditional, non-tiered setups. It's also hard to argue what the
migration order *should* be when all the nodes are uniform.

> - The demotion is controlled by node_reclaim_mode but unlike other modes
> it applies to both direct and kswapd reclaims.
> I do not see that explained anywhere though.

That's an interesting observation. Let me do a bit of research and I'll
update the Documentation/ and the changelog.

> - The demotion is implemented at shrink_page_list level which migrates
> pages in the first round and then falls back to the regular reclaim
> when migration fails. This means that the reclaim context
> (PF_MEMALLOC) will allocate memory so it has access to full memory
> reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
> mask, which looks like a bug rather than an intention. Btw. using
> GFP_NOWAIT in the allocation callback would make more things clear
> IMO.

Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.

GFP_NOWAIT _seems_ like it will work. I'll give it a shot.

> - Memcg reclaim is excluded from all this because it is not NUMA aware
> which makes sense to me.
> - Anonymous pages are a bit tricky because they can be demoted even when
> they cannot be reclaimed due to no (or no available) swap storage.
> Unless I have missed something the second round will try to reclaim
> them even if the latter is true, and I am not sure this is completely OK.

What we want is something like this:

Swap Space / Demotion OK -> Can Reclaim
Swap Space / Demotion Off -> Can Reclaim
Swap Full / Demotion OK -> Can Reclaim
Swap Full / Demotion Off -> No Reclaim

I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm
misunderstanding what you are referring to, though. By "second round"
did you mean when we do reclaim on a node which is a terminal node?
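
To spell out the intent in code (a simplified sketch, not the exact
helper or signature from the series):

static bool can_reclaim_anon_pages_sketch(struct pglist_data *pgdat)
{
        /* Anon is reclaimable if it can be swapped out... */
        if (get_nr_swap_pages() > 0)
                return true;

        /* ...or if there is somewhere to demote it to. */
        if (next_demotion_node(pgdat->node_id) != NUMA_NO_NODE)
                return true;

        return false;
}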

> I am still trying to digest the whole thing but at least jamming
> node_reclaim logic into kswapd seems strange to me. Need to think more
> about that though.

I'm entirely open to other ways to do the opt-in. It seemed sane at the
time, but I also understand the kswapd concern.

> Btw. do you have any numbers from running this with some real work
> workload?

Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
to come at this in two different ways:

Some want to know how much DRAM they can replace by buying some PMEM.
They tend to care about how much adding the (cheaper) PMEM slows them
down versus (expensive) DRAM. They're making a cost-benefit call.

Others want to repurpose some PMEM they already have. They want to know
how much using PMEM in this way will speed them up. They will basically
take any speedup they can get.

I ask because as a kernel developer with PMEM in my systems, I find the
"I'll take what I can get" case more personally appealing. But, the
business folks are much more keen on the "DRAM replacement" use. Do you
have any thoughts on what you would like to see?

2021-04-21 02:41:04

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

Michal Hocko <[email protected]> writes:

> On Fri 16-04-21 07:26:43, Dave Hansen wrote:
>> On 4/16/21 5:35 AM, Michal Hocko wrote:
>> > - Anonymous pages are bit tricky because they can be demoted even when
>> > they cannot be reclaimed due to no (or no available) swap storage.
>> > Unless I have missed something the second round will try to reclaim
>> > them even the later is true and I am not sure this is completely OK.
>>
>> What we want is something like this:
>>
>> Swap Space / Demotion OK -> Can Reclaim
>> Swap Space / Demotion Off -> Can Reclaim
>> Swap Full / Demotion OK -> Can Reclaim
>> Swap Full / Demotion Off -> No Reclaim
>>
>> I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm
>> misunderstanding what you are referring to, though. By "second round"
>> did you mean when we do reclaim on a node which is a terminal node?
>
> No, I mean the migration failure case where you splice back to the page
> list to reclaim. In that round you do not demote and want to reclaim.
> But a lack of swap space will make that page unreclaimable. I suspect
> this would just work out fine but I am not sure from the top of my head.

I have tested this via injecting some migration errors (returning 0 from
demote_page_list() before migration) on a system without swap. The
system can still work properly. In ftrace, I can see that add_to_swap() is
called many more times, and it can deal with the situation where
swap space isn't available.
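
The injection was essentially this (a sketch of the test hack, not
code from the series):

static unsigned int demote_page_list(struct list_head *demote_pages,
                                     struct pglist_data *pgdat)
{
        /*
         * Injected failure: report that nothing was demoted so that
         * shrink_page_list() splices the pages back and falls through
         * to normal reclaim.
         */
        return 0;
}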

Best Regards,
Huang, Ying

2021-05-07 06:47:30

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

Hi, Michal,

Michal Hocko <[email protected]> writes:

[...]

>>
>> > Btw. do you have any numbers from running this with some real work
>> > workload?
>>
>> Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
>> to come at this in two different ways:
>>
>> Some want to know how much DRAM they can replace by buying some PMEM.
>> They tend to care about how much adding the (cheaper) PMEM slows them
>> down versus (expensive) DRAM. They're making a cost-benefit call
>>
>> Others want to repurpose some PMEM they already have. They want to know
>> how much using PMEM in this way will speed them up. They will basically
>> take any speedup they can get.
>>
>> I ask because as a kernel developer with PMEM in my systems, I find the
>> "I'll take what I can get" case more personally appealing. But, the
>> business folks are much more keen on the "DRAM replacement" use. Do you
>> have any thoughts on what you would like to see?
>
> I was thinking about typical large in memory processing (e.g. in memory
> databases) where the hot part of the working set is only a portion and
> spilling over to a slower memory can still be beneficial because IO +
> data preprocessing on cold data is much slower.

We have tested the patchset with postgresql and pgbench. On a
machine with DRAM and PMEM, the kernel with the patchset improves the
pgbench score by up to 22.1% compared with the DRAM-only + disk
case. This comes from the reduced disk read throughput (which drops
by up to 70.8%).

Best Regards,
Huang, Ying

2021-06-11 05:54:30

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard

Hi, Michal,

Sorry for the late reply.

Michal Hocko <[email protected]> writes:

> On Fri 16-04-21 07:26:43, Dave Hansen wrote:
>> On 4/16/21 5:35 AM, Michal Hocko wrote:
>> > I have to confess that I haven't grasped the initialization
>> > completely. There is a nice comment explaining a 2 socket system with
>> > 3 different NUMA nodes attached to it with one node being terminal.
>> > This is OK if the terminal node is PMEM but how that fits into usual
>> > NUMA setups. E.g.
>> > 4 nodes each with its set of CPUs
>> > node distances:
>> > node 0 1 2 3
>> > 0: 10 20 20 20
>> > 1: 20 10 20 20
>> > 2: 20 20 10 20
>> > 3: 20 20 20 10
>> > Do I get it right that Node 3 would be terminal?
>>
>> Yes, I think Node 3 would end up being the terminal node in that setup.
>>
>> That said, I'm not sure how much I expect folks to use this on
>> traditional, non-tiered setups. It's also hard to argue what the
>> migration order *should* be when all the nodes are uniform.
>
> Well, they are not really uniform. The access latencies differ and I can
> imagine that spreading page cache to a distant node might just be much
> better than the IO involved in the refault.

Sorry, I am confused. In the above system,

"4 nodes each with its set of CPUs"

so I think the demotion targets (next_demotion_node(nid)) of nodes
0-3 are all NUMA_NO_NODE, because there's no CPU-less NUMA node. That
is, the patchset will not change the page reclaiming behavior in the
above system. Did I misunderstand your words?

> On the other hand I do understand that restricting the feature to CPU
> less NUMA setups is quite sane to give us a better understanding of how
> this can be used and improve on top. Maybe we will learn that we want to
> have the demotion path admin controlable (on the system level or maybe
> even more fine grained on the memcg/cpuset).

Yes. More features and interfaces can be built on top of the current
patchset.

> [...]
>> > - The demotion is implemented at shrink_page_list level which migrates
>> > pages in the first round and then falls back to the regular reclaim
>> > when migration fails. This means that the reclaim context
>> > (PF_MEMALLOC) will allocate memory so it has access to full memory
>> > reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
>> > mask, which looks like a bug rather than an intention. Btw. using
>> > GFP_NOWAIT in the allocation callback would make more things clear
>> > IMO.
>>
>> Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.
>>
>> GFP_NOWAIT _seems_ like it will work. I'll give it a shot.
>
> Let me clarify a bit. The slow path does involve __gfp_pfmemalloc_flags
> before bailing out for non sleeping allocation. So you would need both.
> Unless you want to involve reclaim on the target node while you are
> reclaiming the origin node.

Yes. __GFP_NOMEMALLOC should be added to the allocation flags. And we
do want some kind of page reclaim on the target node, so the cold DRAM
pages will be demoted to PMEM, and the coldest PMEM pages will be
reclaimed. But direct reclaim on the target node may not be
appropriate. So how about changing the page allocation flags in
alloc_demote_page() as follows?

(GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_THISNODE |
__GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT
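
That is, roughly (a sketch of how alloc_demote_page() might look with
that mask; illustrative only):

static struct page *alloc_demote_page(struct page *page, unsigned long node)
{
        /*
         * No direct reclaim and no dipping into memory reserves while
         * demoting, but GFP_NOWAIT re-adds __GFP_KSWAPD_RECLAIM so
         * kswapd can still be woken on the target node.
         */
        gfp_t gfp = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
                    __GFP_THISNODE | __GFP_NOWARN |
                    __GFP_NOMEMALLOC | GFP_NOWAIT;

        return alloc_pages_node(node, gfp, 0);
}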

>> > - Memcg reclaim is excluded from all this because it is not NUMA aware
>> > which makes sense to me.
>> > - Anonymous pages are bit tricky because they can be demoted even when
>> > they cannot be reclaimed due to no (or no available) swap storage.
>> > Unless I have missed something the second round will try to reclaim
>> > them even the later is true and I am not sure this is completely OK.
>>
>> What we want is something like this:
>>
>> Swap Space / Demotion OK -> Can Reclaim
>> Swap Space / Demotion Off -> Can Reclaim
>> Swap Full / Demotion OK -> Can Reclaim
>> Swap Full / Demotion Off -> No Reclaim
>>
>> I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm
>> misunderstanding what you are referring to, though. By "second round"
>> did you mean when we do reclaim on a node which is a terminal node?
>
> No, I mean the migration failure case where you splice back to the page
> list to reclaim. In that round you do not demote and want to reclaim.
> But a lack of swap space will make that page unreclaimable. I suspect
> this would just work out fine but I am not sure from the top of my head.
>
>> > I am still trying to digest the whole thing but at least jamming
>> > node_reclaim logic into kswapd seems strange to me. Need to think more
>> > about that though.
>>
>> I'm entirely open to other ways to do the opt-in. It seemed sane at the
>> time, but I also understand the kswapd concern.

Yes. It looks strange to make node_reclaim logic impact kswapd and
direct reclaim behavior. So, how about introducing another sysctl or
sysfs interface to control demotion behavior? For example, add

/sys/kernel/mm/numa/demotion

to enable/disable demotion between NUMA nodes for node reclaim, kswapd
reclaim and direct reclaim?
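
Something like the following sketch (the interface name and exact
semantics are to be discussed; registration of the attribute under a
/sys/kernel/mm/numa/ kobject is omitted here):

static bool numa_demotion_enabled;

static ssize_t demotion_enabled_show(struct kobject *kobj,
                                     struct kobj_attribute *attr, char *buf)
{
        return sysfs_emit(buf, "%d\n", numa_demotion_enabled);
}

static ssize_t demotion_enabled_store(struct kobject *kobj,
                                      struct kobj_attribute *attr,
                                      const char *buf, size_t count)
{
        int err = kstrtobool(buf, &numa_demotion_enabled);

        return err ? err : count;
}

static struct kobj_attribute numa_demotion_enabled_attr =
        __ATTR(demotion_enabled, 0644, demotion_enabled_show,
               demotion_enabled_store);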

Best Regards,
Huang, Ying

[snip]