2022-04-22 23:06:49

by Jagdish Gediya

Subject: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
NUMA nodes which are N_MEMORY and slow memory (persistent memory)
only NUMA nodes which are also N_MEMORY. As the current demotion
target finding algorithm works based on N_MEMORY and best distance,
it will choose a DRAM-only NUMA node as the demotion target instead
of a persistent memory node on such systems. If the DRAM-only NUMA
node is filled with demoted pages, then at some point new allocations
can start falling to persistent memory, so cold pages end up in
fast memory (due to demotion) and new pages in slow memory. This
is why persistent memory nodes should be utilized for demotion and
DRAM nodes should be avoided for demotion, so that they remain
available for new allocations.

The current implementation works fine on systems where memory-only
NUMA nodes can only be persistent/slow memory, but it is not
suitable for systems like the ones mentioned above.

This patch series introduces a new node state, N_DEMOTION_TARGETS,
which is used to distinguish the nodes that can be used as demotion
targets; node_states[N_DEMOTION_TARGETS] holds the list of such
nodes.

The node state N_DEMOTION_TARGETS is also set from the dax kmem
driver. Certain types of memory which register through dax kmem
(e.g. HBM) may not be the right choice for demotion, so in the
future they should be distinguished based on certain attributes and
the dax kmem driver should avoid setting them as N_DEMOTION_TARGETS.
However, the current implementation doesn't distinguish any such
memory either and considers all N_MEMORY as demotion targets, so
this patch series doesn't modify the current behavior.

The below command can be used to view the available demotion targets
in the system:

$ cat /sys/devices/system/node/demotion_targets

This patch series sets N_DEMOTION_TARGETS from the dax device kmem
driver. It may be possible that some memory node desired as a
demotion target is not detected in the system from the dax-device
kmem probe path. It is also possible that some dax-devices are not
preferred as demotion targets, e.g. HBM; for such devices, the node
shouldn't be set to N_DEMOTION_TARGETS. Support is therefore also
added to set the demotion target list from user space, so that the
default behavior can be overridden to manually add specific nodes
to, or remove them from, the demotion targets.

Override the demotion targets in the system (which sets
node_states[N_DEMOTION_TARGETS] in the kernel):
$ echo <node list> > /sys/devices/system/node/demotion_targets

As node attributes under /sys/devices/system/node/ are read-only by
default, support is added to write node_states[] via sysfs so that
node_states[N_DEMOTION_TARGETS] can be modified from user space.

It is also helpful to know the per-node demotion target path prepared
by the kernel in order to understand the demotion behaviour during
reclaim, so this patch series also adds a
/sys/devices/system/node/nodeX/demotion_targets interface to view
per-node demotion targets via sysfs.

The current code which sets migration targets is modified in this
patch series to avoid some of the limitations on demotion target
sharing and to use only N_DEMOTION_TARGETS nodes while finding
demotion targets.

Changelog
----------

v2:
In v1, only the 1st patch of this series was sent, which was
implemented to avoid some of the limitations on demotion target
sharing. However, for certain NUMA topologies, the demotion targets
found by that patch were not optimal, so the 1st patch in this
series has been modified according to suggestions from Huang and
Baolin. Examples comparing the demotion lists of the existing and
changed implementations can be found in the commit message of the
1st patch.

v3:
- Modify patch 1 subject to make it more specific
- Remove /sys/kernel/mm/numa/demotion_targets interface, use
/sys/devices/system/node/demotion_targets instead and make
it writable to override node_states[N_DEMOTION_TARGETS].
- Add support to view per node demotion targets via sysfs

Jagdish Gediya (7):
mm: demotion: Fix demotion targets sharing among sources
mm: demotion: Add new node state N_DEMOTION_TARGETS
drivers/base/node: Add support to write node_states[] via sysfs
device-dax/kmem: Set node state as N_DEMOTION_TARGETS
mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
mm: demotion: expose per-node demotion targets via sysfs
docs: numa: Add documentation for demotion

Documentation/admin-guide/mm/index.rst | 1 +
.../admin-guide/mm/numa_demotion.rst | 57 +++++++++++++++
drivers/base/node.c | 70 ++++++++++++++++---
drivers/dax/kmem.c | 2 +
include/linux/migrate.h | 1 +
include/linux/nodemask.h | 1 +
mm/migrate.c | 54 ++++++++++----
7 files changed, 162 insertions(+), 24 deletions(-)
create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst

--
2.35.1


2022-04-22 23:09:17

by Jagdish Gediya

Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

Sharing used_targets between multiple nodes in a single
pass limits some of the opportunities for demotion target
sharing.

Don't share the used targets between multiple nodes in a
single pass. Instead, accumulate all source nodes seen
across passes, and reset 'used_targets' to those source
nodes when finding demotion targets for each new node.

This results in more opportunities to share demotion
targets between multiple source nodes. For example, with the
below NUMA topology, where nodes 0 & 1 are cpu + dram nodes,
nodes 2 & 3 are equally slower memory-only nodes, and node 4
is the slowest memory-only node:

available: 5 nodes (0-4)
node 0 cpus: 0 1
node 0 size: n MB
node 0 free: n MB
node 1 cpus: 2 3
node 1 size: n MB
node 1 free: n MB
node 2 cpus:
node 2 size: n MB
node 2 free: n MB
node 3 cpus:
node 3 size: n MB
node 3 free: n MB
node 4 cpus:
node 4 size: n MB
node 4 free: n MB
node distances:
node 0 1 2 3 4
0: 10 20 40 40 80
1: 20 10 40 40 80
2: 40 40 10 40 80
3: 40 40 40 10 80
4: 80 80 80 80 10

The existing implementation gives the below demotion targets:

node demotion_target
0 3, 2
1 4
2 X
3 X
4 X

With this patch applied, the demotion targets are:

node demotion_target
0 3, 2
1 3, 2
2 4
3 4
4 X

As another example, with the below NUMA topology, where nodes
0, 1 & 2 are cpu + dram nodes and node 3 is a slow memory node:

available: 4 nodes (0-3)
node 0 cpus: 0 1
node 0 size: n MB
node 0 free: n MB
node 1 cpus: 2 3
node 1 size: n MB
node 1 free: n MB
node 2 cpus: 4 5
node 2 size: n MB
node 2 free: n MB
node 3 cpus:
node 3 size: n MB
node 3 free: n MB
node distances:
node 0 1 2 3
0: 10 20 20 40
1: 20 10 20 40
2: 20 20 10 40
3: 40 40 40 10

The existing implementation gives the below demotion targets:

node demotion_target
0 3
1 X
2 X
3 X

With this patch applied, the demotion targets are:

node demotion_target
0 3
1 3
2 3
3 X

Fixes: 79c28a416722 ("mm/numa: automatically generate node migration order")
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Jagdish Gediya <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
Tested-by: Baolin Wang <[email protected]>
Acked-by: "Huang, Ying" <[email protected]>
---
mm/migrate.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6c31ee1e1c9b..8bbe1e478122 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2355,7 +2355,7 @@ static void __set_migration_target_nodes(void)
{
nodemask_t next_pass = NODE_MASK_NONE;
nodemask_t this_pass = NODE_MASK_NONE;
- nodemask_t used_targets = NODE_MASK_NONE;
+ nodemask_t source_nodes = NODE_MASK_NONE;
int node, best_distance;

/*
@@ -2373,20 +2373,23 @@ static void __set_migration_target_nodes(void)
again:
this_pass = next_pass;
next_pass = NODE_MASK_NONE;
+
/*
- * To avoid cycles in the migration "graph", ensure
- * that migration sources are not future targets by
- * setting them in 'used_targets'. Do this only
- * once per pass so that multiple source nodes can
- * share a target node.
- *
- * 'used_targets' will become unavailable in future
- * passes. This limits some opportunities for
- * multiple source nodes to share a destination.
+ * Accumulate source nodes to avoid the cycle in migration
+ * list.
*/
- nodes_or(used_targets, used_targets, this_pass);
+ nodes_or(source_nodes, source_nodes, this_pass);

for_each_node_mask(node, this_pass) {
+ /*
+ * To avoid cycles in the migration "graph", ensure
+ * that migration sources are not future targets by
+ * setting them in 'used_targets'. Reset used_targets
+ * to source nodes for each node in this pass so that
+ * multiple source nodes can share a target node.
+ */
+ nodemask_t used_targets = source_nodes;
+
best_distance = -1;

/*
--
2.35.1

2022-04-22 23:09:59

by Jagdish Gediya

Subject: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs

Current /sys/devices/system/node/* interface doesn't support
to write node_states[], however write support is needed in case
users want to set them manually e.g. when user want to override
default N_DEMOTION_TARGETS found by the kernel.

Rename existing _NODE_ATTR to _NODE_ATTR_RO and introduce new
_NODE_ATTR_RW which can be used for node_states[] which can
be written from sysfs.

It may be necessary to validate written values and take action
based on them in a state specific way so a callback 'write' is
introduced in 'struct node_attr'.

A new function demotion_targets_write() is added to validate
the input nodes for N_DEMOTION_TARGETS which should be subset
of N_MEMORY and to build new demotion list based on new nodes.

Signed-off-by: Jagdish Gediya <[email protected]>
---
drivers/base/node.c | 62 +++++++++++++++++++++++++++++++++++----------
1 file changed, 49 insertions(+), 13 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 6eef22e6413e..e03eedbc421b 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -20,6 +20,7 @@
#include <linux/pm_runtime.h>
#include <linux/swap.h>
#include <linux/slab.h>
+#include <linux/migrate.h>

static struct bus_type node_subsys = {
.name = "node",
@@ -1013,6 +1014,7 @@ void unregister_one_node(int nid)
struct node_attr {
struct device_attribute attr;
enum node_states state;
+ int (*write)(nodemask_t nodes);
};

static ssize_t show_node_state(struct device *dev,
@@ -1024,23 +1026,57 @@ static ssize_t show_node_state(struct device *dev,
nodemask_pr_args(&node_states[na->state]));
}

-#define _NODE_ATTR(name, state) \
- { __ATTR(name, 0444, show_node_state, NULL), state }
+static ssize_t store_node_state(struct device *s,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ nodemask_t nodes;
+ struct node_attr *na = container_of(attr, struct node_attr, attr);
+
+ if (nodelist_parse(buf, nodes))
+ return -EINVAL;
+
+ if (na->write) {
+ if (na->write(nodes))
+ return -EINVAL;
+ } else {
+ node_states[na->state] = nodes;
+ }
+
+ return count;
+}
+
+static int demotion_targets_write(nodemask_t nodes)
+{
+ if (nodes_subset(nodes, node_states[N_MEMORY])) {
+ node_states[N_DEMOTION_TARGETS] = nodes;
+ set_migration_target_nodes();
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+#define _NODE_ATTR_RO(name, state) \
+ { __ATTR(name, 0444, show_node_state, NULL), state, NULL }
+
+#define _NODE_ATTR_RW(name, state, write_fn) \
+ { __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }

static struct node_attr node_state_attr[] = {
- [N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
- [N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
- [N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
+ [N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
+ [N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
+ [N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
#ifdef CONFIG_HIGHMEM
- [N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
+ [N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
#endif
- [N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
- [N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
- [N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
- N_GENERIC_INITIATOR),
- [N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
- N_DEMOTION_TARGETS),
-
+ [N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
+ [N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
+ [N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
+ N_GENERIC_INITIATOR),
+ [N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
+ N_DEMOTION_TARGETS,
+ demotion_targets_write),
};

static struct attribute *node_state_attrs[] = {
--
2.35.1

2022-04-22 23:10:27

by Jagdish Gediya

Subject: [PATCH v3 5/7] mm: demotion: Build demotion list based on N_DEMOTION_TARGETS

Only nodes which have the state N_DEMOTION_TARGETS should be
used as demotion targets. Treat nodes which are not demotion
targets as source nodes while building the demotion target list,
so that demotion targets are only chosen from N_DEMOTION_TARGETS.

Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Jagdish Gediya <[email protected]>
---
mm/migrate.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8bbe1e478122..5b92a09fbe4a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2366,10 +2366,10 @@ static void __set_migration_target_nodes(void)
disable_all_migrate_targets();

/*
- * Allocations go close to CPUs, first. Assume that
- * the migration path starts at the nodes with CPUs.
+ * Some systems can have DRAM(fast memory) only NUMA nodes, demotion targets
+ * need to be found for them as well.
*/
- next_pass = node_states[N_CPU];
+ nodes_andnot(next_pass, node_states[N_ONLINE], node_states[N_DEMOTION_TARGETS]);
again:
this_pass = next_pass;
next_pass = NODE_MASK_NONE;
--
2.35.1

2022-04-22 23:16:28

by Jagdish Gediya

Subject: [PATCH v3 2/7] mm: demotion: Add new node state N_DEMOTION_TARGETS

Some systems (e.g. PowerVM) have DRAM (fast memory) only NUMA nodes
which are N_MEMORY as well as slow memory (persistent memory) only
NUMA nodes which are also N_MEMORY. As the current demotion target
finding algorithm works based on N_MEMORY and best distance, it can
choose a DRAM-only NUMA node as the demotion target instead of a
persistent memory node on such systems. If the DRAM-only NUMA node
is filled with demoted pages, then at some point new allocations can
start falling to persistent memory, so cold pages end up in fast
memory (due to demotion) and new pages in slow memory. This is why
persistent memory nodes should be utilized for demotion and DRAM
nodes should be avoided for demotion, so that they remain available
for new allocations.

Add the new state N_DEMOTION_TARGETS; node_states[N_DEMOTION_TARGETS]
can then be used to hold the list of nodes which can be used as
demotion targets. Later patches in the series build the demotion
targets based on the nodes available in
node_states[N_DEMOTION_TARGETS].

Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Jagdish Gediya <[email protected]>
---
drivers/base/node.c | 4 ++++
include/linux/nodemask.h | 1 +
2 files changed, 5 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index ec8bb24a5a22..6eef22e6413e 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -1038,6 +1038,9 @@ static struct node_attr node_state_attr[] = {
[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
[N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
N_GENERIC_INITIATOR),
+ [N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
+ N_DEMOTION_TARGETS),
+
};

static struct attribute *node_state_attrs[] = {
@@ -1050,6 +1053,7 @@ static struct attribute *node_state_attrs[] = {
&node_state_attr[N_MEMORY].attr.attr,
&node_state_attr[N_CPU].attr.attr,
&node_state_attr[N_GENERIC_INITIATOR].attr.attr,
+ &node_state_attr[N_DEMOTION_TARGETS].attr.attr,
NULL
};

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 567c3ddba2c4..17844300fd57 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -400,6 +400,7 @@ enum node_states {
N_MEMORY, /* The node has memory(regular, high, movable) */
N_CPU, /* The node has one or more cpus */
N_GENERIC_INITIATOR, /* The node has one or more Generic Initiators */
+ N_DEMOTION_TARGETS, /* Nodes that should be considered as demotion targets */
NR_NODE_STATES
};

--
2.35.1

2022-04-24 08:51:12

by Huang, Ying

Subject: Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

> Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

IMHO, this patch doesn't fix a bug in the original code. Instead,
it just enhances the original code. For example, the subject could be,

mm: demotion: support to share demotion targets among sources

Best Regards,
Huang, Ying

[snip]


2022-04-24 09:27:39

by Huang, Ying

Subject: Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs

On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> Current /sys/devices/system/node/* interface doesn't support
> to write node_states[], however write support is needed in case
> users want to set them manually e.g. when user want to override
> default N_DEMOTION_TARGETS found by the kernel.
>
> Rename existing _NODE_ATTR to _NODE_ATTR_RO and introduce new
> _NODE_ATTR_RW which can be used for node_states[] which can
> be written from sysfs.
>
> It may be necessary to validate written values and take action
> based on them in a state specific way so a callback 'write' is
> introduced in 'struct node_attr'.
>
> A new function demotion_targets_write() is added to validate
> the input nodes for N_DEMOTION_TARGETS which should be subset
> of N_MEMORY and to build new demotion list based on new nodes.
>
> Signed-off-by: Jagdish Gediya <[email protected]>

How about discussing the ABI design first? I started a discussion in
the following thread. I think we can start with the requirements.

https://lore.kernel.org/lkml/[email protected]/

Best Regards,
Huang, Ying

[snip]

2022-04-25 06:25:44

by Huang, Ying

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> NUMA node which are N_MEMORY and slow memory(persistent memory)
> only NUMA node which are also N_MEMORY. As the current demotion
> target finding algorithm works based on N_MEMORY and best distance,
> it will choose DRAM only NUMA node as demotion target instead of
> persistent memory node on such systems. If DRAM only NUMA node is
> filled with demoted pages then at some point new allocations can
> start falling to persistent memory, so basically cold pages are in
> fast memor (due to demotion) and new pages are in slow memory, this
> is why persistent memory nodes should be utilized for demotion and
> dram node should be avoided for demotion so that they can be used
> for new allocations.
>
> Current implementation can work fine on the system where the memory
> only numa nodes are possible only for persistent/slow memory but it
> is not suitable for the like of systems mentioned above.

Can you share the NUMA topology information of your machine? And the
demotion order before and after your change?

Is it good to use the PMEM nodes as the demotion targets of the
DRAM-only node too?

Best Regards,
Huang, Ying

[snip]


2022-04-25 06:45:52

by Aneesh Kumar K.V

Subject: Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs

Jagdish Gediya <[email protected]> writes:

[snip]
> +static int demotion_targets_write(nodemask_t nodes)
> +{
> + if (nodes_subset(nodes, node_states[N_MEMORY])) {
> + node_states[N_DEMOTION_TARGETS] = nodes;
> + set_migration_target_nodes();
> + return 0;
> + }

Does this require locking to avoid updating
node_states[N_DEMOTION_TARGETS] while a parallel
set_migration_target_nodes() is running?

> +
> + return -EINVAL;
> +}
> +
[snip]

2022-04-25 10:44:25

by Jagdish Gediya

Subject: Re: [PATCH v3 3/7] drivers/base/node: Add support to write node_states[] via sysfs

On Sun, Apr 24, 2022 at 11:55:45AM +0530, Aneesh Kumar K.V wrote:
> Jagdish Gediya <[email protected]> writes:
>
[snip]
> > +static int demotion_targets_write(nodemask_t nodes)
> > +{
> > + if (nodes_subset(nodes, node_states[N_MEMORY])) {
> > + node_states[N_DEMOTION_TARGETS] = nodes;
> > + set_migration_target_nodes();
> > + return 0;
> > + }
>
> Does this require locking to avoid upating
> node_states[N_DEMOTION_TARGETS] while a parallel
> set_migratiotn_target_nodes() is running.

I think locking is needed if set_migration_target_nodes() is called
from here. Currently, exclusion is provided by memory hotplug events
being single-threaded, but since this path is not a hotplug event, a
separate lock is needed.

> > +
> > + return -EINVAL;
> > +}
> > +
> > +#define _NODE_ATTR_RO(name, state) \
> > + { __ATTR(name, 0444, show_node_state, NULL), state, NULL }
> > +
> > +#define _NODE_ATTR_RW(name, state, write_fn) \
> > + { __ATTR(name, 0644, show_node_state, store_node_state), state, write_fn }
> >
> > static struct node_attr node_state_attr[] = {
> > - [N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
> > - [N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
> > - [N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
> > + [N_POSSIBLE] = _NODE_ATTR_RO(possible, N_POSSIBLE),
> > + [N_ONLINE] = _NODE_ATTR_RO(online, N_ONLINE),
> > + [N_NORMAL_MEMORY] = _NODE_ATTR_RO(has_normal_memory, N_NORMAL_MEMORY),
> > #ifdef CONFIG_HIGHMEM
> > - [N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
> > + [N_HIGH_MEMORY] = _NODE_ATTR_RO(has_high_memory, N_HIGH_MEMORY),
> > #endif
> > - [N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
> > - [N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
> > - [N_GENERIC_INITIATOR] = _NODE_ATTR(has_generic_initiator,
> > - N_GENERIC_INITIATOR),
> > - [N_DEMOTION_TARGETS] = _NODE_ATTR(demotion_targets,
> > - N_DEMOTION_TARGETS),
> > -
> > + [N_MEMORY] = _NODE_ATTR_RO(has_memory, N_MEMORY),
> > + [N_CPU] = _NODE_ATTR_RO(has_cpu, N_CPU),
> > + [N_GENERIC_INITIATOR] = _NODE_ATTR_RO(has_generic_initiator,
> > + N_GENERIC_INITIATOR),
> > + [N_DEMOTION_TARGETS] = _NODE_ATTR_RW(demotion_targets,
> > + N_DEMOTION_TARGETS,
> > + demotion_targets_write),
> > };
> >
> > static struct attribute *node_state_attrs[] = {
> > --
> > 2.35.1

2022-04-25 16:10:52

by Jagdish Gediya

Subject: Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

On Sun, Apr 24, 2022 at 11:25:50AM +0800, [email protected] wrote:
> > Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
>
> IMHO, this patch doesn't fix some bugs in the original code. Instead,
> it just enhances the original code. For example, the subject could be,

I think it is fixing a bug. There is a comment in the code which
mentions that 'used_targets will become unavailable in future passes.
This limits some opportunities for multiple source nodes to share a
destination'. As per this comment, the intent was to let source nodes
share a demotion target with some limits, but the code eliminates not
some but all such opportunities: with the current code, no common node
can be a demotion target for multiple source nodes.

> mm: demotion: support to share demotion targets among sources
>
> Best Regards,
> Huang, Ying
>
> [snip]
>
>
>

2022-04-25 16:52:09

by Jagdish Gediya

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > only NUMA node which are also N_MEMORY. As the current demotion
> > target finding algorithm works based on N_MEMORY and best distance,
> > it will choose DRAM only NUMA node as demotion target instead of
> > persistent memory node on such systems. If DRAM only NUMA node is
> > filled with demoted pages then at some point new allocations can
> > start falling to persistent memory, so basically cold pages are in
> > fast memor (due to demotion) and new pages are in slow memory, this
> > is why persistent memory nodes should be utilized for demotion and
> > dram node should be avoided for demotion so that they can be used
> > for new allocations.
> >
> > Current implementation can work fine on the system where the memory
> > only numa nodes are possible only for persistent/slow memory but it
> > is not suitable for the like of systems mentioned above.
>
> Can you share the NUMA topology information of your machine? And the
> demotion order before and after your change?
>
> Whether it's good to use the PMEM nodes as the demotion targets of the
> DRAM-only node too?

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node 0 1
0: 10 40
1: 40 10

1) Without the N_DEMOTION_TARGETS patch series, 1 is the demotion
target for 0 even though 1 is a DRAM node, and there is no demotion
target for 1.

$ cat /sys/bus/nd/devices/dax0.0/target_node
2
$
# cd /sys/bus/dax/drivers/
:/sys/bus/dax/drivers# ls
device_dax kmem
:/sys/bus/dax/drivers# cd device_dax/
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
:/sys/bus/dax/drivers/device_dax# numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13380 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1961 MB
node 2 cpus:
node 2 size: 0 MB
node 2 free: 0 MB
node distances:
node 0 1 2
0: 10 40 80
1: 40 10 80
2: 80 80 10

2) Once this new node is brought online, without the N_DEMOTION_TARGETS
patch series, 1 is the demotion target for 0 and 2 is the demotion
target for 1.

With this patch series applied:
1) There is no demotion target for either 0 or 1 before the dax device
is online.
2) 2 is the demotion target for both 0 and 1 after the dax device is
online.

> Best Regards,
> Huang, Ying
>
> > This patch series introduces the new node state N_DEMOTION_TARGETS,
> > which is used to distinguish the nodes which can be used as demotion
> > targets, node_states[N_DEMOTION_TARGETS] is used to hold the list of
> > nodes which can be used as demotion targets.
> >
> > node state N_DEMOTION_TARGETS is also set from the dax kmem driver,
> > certain type of memory which registers through dax kmem (e.g. HBM)
> > may not be the right choices for demotion so in future they should
> > be distinguished based on certain attributes and dax kmem driver
> > should avoid setting them as N_DEMOTION_TARGETS, however current
> > implementation also doesn't distinguish any such memory and it
> > considers all N_MEMORY as demotion targets so this patch series
> > doesn't modify the current behavior.
> >
> > below command can be used to view the available demotion targets in
> > the system,
> >
> > $ cat /sys/devices/system/node/demotion_targets
> >
> > This patch series sets N_DEMOTION_TARGET from dax device kmem driver,
> > It may be possible that some memory node desired as demotion target
> > is not detected in the system from dax-device kmem probe path. It is
> > also possible that some of the dax-devices are not preferred as
> > demotion target e.g. HBM, for such devices, node shouldn't be set to
> > N_DEMOTION_TARGETS, so support is also added to set the demotion
> > target list from user space so that default behavior can be overridden
> > to avoid or add specific node to demotion targets manually.
> >
> > Override the demotion targets in the system (which sets the
> > node_states[N_DEMOTION_TARGETS] in kernel),
> > $ echo <node list> > /sys/devices/system/node/demotion_targets
> >
> > As by default node attributes under /sys/devices/system/node/ are read-
> > only, support is added to write node_states[] via sysfs so that
> > node_states[N_DEMOTION_TARGETS] can be modified from user space via
> > sysfs.
> >
> > It is also helpful to know per node demotion target path prepared by
> > kernel to understand the demotion behaviour during reclaim, so this
> > patch series also adds a /sys/devices/system/node/nodeX/demotion_targets
> > interface to view per-node demotion targets via sysfs.
> >
> > Current code which sets migration targets is modified in
> > this patch series to avoid some of the limitations on the demotion
> > target sharing and to use N_DEMOTION_TARGETS only nodes while
> > finding demotion targets.
> >
> > Changelog
> > ----------
> >
> > v2:
> > In v1, only 1st patch of this patch series was sent, which was
> > implemented to avoid some of the limitations on the demotion
> > target sharing, however for certain numa topology, the demotion
> > targets found by that patch was not most optimal, so 1st patch
> > in this series is modified according to suggestions from Huang
> > and Baolin. Different examples of demotion list comparasion
> > between existing implementation and changed implementation can
> > be found in the commit message of 1st patch.
> >
> > v3:
> > - Modify patch 1 subject to make it more specific
> > - Remove /sys/kernel/mm/numa/demotion_targets interface, use
> >   /sys/devices/system/node/demotion_targets instead and make
> >   it writable to override node_states[N_DEMOTION_TARGETS].
> > - Add support to view per node demotion targets via sysfs
> >
> >   Jagdish Gediya (7):
> >   mm: demotion: Fix demotion targets sharing among sources
> >   mm: demotion: Add new node state N_DEMOTION_TARGETS
> >   drivers/base/node: Add support to write node_states[] via sysfs
> >   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
> >   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
> >   mm: demotion: expose per-node demotion targets via sysfs
> >   docs: numa: Add documentation for demotion
> >
> >  Documentation/admin-guide/mm/index.rst | 1 +
> >  .../admin-guide/mm/numa_demotion.rst | 57 +++++++++++++++
> >  drivers/base/node.c | 70 ++++++++++++++++---
> >  drivers/dax/kmem.c | 2 +
> >  include/linux/migrate.h | 1 +
> >  include/linux/nodemask.h | 1 +
> >  mm/migrate.c | 54 ++++++++++----
> >  7 files changed, 162 insertions(+), 24 deletions(-)
> >  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> >
>
>
>

2022-04-25 18:43:25

by Aneesh Kumar K.V

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <[email protected]> wrote:
>
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>> filled with demoted pages then at some point new allocations can
>>>> start falling to persistent memory, so basically cold pages are in
>>>> fast memor (due to demotion) and new pages are in slow memory, this
>>>> is why persistent memory nodes should be utilized for demotion and
>>>> dram node should be avoided for demotion so that they can be used
>>>> for new allocations.
>>>>
>>>> Current implementation can work fine on the system where the memory
>>>> only numa nodes are possible only for persistent/slow memory but it
>>>> is not suitable for the like of systems mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine? And the
>>> demotion order before and after your change?
>>>
>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node 0 1
>> 0: 10 40
>> 1: 40 10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>> for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
>
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
>

Hence the suggestion for the ability to override this from userspace.
For example, we could build a system with memory from a remote machine
(memory inception in the case of Power, which will mostly be plugged
in as regular hotpluggable memory) alongside slow CXL or OpenCAPI
memory.

In the former case, this series won't consider that memory for demotion
because it is not instantiated via dax kmem. So yes, we would
definitely need the ability to override this from userspace so that we
can add those remote-memory NUMA nodes as demotion targets if we want.

>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node 0 1 2
>> 0: 10 40 80
>> 1: 40 10 80
>> 2: 80 80 10
>>
>> 2) Once this new node brought online, without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
>
> I'd argue that is wrong. At this state you have a tiered memory system
> be it one with just DRAM. Using it as such is correct behavior that
> we should not be preventing. Sure some usecases wouldn't want that
> arrangement but some do want it.
>
> For your case we could add a heuristic along the lines of the demotion
> target should be at least as big as the starting point but that would
> be a bit hacky.
>

Hence the proposal to do a per-node demotion target override with the
semantics that I explained here:

https://lore.kernel.org/linux-mm/[email protected]/

Let me know if that interface would be good enough to handle all the
possible demotion target configurations we would want to have.

-aneesh

2022-04-25 22:22:40

by Aneesh Kumar K.V

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <[email protected]> wrote:
>

....

>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node 0 1
>> 0: 10 40
>> 1: 40 10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>> for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
>
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
>
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node 0 1 2
>> 0: 10 40 80
>> 1: 40 10 80
>> 2: 80 80 10
>>
>> 2) Once this new node brought online, without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
>
> I'd argue that is wrong. At this state you have a tiered memory system
> be it one with just DRAM. Using it as such is correct behavior that
> we should not be preventing. Sure some usecases wouldn't want that
> arrangement but some do want it.
>

I missed this in my earlier reply. Are you suggesting that we would
want Node 1 (a DRAM-only memory NUMA node) to act as a demotion target
for Node 0? Any reason why we would want to do that? It is clearly the
opposite of what we are trying to do here. IMHO, not using Node 1 as a
demotion target for Node 0 is the better default.



-aneesh

2022-04-26 08:22:30

by Huang, Ying

Subject: Re: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources

On Mon, 2022-04-25 at 15:02 +0530, Jagdish Gediya wrote:
> On Sun, Apr 24, 2022 at 11:25:50AM +0800, [email protected] wrote:
> > > Subject: [PATCH v3 1/7] mm: demotion: Fix demotion targets sharing among sources
> >
> > IMHO, this patch doesn't fix some bugs in the original code. Instead,
> > it just enhances the original code. For example, the subject could be,
>
> I think it is fixing a bug, there is a comment in the code which
> mentions that 'used_targets will become unavailable in future passes.
> This limits some opportunities for multiple source nodes to share a
> destination'. As per this comment, it was intended to share a node as
> demotion targets with some limits but the code limits not some but all
> such opportunities as no common node can be demotion targets for
> multiple source node as per current code.

IMHO, the original code was just kept as simple as possible to address
the issue for the real machines available at that time. That provides a
baseline for future improvements like yours. If the original code
didn't work well for the machines it targeted, then we fixed a bug. But
if it doesn't work well for some new kind of machine, then we need to
improve the code and add features, not fix a bug.

Best Regards,
Huang, Ying

> >   mm: demotion: support to share demotion targets among sources
> >
> > Best Regards,
> > Huang, Ying
> >
> > [snip]
> >
> >
> >


2022-04-26 13:16:45

by Huang, Ying

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > only NUMA node which are also N_MEMORY. As the current demotion
> > > target finding algorithm works based on N_MEMORY and best distance,
> > > it will choose DRAM only NUMA node as demotion target instead of
> > > persistent memory node on such systems. If DRAM only NUMA node is
> > > filled with demoted pages then at some point new allocations can
> > > start falling to persistent memory, so basically cold pages are in
> > > fast memor (due to demotion) and new pages are in slow memory, this
> > > is why persistent memory nodes should be utilized for demotion and
> > > dram node should be avoided for demotion so that they can be used
> > > for new allocations.
> > >
> > > Current implementation can work fine on the system where the memory
> > > only numa nodes are possible only for persistent/slow memory but it
> > > is not suitable for the like of systems mentioned above.
> >
> > Can you share the NUMA topology information of your machine? And the
> > demotion order before and after your change?
> >
> > Whether it's good to use the PMEM nodes as the demotion targets of the
> > DRAM-only node too?
>
> $ numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13392 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1971 MB
> node distances:
> node 0 1
>   0: 10 40
>   1: 40 10
>
> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>
> $ cat /sys/bus/nd/devices/dax0.0/target_node
> 2
> $
> # cd /sys/bus/dax/drivers/
> :/sys/bus/dax/drivers# ls
> device_dax kmem
> :/sys/bus/dax/drivers# cd device_dax/
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
> :/sys/bus/dax/drivers/device_dax# numactl -H
> available: 3 nodes (0-2)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13380 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1961 MB
> node 2 cpus:
> node 2 size: 0 MB
> node 2 free: 0 MB
> node distances:
> node 0 1 2
>   0: 10 40 80
>   1: 40 10 80
>   2: 80 80 10
>

This looks like a virtual machine, not a real machine. That's
unfortunate. I am looking for a real issue, not a theoretically
possible one.

> 2) Once this new node brought online, without N_DEMOTION_TARGETS
> patch series, 1 is demotion target for 0 and 2 is demotion target
> for 1.
>
> With this patch series applied,
> 1) No demotion target for either 0 or 1 before dax device is online
> 2) 2 is demotion target for both 0 and 1 after dax device is online.
>

So with your change, if a node doesn't have N_DEMOTION_TARGETS, it will
become a top-level demotion source even if it doesn't have N_CPU? If
so, I cannot clear N_DEMOTION_TARGETS for a node in the middle or
bottom level?

Best Regards,
Huang, Ying

> >
[snip]


2022-04-26 16:24:25

by Huang, Ying

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Tue, 2022-04-26 at 14:37 +0530, Aneesh Kumar K V wrote:
> On 4/26/22 1:25 PM, [email protected] wrote:
> > On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > filled with demoted pages then at some point new allocations can
> > > > > start falling to persistent memory, so basically cold pages are in
> > > > > fast memor (due to demotion) and new pages are in slow memory, this
> > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > dram node should be avoided for demotion so that they can be used
> > > > > for new allocations.
> > > > >
> > > > > Current implementation can work fine on the system where the memory
> > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > is not suitable for the like of systems mentioned above.
> > > >
> > > > Can you share the NUMA topology information of your machine? And the
> > > > demotion order before and after your change?
> > > >
> > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > DRAM-only node too?
> > >
> > > $ numactl -H
> > > available: 2 nodes (0-1)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13392 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1971 MB
> > > node distances:
> > > node 0 1
> > >    0: 10 40
> > >    1: 40 10
> > >
> > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > >     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> > >
> > > $ cat /sys/bus/nd/devices/dax0.0/target_node
> > > 2
> > > $
> > > # cd /sys/bus/dax/drivers/
> > > :/sys/bus/dax/drivers# ls
> > > device_dax kmem
> > > :/sys/bus/dax/drivers# cd device_dax/
> > > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> > > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
> > > :/sys/bus/dax/drivers/device_dax# numactl -H
> > > available: 3 nodes (0-2)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13380 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1961 MB
> > > node 2 cpus:
> > > node 2 size: 0 MB
> > > node 2 free: 0 MB
> > > node distances:
> > > node 0 1 2
> > >    0: 10 40 80
> > >    1: 40 10 80
> > >    2: 80 80 10
> > >
> >
> > This looks like a virtual machine, not a real machine. That's
> > unfortunate. I am looking forward to a real issue, not a theoritical
> > possible issue.
> >
>
> This is the source of confusion i guess. A large class of ppc64 systems
> are virtualized. The firmware include a hypervisor (PowerVM) and end
> user creates guest (aka LPAR) on them. That is the way end user will use
> this system. There is no baremetal access on this (There is an openpower
> variant, but all new systems built by IBM these days do have PowerVM on
> them).
>
>
> So this is not a theoretical possibility.
>

Now I get it. Thanks for detailed explanation.

Best Regards,
Huang, Ying



2022-04-26 16:24:50

by Aneesh Kumar K.V

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On 4/26/22 1:25 PM, [email protected] wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>> filled with demoted pages then at some point new allocations can
>>>> start falling to persistent memory, so basically cold pages are in
>>>> fast memor (due to demotion) and new pages are in slow memory, this
>>>> is why persistent memory nodes should be utilized for demotion and
>>>> dram node should be avoided for demotion so that they can be used
>>>> for new allocations.
>>>>
>>>> Current implementation can work fine on the system where the memory
>>>> only numa nodes are possible only for persistent/slow memory but it
>>>> is not suitable for the like of systems mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine? And the
>>> demotion order before and after your change?
>>>
>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node 0 1
>>   0: 10 40
>>   1: 40 10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node 0 1 2
>>   0: 10 40 80
>>   1: 40 10 80
>>   2: 80 80 10
>>
>
> This looks like a virtual machine, not a real machine. That's
> unfortunate. I am looking forward to a real issue, not a theoritical
> possible issue.
>

This is the source of confusion, I guess. A large class of ppc64
systems are virtualized. The firmware includes a hypervisor (PowerVM)
and the end user creates guests (aka LPARs) on it. That is the way end
users will use these systems. There is no baremetal access on them
(there is an OpenPower variant, but all new systems built by IBM these
days do have PowerVM on them).

So this is not a theoretical possibility.

-aneesh

2022-04-26 20:53:44

by Jagdish Gediya

Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Tue, Apr 26, 2022 at 03:55:36PM +0800, [email protected] wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > filled with demoted pages then at some point new allocations can
> > > > start falling to persistent memory, so basically cold pages are in
> > > > fast memor (due to demotion) and new pages are in slow memory, this
> > > > is why persistent memory nodes should be utilized for demotion and
> > > > dram node should be avoided for demotion so that they can be used
> > > > for new allocations.
> > > >
> > > > Current implementation can work fine on the system where the memory
> > > > only numa nodes are possible only for persistent/slow memory but it
> > > > is not suitable for the likes of the systems mentioned above.
> > >
> > > Can you share the NUMA topology information of your machine? And the
> > > demotion order before and after your change?
> > >
> > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > DRAM-only node too?
> >
> > $ numactl -H
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3 4 5 6 7
> > node 0 size: 14272 MB
> > node 0 free: 13392 MB
> > node 1 cpus:
> > node 1 size: 2028 MB
> > node 1 free: 1971 MB
> > node distances:
> > node 0 1
> >   0: 10 40
> >   1: 40 10
> >
> > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> >    for 0 even when 1 is DRAM node and there are no demotion targets for 1.
> >
> > $ cat /sys/bus/nd/devices/dax0.0/target_node
> > 2
> > $
> > # cd /sys/bus/dax/drivers/
> > :/sys/bus/dax/drivers# ls
> > device_dax kmem
> > :/sys/bus/dax/drivers# cd device_dax/
> > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> > :/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
> > :/sys/bus/dax/drivers/device_dax# numactl -H
> > available: 3 nodes (0-2)
> > node 0 cpus: 0 1 2 3 4 5 6 7
> > node 0 size: 14272 MB
> > node 0 free: 13380 MB
> > node 1 cpus:
> > node 1 size: 2028 MB
> > node 1 free: 1961 MB
> > node 2 cpus:
> > node 2 size: 0 MB
> > node 2 free: 0 MB
> > node distances:
> > node 0 1 2
> >   0: 10 40 80
> >   1: 40 10 80
> >   2: 80 80 10
> >
>
> This looks like a virtual machine, not a real machine. That's
> unfortunate. I am looking forward to a real issue, not a theoretical
> possible issue.
>
> > 2) Once this new node is brought online, without the N_DEMOTION_TARGETS
> > patch series, 1 is demotion target for 0 and 2 is demotion target
> > for 1.
> >
> > With this patch series applied,
> > 1) No demotion target for either 0 or 1 before dax device is online
> > 2) 2 is demotion target for both 0 and 1 after dax device is online.
> >
>
> So with your change, if a node hasn't N_DEMOTION_TARGETS, it will become
> a top-level demotion source even if it hasn't N_CPU? If so, I cannot
> clear N_DEMOTION_TARGETS for a node in middle or bottom level?

Yes, a node that is only N_MEMORY also becomes a demotion source because
it is not N_DEMOTION_TARGETS. You can clear N_DEMOTION_TARGETS from the
middle or bottom level, but in that case, as the implementation works in
passes, the cleared node will not be found as a demotion target, and hence
no demotion target will be found for it either. But does it make sense to
use faster persistent memory as a demotion target while leaving slower
persistent memory out of the demotion list? If not, then this is not an
issue.

> Best Regards,
> Huang, Ying
>
> > >
> [snip]
>
>
>

2022-04-27 07:49:37

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Wed, 2022-04-27 at 08:27 +0530, Aneesh Kumar K V wrote:
> On 4/27/22 6:59 AM, [email protected] wrote:
> > On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> > > On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > > > On Mon, 25 Apr 2022 16:45:38 +0530
> > > > Jagdish Gediya <[email protected]> wrote:
> > > >
> > > > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> > > > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > > > filled with demoted pages then at some point new allocations can
> > > > > > > start falling to persistent memory, so basically cold pages are in
> > > > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > > > dram node should be avoided for demotion so that they can be used
> > > > > > > for new allocations.
> > > > > > >
> > > > > > > Current implementation can work fine on the system where the memory
> > > > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > > is not suitable for the likes of the systems mentioned above.
> > > > > >
> > > > > > Can you share the NUMA topology information of your machine? And the
> > > > > > demotion order before and after your change?
> > > > > >
> > > > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > > > DRAM-only node too?
> > > > >
> > > > > $ numactl -H
> > > > > available: 2 nodes (0-1)
> > > > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > > > node 0 size: 14272 MB
> > > > > node 0 free: 13392 MB
> > > > > node 1 cpus:
> > > > > node 1 size: 2028 MB
> > > > > node 1 free: 1971 MB
> > > > > node distances:
> > > > > node 0 1
> > > > >     0: 10 40
> > > > >     1: 40 10
> > > > >
> > > > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > > > >      for 0 even when 1 is DRAM node and there are no demotion targets for 1.
> > > >
> > > > I'm not convinced the distinction between DRAM and persistent memory is
> > > > valid. There will definitely be systems with a large pool
> > > > of remote DRAM (and potentially no NV memory) where the right choice
> > > > is to demote to that DRAM pool.
> > > >
> > > > Basing the decision on whether the memory is from kmem or
> > > > normal DRAM doesn't provide sufficient information to make the decision.
> > > >
> > >
> > > Hence the suggestion for the ability to override this from userspace.
> > > Now, for example, we could build a system with memory from the remote
> > > machine (memory inception in case of power which will mostly be plugged
> > > in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
> > > memory.
> > >
> > > In the former case, we won't consider that for demotion with this series
> > > because that is not instantiated via dax kmem. So yes definitely we
> > > would need the ability to override this from userspace so that we could
> > > put these remote memory NUMA nodes as demotion targets if we want.
> > > > >
> >
> > Is there a driver for the device (memory from the remote machine)? If
> > so, we can adjust demotion order for it in the driver.
> >
>
> At this point, it is managed by the hypervisor and is hotplugged into the
> LPAR with additional properties specified via the device tree. So there
> is no inception-specific device driver.

Because there's information in the device tree, I still think it's doable in
the kernel. But it's up to you to choose the appropriate way.

Best Regards,
Huang, Ying

> > In general, I think that we can adjust demotion order inside kernel from
> > various information sources. In addition to ACPI SLIT, we also have
> > HMAT, kmem driver, other drivers, etc.
> >
>
> Managing inception memory will in any case require a userspace component
> to track the owning machine for the remote memory. So we should be OK to
> have userspace manage the demotion order.
>
> -aneesh
>


2022-04-27 09:58:30

by Aneesh Kumar K.V

[permalink] [raw]
Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On 4/27/22 6:59 AM, [email protected] wrote:
> On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
>> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
>>> On Mon, 25 Apr 2022 16:45:38 +0530
>>> Jagdish Gediya <[email protected]> wrote:
>>>
>>>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
>>>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>>>> filled with demoted pages then at some point new allocations can
>>>>>> start falling to persistent memory, so basically cold pages are in
>>>>>> fast memory (due to demotion) and new pages are in slow memory, this
>>>>>> is why persistent memory nodes should be utilized for demotion and
>>>>>> dram node should be avoided for demotion so that they can be used
>>>>>> for new allocations.
>>>>>>
>>>>>> Current implementation can work fine on the system where the memory
>>>>>> only numa nodes are possible only for persistent/slow memory but it
>>>>>> is not suitable for the likes of the systems mentioned above.
>>>>>
>>>>> Can you share the NUMA topology information of your machine? And the
>>>>> demotion order before and after your change?
>>>>>
>>>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>>>> DRAM-only node too?
>>>>
>>>> $ numactl -H
>>>> available: 2 nodes (0-1)
>>>> node 0 cpus: 0 1 2 3 4 5 6 7
>>>> node 0 size: 14272 MB
>>>> node 0 free: 13392 MB
>>>> node 1 cpus:
>>>> node 1 size: 2028 MB
>>>> node 1 free: 1971 MB
>>>> node distances:
>>>> node 0 1
>>>>    0: 10 40
>>>>    1: 40 10
>>>>
>>>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>>>     for 0 even when 1 is DRAM node and there are no demotion targets for 1.
>>>
>>> I'm not convinced the distinction between DRAM and persistent memory is
>>> valid. There will definitely be systems with a large pool
>>> of remote DRAM (and potentially no NV memory) where the right choice
>>> is to demote to that DRAM pool.
>>>
>>> Basing the decision on whether the memory is from kmem or
>>> normal DRAM doesn't provide sufficient information to make the decision.
>>>
>>
>> Hence the suggestion for the ability to override this from userspace.
>> Now, for example, we could build a system with memory from the remote
>> machine (memory inception in case of power which will mostly be plugged
>> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
>> memory.
>>
>> In the former case, we won't consider that for demotion with this series
>> because that is not instantiated via dax kmem. So yes definitely we
>> would need the ability to override this from userspace so that we could
>> put these remote memory NUMA nodes as demotion targets if we want.
>>>>
>
> Is there a driver for the device (memory from the remote machine)? If
> so, we can adjust demotion order for it in the driver.
>

At this point, it is managed by the hypervisor and is hotplugged into the
LPAR with additional properties specified via the device tree. So there
is no inception-specific device driver.

> In general, I think that we can adjust demotion order inside kernel from
> various information sources. In addition to ACPI SLIT, we also have
> HMAT, kmem driver, other drivers, etc.
>

Managing inception memory will in any case require a userspace component
to track the owning machine for the remote memory. So we should be OK to
have userspace manage the demotion order.

-aneesh

2022-04-27 11:07:09

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <[email protected]> wrote:
> >
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, [email protected] wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > filled with demoted pages then at some point new allocations can
> > > > > start falling to persistent memory, so basically cold pages are in
> > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > dram node should be avoided for demotion so that they can be used
> > > > > for new allocations.
> > > > >
> > > > > Current implementation can work fine on the system where the memory
> > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > is not suitable for the likes of the systems mentioned above.
> > > >
> > > > Can you share the NUMA topology information of your machine? And the
> > > > demotion order before and after your change?
> > > >
> > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > DRAM-only node too?
> > >
> > > $ numactl -H
> > > available: 2 nodes (0-1)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13392 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1971 MB
> > > node distances:
> > > node 0 1
> > >    0: 10 40
> > >    1: 40 10
> > >
> > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > >     for 0 even when 1 is DRAM node and there are no demotion targets for 1.
> >
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> >
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> >
>
> Hence the suggestion for the ability to override this from userspace.
> Now, for example, we could build a system with memory from the remote
> machine (memory inception in case of power which will mostly be plugged
> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
> memory.
>
> In the former case, we won't consider that for demotion with this series
> because that is not instantiated via dax kmem. So yes definitely we
> would need the ability to override this from userspace so that we could
> put these remote memory NUMA nodes as demotion targets if we want.
> > >

Is there a driver for the device (memory from the remote machine)? If
so, we can adjust demotion order for it in the driver.

In general, I think that we can adjust demotion order inside kernel from
various information sources. In addition to ACPI SLIT, we also have
HMAT, kmem driver, other drivers, etc.

> >
Best Regards,
Huang, Ying