2021-10-08 08:41:40

by Huang, Ying

Subject: [PATCH -V9 0/6] NUMA balancing: optimize memory placement for memory tiering system

The changes since the last post are as follows,

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

--

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory
differs.

After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
for use like normal RAM"), PMEM can be used as cost-effective
volatile memory in separate NUMA nodes. In a typical memory tiering
system, there are CPUs, DRAM and PMEM in each physical NUMA node.
The CPUs and the DRAM will be put in one logical node, while the PMEM
will be put in another (faked) logical node.

To optimize the overall system performance, the hot pages should be
placed in the DRAM node. To do that, we need to identify the hot
pages in the PMEM node and migrate them to the DRAM node via NUMA
migration.

The original NUMA balancing already has a set of mechanisms to
identify the pages recently accessed by the CPUs in a node and to
migrate those pages to that node. So we can reuse these mechanisms to
optimize the page placement in a memory tiering system. This is
implemented in this patchset.

On the other hand, the cold pages should be placed in the PMEM node.
So, we also need to identify the cold pages in the DRAM node and
migrate them to the PMEM node.

In commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim"), a
mechanism to demote the cold DRAM pages to the PMEM node under memory
pressure was implemented. Based on that, the cold DRAM pages can be
demoted to the PMEM node proactively to free some memory space on the
DRAM node to accommodate the promoted hot PMEM pages. This is
implemented in this patchset too.

We have tested the solution with the pmbench memory accessing
benchmark with an 80:20 read/write ratio and a normal access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory Modules. The test results of the base kernel and the
step-by-step optimizations are as follows,

              Throughput   Promotion   DRAM bandwidth
               access/s      MB/s          MB/s
              -----------  ----------  --------------
Base           69263986.8                      1830.2
Patch 2       135691921.4      385.6          11315.9
Patch 3       133239016.8      384.7          11065.2
Patch 4       151310868.9      197.6          11397.0
Patch 5       142311252.8       99.3           9580.8
Patch 6       149044263.9       65.5           9922.8

The whole patchset improves the benchmark score by up to 115.2%. The
basic NUMA balancing based optimization solution (patch 2), the hot
page selection algorithm (patch 4), and the threshold automatic
adjustment algorithm (patch 6) improve the performance or reduce the
overhead (promotion MB/s) greatly.

Changelog:

v9:

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

v8:

- Rebased on v5.15-rc1

- Make user-specified threshold take effect sooner

v7:

- Rebased on the mmots tree of 2021-07-15.

- Some minor fixes.

v6:

- Rebased on the latest page demotion patchset. (which is based on v5.11)

v5:

- Rebased on the latest page demotion patchset. (which is based on v5.10)

v4:

- Rebased on the latest page demotion patchset. (which is based on v5.9-rc6)

- Add page promotion counter.

v3:

- Move the rate limit control as late as possible per Mel Gorman's
comments.

- Revise the hot page selection implementation to store page scan time
in struct page.

- Code cleanup.

- Rebased on the latest page demotion patchset.

v2:

- Addressed comments for V1.

- Rebased on v5.5.

Best Regards,
Huang, Ying


2021-10-08 08:41:52

by Huang, Ying

Subject: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter

In a system with multiple memory types, e.g. DRAM and PMEM, the CPUs
and DRAM in one socket will be put in one NUMA node as before, while
the PMEM will be put in another NUMA node, as described in commit
c221c0b0308f ("device-dax: "Hotplug" persistent memory for use like
normal RAM"). So, the NUMA balancing mechanism will identify all PMEM
accesses as remote accesses and try to promote the PMEM pages to
DRAM.

To distinguish the number of inter-type promoted pages from that of
inter-socket migrated pages, a new vmstat counter is added. The
counter is per-node (counted on the target node), so it can also be
used to identify promotion imbalance among the NUMA nodes.
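
With the patch applied, the new counter shows up as
"pgpromote_success"; like the other per-node stat items, it should be
visible in /proc/vmstat (as a system-wide sum) and in the per-node
files under /sys/devices/system/node/node*/vmstat, so promotion
imbalance can be checked for each DRAM node.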

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/mmzone.h | 3 +++
include/linux/node.h | 5 +++++
include/linux/vmstat.h | 2 ++
mm/migrate.c | 10 ++++++++--
mm/vmstat.c | 3 +++
5 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6a1d79d84675..37ccd6158765 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -209,6 +209,9 @@ enum node_stat_item {
NR_PAGETABLE, /* used for pagetables */
#ifdef CONFIG_SWAP
NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+ PGPROMOTE_SUCCESS, /* promote successfully */
#endif
NR_VM_NODE_STAT_ITEMS
};
diff --git a/include/linux/node.h b/include/linux/node.h
index 8e5a29897936..26e96fcc66af 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,

#define to_node(device) container_of(device, struct node, dev)

+static inline bool node_is_toptier(int node)
+{
+ return node_state(node, N_CPU);
+}
+
#endif /* _LINUX_NODE_H_ */
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index d6a6cf53b127..75c53b7d1539 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
#ifdef CONFIG_NUMA_BALANCING
#define count_vm_numa_event(x) count_vm_event(x)
#define count_vm_numa_events(x, y) count_vm_events(x, y)
+#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)
#else
#define count_vm_numa_event(x) do {} while (0)
#define count_vm_numa_events(x, y) do { (void)(y); } while (0)
+#define mod_node_balancing_page_state(n, i, v) do {} while (0)
#endif /* CONFIG_NUMA_BALANCING */

#ifdef CONFIG_DEBUG_TLBFLUSH
diff --git a/mm/migrate.c b/mm/migrate.c
index a6a7743ee98f..c3affc587902 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
pg_data_t *pgdat = NODE_DATA(node);
int isolated;
int nr_remaining;
+ int nr_succeeded;
LIST_HEAD(migratepages);
new_page_t *new;
bool compound;
@@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,

list_add(&page->lru, &migratepages);
nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
- MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+ MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+ &nr_succeeded);
if (nr_remaining) {
if (!list_empty(&migratepages)) {
list_del(&page->lru);
@@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
putback_lru_page(page);
}
isolated = 0;
- } else
+ } else {
count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+ if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+ mod_node_balancing_page_state(
+ NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
+ }
BUG_ON(!list_empty(&migratepages));
return isolated;

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ce2620344b2..fff0ec94d795 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
#ifdef CONFIG_SWAP
"nr_swapcached",
#endif
+#ifdef CONFIG_NUMA_BALANCING
+ "pgpromote_success",
+#endif

/* enum writeback_stat_item counters */
"nr_dirty_threshold",
--
2.30.2

2021-10-08 08:41:57

by Huang, Ying

Subject: [PATCH -V9 2/6] NUMA balancing: optimize page placement for memory tiering system

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory is
usually different.

In such a system, because the memory access pattern changes over
time, some pages in the slow memory may become hot globally. So in
this patch, the NUMA balancing mechanism is enhanced to dynamically
optimize the page placement among the different memory types
according to page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node. The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node). That is, the fast memory is regarded as local while the
slow memory is regarded as remote. So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism stops migrating pages if the
free memory of the target node would fall below the high watermark.
This is a reasonable policy if there's only one memory type. But it
makes the original NUMA balancing mechanism almost useless for
optimizing page placement among different memory types. Details are
as follows.

It's common that the working-set size of the workload is larger than
the size of the fast memory nodes; otherwise, it's unnecessary to use
the slow memory at all. So in the common case, there are almost never
enough free pages in the fast memory nodes, and the globally hot
pages in the slow memory node cannot be promoted to the fast memory
node. To solve this issue, we have 2 choices as follows,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node. This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim, so that the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little more (about 10MB) than the high watermark.
   Then, if the free pages of the fast memory node reach the high
   watermark and some hot pages need to be promoted, kswapd of the
   fast memory node will be woken up to demote some cold pages in the
   fast memory node to the slow memory node. This will free some
   extra space in the fast memory node, so the hot pages in the slow
   memory node can be promoted to the fast memory node.

The choice "a" will create the memory pressure in the fast memory
node. If the memory pressure of the workload is high, the memory
pressure may become so high that the memory allocation latency of the
workload is influenced, e.g. the direct reclaiming may be triggered.

The choice "b" works much better at this aspect. If the memory
pressure of the workload is high, the hot pages promotion will stop
earlier because its allocation watermark is higher than that of the
normal memory allocation. So in this patch, choice "b" is
implemented.
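
As a concrete illustration of the extra gap (numbers taken from the
mm/vmscan.c hunk below): when memory tiering mode is enabled and the
node has a demotion target, the promote watermark added on top of the
high watermark is min(10MB, node_present_pages / 64), so for any DRAM
node larger than 640MB the effective extra gap is just 10MB.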

In addition to the original page placement optimization among
sockets, the NUMA balancing mechanism is extended to optimize page
placement according to hot/cold among different memory types. So the
sysctl user space interface (numa_balancing) is extended in a
backward compatible way as follows, so that users can enable/disable
these functionalities individually.

The sysctl is converted from a Boolean value to a bit field. The
definition of the flags is,

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING
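
For example, writing 3 (NUMA_BALANCING_NORMAL |
NUMA_BALANCING_MEMORY_TIERING) to /proc/sys/kernel/numa_balancing
enables both the original inter-socket optimization and the new
inter-memory-type optimization, writing 1 keeps only the original
behavior, and writing 0 disables NUMA balancing entirely.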

TODO:

- Update ABI document: Documentation/sysctl/kernel.txt

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/sched/sysctl.h | 10 ++++++++++
kernel/sched/core.c | 10 ++++------
kernel/sysctl.c | 7 ++++---
mm/migrate.c | 19 +++++++++++++++++--
mm/vmscan.c | 16 ++++++++++++++++
5 files changed, 51 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 304f431178fd..bc54c1d75d6d 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -35,6 +35,16 @@ enum sched_tunable_scaling {
SCHED_TUNABLESCALING_END,
};

+#define NUMA_BALANCING_DISABLED 0x0
+#define NUMA_BALANCING_NORMAL 0x1
+#define NUMA_BALANCING_MEMORY_TIERING 0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode 0
+#endif
+
/*
* control realtime throttling:
*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1bba4128a3e6..e61c2d415601 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4228,6 +4228,8 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);

#ifdef CONFIG_NUMA_BALANCING

+int sysctl_numa_balancing_mode;
+
void set_numabalancing_state(bool enabled)
{
if (enabled)
@@ -4240,20 +4242,16 @@ void set_numabalancing_state(bool enabled)
int sysctl_numa_balancing(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
- struct ctl_table t;
int err;
- int state = static_branch_likely(&sched_numa_balancing);

if (write && !capable(CAP_SYS_ADMIN))
return -EPERM;

- t = *table;
- t.data = &state;
- err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+ err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
if (err < 0)
return err;
if (write)
- set_numabalancing_state(state);
+ set_numabalancing_state(*(int *)table->data);
return err;
}
#endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 083be6af29d7..666c58455355 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -115,6 +115,7 @@ static int sixty = 60;

static int __maybe_unused neg_one = -1;
static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
static int __maybe_unused four = 4;
static unsigned long zero_ul;
static unsigned long one_ul = 1;
@@ -1803,12 +1804,12 @@ static struct ctl_table kern_table[] = {
#ifdef CONFIG_NUMA_BALANCING
{
.procname = "numa_balancing",
- .data = NULL, /* filled in by handler */
- .maxlen = sizeof(unsigned int),
+ .data = &sysctl_numa_balancing_mode,
+ .maxlen = sizeof(int),
.mode = 0644,
.proc_handler = sysctl_numa_balancing,
.extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
+ .extra2 = &three,
},
#endif /* CONFIG_NUMA_BALANCING */
{
diff --git a/mm/migrate.c b/mm/migrate.c
index c3affc587902..8a42e484bd9e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
#include <linux/ptrace.h>
#include <linux/oom.h>
#include <linux/memory.h>
+#include <linux/sched/sysctl.h>

#include <asm/tlbflush.h>

@@ -2110,16 +2111,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
{
int page_lru;
int nr_pages = thp_nr_pages(page);
+ int order = compound_order(page);

- VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+ VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);

/* Do not migrate THP mapped by multiple processes */
if (PageTransHuge(page) && total_mapcount(page) > 1)
return 0;

/* Avoid migrating to a node that is nearly full */
- if (!migrate_balanced_pgdat(pgdat, nr_pages))
+ if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+ int z;
+
+ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+ !numa_demotion_enabled)
+ return 0;
+ if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+ return 0;
+ for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+ if (populated_zone(pgdat->node_zones + z))
+ break;
+ }
+ wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
return 0;
+ }

if (isolate_lru_page(page))
return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f441c5946a4c..7fe737fd0e03 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@

#include <linux/swapops.h>
#include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>

#include "internal.h"

@@ -3775,6 +3776,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
return false;
}

+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK (10UL * 1024 * 1024 >> PAGE_SHIFT)
+
/*
* Returns true if there is an eligible zone balanced for the request order
* and highest_zoneidx
@@ -3796,6 +3803,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
continue;

mark = high_wmark_pages(zone);
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ numa_demotion_enabled &&
+ next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+ unsigned long promote_mark;
+
+ promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
+ pgdat->node_present_pages >> 6);
+ mark += promote_mark;
+ }
if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
return true;
}
--
2.30.2

2021-10-08 08:42:09

by Huang, Ying

Subject: [PATCH -V9 4/6] memory tiering: hot page selection with hint page fault latency

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory node need to be
identified. Essentially, the original NUMA balancing implementation
selects the most recently accessed (MRU) pages as the hot pages. But
this isn't a very good algorithm to identify the hot pages.

So, in this patch, we implement a better hot page selection
algorithm, based on the NUMA balancing page table scanning and hint
page faults as follows,

- When the page tables of the processes are scanned to change the
  PTE/PMD to be PROT_NONE, the current time is recorded in struct
  page as the scan time.

- When the page is accessed later, a hint page fault will occur. The
  scan time is read from the struct page, and the hint page fault
  latency is defined as

    hint page fault time - scan time

The shorter the hint page fault latency of a page, the higher the
probability that its access frequency is high. So the hint page
fault latency is a good estimation of how hot or cold the page is.
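
For example, if a page's PTE was made PROT_NONE at t = 1000 ms and
the hint page fault arrives at t = 1200 ms, the hint page fault
latency is 200 ms. With the default hot threshold of 1000 ms used
below, such a page would be considered hot and become a promotion
candidate, while a page whose fault arrives 3 seconds after scanning
would not.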

But it's hard to find extra space in struct page to hold the scan
time. Fortunately, we can reuse some bits used by the original NUMA
balancing.

NUMA balancing uses some bits in struct page to store the last page
accessing CPU and PID (refer to page_cpupid_xchg_last()). These bits
are used by the multi-stage node selection algorithm to avoid
migrating pages shared among NUMA nodes back and forth. But for
pages in the slow memory node, even if they are shared and accessed
by multiple NUMA nodes, as long as the pages are hot, they need to be
promoted to the fast memory node. So the accessing CPU and PID
information is unnecessary for the slow memory pages. We can reuse
these bits in struct page to record the scan time for them. For the
fast memory pages, these bits are used as before.
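
For illustration (the actual LAST_CPUPID width depends on the kernel
configuration): if LAST_CPUPID_SHIFT were 8 bits, then
PAGE_ACCESS_TIME_BUCKETS in this patch would be 12 - 8 = 4, so the
scan time in milliseconds is stored right-shifted by 4 bits. That
gives 16 ms granularity and a wrap-around range of 2^12 = 4096 ms,
which satisfies the "at least 4 seconds" requirement noted in the
code.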

The remaining problem is how to determine the hot threshold. It's
not easy to do automatically. So we provide a sysctl knob:
kernel.numa_balancing_hot_threshold_ms. All pages with a hint page
fault latency < the threshold will be considered hot. The system
administrator can determine the hot threshold based on various
information, such as the PMEM bandwidth limit, the average number of
pages that pass the hot threshold, etc. The default hot threshold is
1 second, which works well in our performance test.
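
For example, an administrator who wants promotion to be more
selective can lower the threshold, e.g. set
kernel.numa_balancing_hot_threshold_ms to 200 so that only pages
accessed within 200 ms of being scanned are treated as hot (the 200
ms value is just an illustration, not a recommendation).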

The downside of the patch is that the response time to workload hot
spot changes may be much longer. For example,

- A previous cold memory area becomes hot

- The hint page fault will be triggered. But the hint page fault
latency isn't shorter than the hot threshold. So the pages will
not be promoted.

- When the memory area is scanned again, maybe after a scan period,
the hint page fault latency measured will be shorter than the hot
threshold and the pages will be promoted.

To mitigate this,

- If there is enough free space in the fast memory node, the hot
threshold will not be used; all pages will be promoted upon the hint
page fault for fast response.

- If fast response is more important for system performance, the
administrator can set a higher hot threshold.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/mm.h | 29 ++++++++++++++++
include/linux/sched/sysctl.h | 1 +
kernel/sched/fair.c | 67 ++++++++++++++++++++++++++++++++++++
kernel/sysctl.c | 7 ++++
mm/huge_memory.c | 13 +++++--
mm/memory.c | 11 +++++-
mm/migrate.c | 12 +++++++
mm/mmzone.c | 17 +++++++++
mm/mprotect.c | 8 ++++-
9 files changed, 160 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..12aaa9ec8db0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1380,6 +1380,18 @@ static inline int page_to_nid(const struct page *page)
#endif

#ifdef CONFIG_NUMA_BALANCING
+/* page access time bits needs to hold at least 4 seconds */
+#define PAGE_ACCESS_TIME_MIN_BITS 12
+#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS
+#define PAGE_ACCESS_TIME_BUCKETS \
+ (PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
+#else
+#define PAGE_ACCESS_TIME_BUCKETS 0
+#endif
+
+#define PAGE_ACCESS_TIME_MASK \
+ (LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)
+
static inline int cpu_pid_to_cpupid(int cpu, int pid)
{
return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK);
@@ -1422,6 +1434,16 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
}

+static inline unsigned int xchg_page_access_time(struct page *page,
+ unsigned int time)
+{
+ unsigned int last_time;
+
+ last_time = xchg(&page->_last_cpupid,
+ (time >> PAGE_ACCESS_TIME_BUCKETS) & LAST_CPUPID_MASK);
+ return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
+
static inline int page_cpupid_last(struct page *page)
{
return page->_last_cpupid;
@@ -1437,6 +1459,7 @@ static inline int page_cpupid_last(struct page *page)
}

extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+extern unsigned int xchg_page_access_time(struct page *page, unsigned int time);

static inline void page_cpupid_reset_last(struct page *page)
{
@@ -1449,6 +1472,12 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
return page_to_nid(page); /* XXX */
}

+static inline unsigned int xchg_page_access_time(struct page *page,
+ unsigned int time)
+{
+ return 0;
+}
+
static inline int page_cpupid_last(struct page *page)
{
return page_to_nid(page); /* XXX */
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index bc54c1d75d6d..0ea43b146aee 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -41,6 +41,7 @@ enum sched_tunable_scaling {

#ifdef CONFIG_NUMA_BALANCING
extern int sysctl_numa_balancing_mode;
+extern unsigned int sysctl_numa_balancing_hot_threshold;
#else
#define sysctl_numa_balancing_mode 0
#endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f6a05d9b5443..8ed370c159dd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1069,6 +1069,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
/* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
unsigned int sysctl_numa_balancing_scan_delay = 1000;

+/* The page with hint page fault latency < threshold in ms is considered hot */
+unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+
struct numa_group {
refcount_t refcount;

@@ -1409,6 +1412,37 @@ static inline unsigned long group_weight(struct task_struct *p, int nid,
return 1000 * faults / total_faults;
}

+static bool pgdat_free_space_enough(struct pglist_data *pgdat)
+{
+ int z;
+ unsigned long enough_mark;
+
+ enough_mark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT,
+ pgdat->node_present_pages >> 4);
+ for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+ struct zone *zone = pgdat->node_zones + z;
+
+ if (!populated_zone(zone))
+ continue;
+
+ if (zone_watermark_ok(zone, 0,
+ high_wmark_pages(zone) + enough_mark,
+ ZONE_MOVABLE, 0))
+ return true;
+ }
+ return false;
+}
+
+static int numa_hint_fault_latency(struct page *page)
+{
+ unsigned int last_time, time;
+
+ time = jiffies_to_msecs(jiffies);
+ last_time = xchg_page_access_time(page, time);
+
+ return (time - last_time) & PAGE_ACCESS_TIME_MASK;
+}
+
bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
int src_nid, int dst_cpu)
{
@@ -1416,6 +1450,27 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
int dst_nid = cpu_to_node(dst_cpu);
int last_cpupid, this_cpupid;

+ /*
+ * The pages in slow memory node should be migrated according
+ * to hot/cold instead of accessing CPU node.
+ */
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ !node_is_toptier(src_nid)) {
+ struct pglist_data *pgdat;
+ unsigned long latency, th;
+
+ pgdat = NODE_DATA(dst_nid);
+ if (pgdat_free_space_enough(pgdat))
+ return true;
+
+ th = sysctl_numa_balancing_hot_threshold;
+ latency = numa_hint_fault_latency(page);
+ if (latency > th)
+ return false;
+
+ return true;
+ }
+
this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
last_cpupid = page_cpupid_xchg_last(page, this_cpupid);

@@ -2636,6 +2691,11 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
if (!p->mm)
return;

+ /* Numa faults statistics are unnecessary for the slow memory node */
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ !node_is_toptier(mem_node))
+ return;
+
/* Allocate buffer to track faults on a per-node basis */
if (unlikely(!p->numa_faults)) {
int size = sizeof(*p->numa_faults) *
@@ -2655,6 +2715,13 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
*/
if (unlikely(last_cpupid == (-1 & LAST_CPUPID_MASK))) {
priv = 1;
+ } else if (unlikely(!cpu_online(cpupid_to_cpu(last_cpupid)))) {
+ /*
+ * In memory tiering mode, cpupid of slow memory page is
+ * used to record page access time, so its value may be
+ * invalid during numa balancing mode transition.
+ */
+ return;
} else {
priv = cpupid_match_pid(p, last_cpupid);
if (!priv && !(flags & TNF_NO_GROUP))
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 666c58455355..ea105f52b646 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1811,6 +1811,13 @@ static struct ctl_table kern_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = &three,
},
+ {
+ .procname = "numa_balancing_hot_threshold_ms",
+ .data = &sysctl_numa_balancing_hot_threshold,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#endif /* CONFIG_NUMA_BALANCING */
{
.procname = "sched_rt_period_us",
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8edcd64b5b1f..10cdd8e399e7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1430,7 +1430,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
struct page *page;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
int page_nid = NUMA_NO_NODE;
- int target_nid, last_cpupid = -1;
+ int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
bool migrated = false;
bool was_writable = pmd_savedwrite(oldpmd);
int flags = 0;
@@ -1451,7 +1451,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
flags |= TNF_NO_GROUP;

page_nid = page_to_nid(page);
- last_cpupid = page_cpupid_last(page);
+ if (node_is_toptier(page_nid))
+ last_cpupid = page_cpupid_last(page);
target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
&flags);

@@ -1769,6 +1770,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,

if (prot_numa) {
struct page *page;
+ bool toptier;
/*
* Avoid trapping faults against the zero page. The read-only
* data is likely to be read-cached on the local CPU and
@@ -1781,13 +1783,18 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
goto unlock;

page = pmd_page(*pmd);
+ toptier = node_is_toptier(page_to_nid(page));
/*
* Skip scanning top tier node if normal numa
* balancing is disabled
*/
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
- node_is_toptier(page_to_nid(page)))
+ toptier)
goto unlock;
+
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ !toptier)
+ xchg_page_access_time(page, jiffies_to_msecs(jiffies));
}
/*
* In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/memory.c b/mm/memory.c
index 134575f9c35e..004e37591999 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -73,6 +73,7 @@
#include <linux/perf_event.h>
#include <linux/ptrace.h>
#include <linux/vmalloc.h>
+#include <linux/sched/sysctl.h>

#include <trace/events/kmem.h>

@@ -4387,8 +4388,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
flags |= TNF_SHARED;

- last_cpupid = page_cpupid_last(page);
page_nid = page_to_nid(page);
+ /*
+ * In memory tiering mode, cpupid of slow memory page is used
+ * to record page access time. So use default value.
+ */
+ if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+ !node_is_toptier(page_nid))
+ last_cpupid = (-1 & LAST_CPUPID_MASK);
+ else
+ last_cpupid = page_cpupid_last(page);
target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
&flags);
if (target_nid == NUMA_NO_NODE) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 8a42e484bd9e..3a03fe6c0452 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -577,6 +577,18 @@ void migrate_page_states(struct page *newpage, struct page *page)
* future migrations of this same page.
*/
cpupid = page_cpupid_xchg_last(page, -1);
+ /*
+ * If migrate between slow and fast memory node, reset cpupid,
+ * because that is used to record page access time in slow
+ * memory node
+ */
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+ bool f_toptier = node_is_toptier(page_to_nid(page));
+ bool t_toptier = node_is_toptier(page_to_nid(newpage));
+
+ if (f_toptier != t_toptier)
+ cpupid = -1;
+ }
page_cpupid_xchg_last(newpage, cpupid);

ksm_migrate_page(newpage, page);
diff --git a/mm/mmzone.c b/mm/mmzone.c
index eb89d6e018e2..27f9075632ee 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -99,4 +99,21 @@ int page_cpupid_xchg_last(struct page *page, int cpupid)

return last_cpupid;
}
+
+unsigned int xchg_page_access_time(struct page *page, unsigned int time)
+{
+ unsigned long old_flags, flags;
+ unsigned int last_time;
+
+ time >>= PAGE_ACCESS_TIME_BUCKETS;
+ do {
+ old_flags = flags = page->flags;
+ last_time = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+
+ flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
+ flags |= (time & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
+ } while (unlikely(cmpxchg(&page->flags, old_flags, flags) != old_flags));
+
+ return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
#endif
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0dd3f82ec6eb..bbf2c65cc4ae 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -85,6 +85,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
if (prot_numa) {
struct page *page;
int nid;
+ bool toptier;

/* Avoid TLB flush if possible */
if (pte_protnone(oldpte))
@@ -114,14 +115,19 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
nid = page_to_nid(page);
if (target_node == nid)
continue;
+ toptier = node_is_toptier(nid);

/*
* Skip scanning top tier node if normal numa
* balancing is disabled
*/
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
- node_is_toptier(nid))
+ toptier)
continue;
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ !toptier)
+ xchg_page_access_time(page,
+ jiffies_to_msecs(jiffies));
}

oldpte = ptep_modify_prot_start(vma, addr, pte);
--
2.30.2

2021-10-08 08:42:25

by Huang, Ying

Subject: [PATCH -V9 5/6] memory tiering: rate limit NUMA migration throughput

In NUMA balancing memory tiering mode, the hot slow memory pages can
be promoted to the fast memory node via NUMA balancing. But this
incurs some overhead too, so the workload performance may sometimes
be hurt. To avoid disturbing the workload too much in these
situations, we should make it possible to rate limit the promotion
throughput.

So, in this patch, we implement a simple rate limit algorithm as
follows. The number of the candidate pages to be promoted to the
fast memory node via NUMA balancing is counted; if the count exceeds
the limit specified by the user, the NUMA balancing promotion will be
stopped until the next second.

A new sysctl knob kernel.numa_balancing_rate_limit_mbps is added for
the users to specify the limit.
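
As a worked example (numbers taken from the implementation below):
the default limit is 65536 MB/s, which effectively leaves promotion
unthrottled on most systems. Setting
kernel.numa_balancing_rate_limit_mbps to 1024 would cap promotion,
when the fast memory node is short on free space, at roughly 1 GB/s
per target node, i.e. 262144 candidate base pages (4KB) per
one-second window.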

TODO: Add ABI document for new sysctl knob.

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/mmzone.h | 5 +++++
include/linux/sched/sysctl.h | 1 +
kernel/sched/fair.c | 29 +++++++++++++++++++++++++++--
kernel/sysctl.c | 8 ++++++++
mm/vmstat.c | 1 +
5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 37ccd6158765..d6a0efd387bd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -212,6 +212,7 @@ enum node_stat_item {
#endif
#ifdef CONFIG_NUMA_BALANCING
PGPROMOTE_SUCCESS, /* promote successfully */
+ PGPROMOTE_CANDIDATE, /* candidate pages to promote */
#endif
NR_VM_NODE_STAT_ITEMS
};
@@ -887,6 +888,10 @@ typedef struct pglist_data {
struct deferred_split deferred_split_queue;
#endif

+#ifdef CONFIG_NUMA_BALANCING
+ unsigned long numa_ts;
+ unsigned long numa_nr_candidate;
+#endif
/* Fields commonly accessed by the page reclaim scanner */

/*
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 0ea43b146aee..7d937adaac0f 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -42,6 +42,7 @@ enum sched_tunable_scaling {
#ifdef CONFIG_NUMA_BALANCING
extern int sysctl_numa_balancing_mode;
extern unsigned int sysctl_numa_balancing_hot_threshold;
+extern unsigned int sysctl_numa_balancing_rate_limit;
#else
#define sysctl_numa_balancing_mode 0
#endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8ed370c159dd..c57baeacfc1a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1071,6 +1071,11 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;

/* The page with hint page fault latency < threshold in ms is considered hot */
unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+/*
+ * Restrict the NUMA migration per second in MB for each target node
+ * if no enough free space in target node
+ */
+unsigned int sysctl_numa_balancing_rate_limit = 65536;

struct numa_group {
refcount_t refcount;
@@ -1443,6 +1448,23 @@ static int numa_hint_fault_latency(struct page *page)
return (time - last_time) & PAGE_ACCESS_TIME_MASK;
}

+static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
+ unsigned long rate_limit, int nr)
+{
+ unsigned long nr_candidate;
+ unsigned long now = jiffies, last_ts;
+
+ mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr);
+ nr_candidate = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+ last_ts = pgdat->numa_ts;
+ if (now > last_ts + HZ &&
+ cmpxchg(&pgdat->numa_ts, last_ts, now) == last_ts)
+ pgdat->numa_nr_candidate = nr_candidate;
+ if (nr_candidate - pgdat->numa_nr_candidate > rate_limit)
+ return false;
+ return true;
+}
+
bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
int src_nid, int dst_cpu)
{
@@ -1457,7 +1479,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
!node_is_toptier(src_nid)) {
struct pglist_data *pgdat;
- unsigned long latency, th;
+ unsigned long rate_limit, latency, th;

pgdat = NODE_DATA(dst_nid);
if (pgdat_free_space_enough(pgdat))
@@ -1468,7 +1490,10 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
if (latency > th)
return false;

- return true;
+ rate_limit =
+ sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+ return numa_migration_check_rate_limit(pgdat, rate_limit,
+ thp_nr_pages(page));
}

this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index ea105f52b646..0d89021bd66a 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1818,6 +1818,14 @@ static struct ctl_table kern_table[] = {
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "numa_balancing_rate_limit_mbps",
+ .data = &sysctl_numa_balancing_rate_limit,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ },
#endif /* CONFIG_NUMA_BALANCING */
{
.procname = "sched_rt_period_us",
diff --git a/mm/vmstat.c b/mm/vmstat.c
index fff0ec94d795..da2abeaf9e6c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1238,6 +1238,7 @@ const char * const vmstat_text[] = {
#endif
#ifdef CONFIG_NUMA_BALANCING
"pgpromote_success",
+ "pgpromote_candidate",
#endif

/* enum writeback_stat_item counters */
--
2.30.2

2021-10-08 08:42:47

by Huang, Ying

Subject: [PATCH -V9 6/6] memory tiering: adjust hot threshold automatically

It isn't easy for the administrator to determine the hot threshold.
So in this patch, a method to adjust the hot threshold automatically
is implemented. The basic idea is to control the number of the
candidate promotion pages to match the promotion rate limit. If the
hint page fault latency of a page is less than the hot threshold, we
will try to promote the page, and the page is called the candidate
promotion page.

If the number of the candidate promotion pages in the statistics
interval is much more than the promotion rate limit, the hot threshold
will be decreased to reduce the number of the candidate promotion
pages. Otherwise, the hot threshold will be increased to increase the
number of the candidate promotion pages.

To make the above method work, in each statistics interval, the
total number of the pages to check (on which the hint page faults
occur) and the hot/cold distribution need to be stable. Because the
page tables are scanned linearly in NUMA balancing, but the hot/cold
distribution isn't uniform along the address space, the statistics
interval should be larger than the NUMA balancing scan period. So in
the patch, the max scan period is used as the statistics interval,
and it works well in our tests.

The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the
initial value and max value of the hot threshold.
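
As a concrete example (constants taken from the code below): with
the default threshold of 1000 ms and NUMA_MIGRATION_ADJUST_STEPS =
16, each adjustment moves the per-node threshold by 1000 / 16 = 62
ms. If the number of candidate pages observed in a statistics
interval exceeds 110% of what the rate limit allows for that
interval, the threshold is lowered by one step (but never below one
step); if it falls under 90%, the threshold is raised by one step,
capped at the sysctl value.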

Signed-off-by: "Huang, Ying" <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
include/linux/mmzone.h | 3 ++
include/linux/sched/sysctl.h | 2 ++
kernel/sched/fair.c | 59 +++++++++++++++++++++++++++++++++---
kernel/sysctl.c | 3 +-
4 files changed, 62 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d6a0efd387bd..69bb672ea743 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -891,6 +891,9 @@ typedef struct pglist_data {
#ifdef CONFIG_NUMA_BALANCING
unsigned long numa_ts;
unsigned long numa_nr_candidate;
+ unsigned long numa_threshold_ts;
+ unsigned long numa_threshold_nr_candidate;
+ unsigned long numa_threshold;
#endif
/* Fields commonly accessed by the page reclaim scanner */

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 7d937adaac0f..ff2c43e8ebac 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -84,6 +84,8 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos);
int sysctl_numa_balancing(struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos);
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+ size_t *lenp, loff_t *ppos);
int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos);

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c57baeacfc1a..ff57055aab23 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1465,6 +1465,54 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
return true;
}

+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+ size_t *lenp, loff_t *ppos)
+{
+ int err;
+ struct pglist_data *pgdat;
+
+ if (write && !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ if (err < 0 || !write)
+ return err;
+
+ for_each_online_pgdat(pgdat)
+ pgdat->numa_threshold = 0;
+
+ return err;
+}
+
+#define NUMA_MIGRATION_ADJUST_STEPS 16
+
+static void numa_migration_adjust_threshold(struct pglist_data *pgdat,
+ unsigned long rate_limit,
+ unsigned long ref_th)
+{
+ unsigned long now = jiffies, last_th_ts, th_period;
+ unsigned long unit_th, th;
+ unsigned long nr_cand, ref_cand, diff_cand;
+
+ th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max);
+ last_th_ts = pgdat->numa_threshold_ts;
+ if (now > last_th_ts + th_period &&
+ cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+ ref_cand = rate_limit *
+ sysctl_numa_balancing_scan_period_max / 1000;
+ nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+ diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+ unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS;
+ th = pgdat->numa_threshold ? : ref_th;
+ if (diff_cand > ref_cand * 11 / 10)
+ th = max(th - unit_th, unit_th);
+ else if (diff_cand < ref_cand * 9 / 10)
+ th = min(th + unit_th, ref_th);
+ pgdat->numa_threshold_nr_candidate = nr_cand;
+ pgdat->numa_threshold = th;
+ }
+}
+
bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
int src_nid, int dst_cpu)
{
@@ -1479,19 +1527,22 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
!node_is_toptier(src_nid)) {
struct pglist_data *pgdat;
- unsigned long rate_limit, latency, th;
+ unsigned long rate_limit, latency, th, def_th;

pgdat = NODE_DATA(dst_nid);
if (pgdat_free_space_enough(pgdat))
return true;

- th = sysctl_numa_balancing_hot_threshold;
+ def_th = sysctl_numa_balancing_hot_threshold;
+ rate_limit =
+ sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+ numa_migration_adjust_threshold(pgdat, rate_limit, def_th);
+
+ th = pgdat->numa_threshold ? : def_th;
latency = numa_hint_fault_latency(page);
if (latency > th)
return false;

- rate_limit =
- sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
return numa_migration_check_rate_limit(pgdat, rate_limit,
thp_nr_pages(page));
}
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 0d89021bd66a..0a87d5877718 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1816,7 +1816,8 @@ static struct ctl_table kern_table[] = {
.data = &sysctl_numa_balancing_hot_threshold,
.maxlen = sizeof(unsigned int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = sysctl_numa_balancing_threshold,
+ .extra1 = SYSCTL_ZERO,
},
{
.procname = "numa_balancing_rate_limit_mbps",
--
2.30.2

2021-10-08 08:42:52

by Huang, Ying

Subject: [PATCH -V9 3/6] memory tiering: skip to scan fast memory

If NUMA balancing isn't used to optimize the page placement among
sockets but only among memory types, the hot pages in the fast memory
node cannot be migrated (promoted) anywhere. So it's unnecessary to
scan the pages in the fast memory node by changing their PTE/PMD
mappings to PROT_NONE; the corresponding hint page faults can be
avoided too.

In the test, if only the memory tiering NUMA balancing mode is
enabled, the number of the NUMA balancing hint faults for the DRAM
node is reduced to almost 0 with the patch, while the benchmark score
doesn't change visibly.

Signed-off-by: "Huang, Ying" <[email protected]>
Suggested-by: Dave Hansen <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Wei Xu <[email protected]>
Cc: osalvador <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
mm/huge_memory.c | 30 +++++++++++++++++++++---------
mm/mprotect.c | 13 ++++++++++++-
2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5e9ef0fc261e..8edcd64b5b1f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
#include <linux/oom.h>
#include <linux/numa.h>
#include <linux/page_owner.h>
+#include <linux/sched/sysctl.h>

#include <asm/tlb.h>
#include <asm/pgalloc.h>
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
}
#endif

- /*
- * Avoid trapping faults against the zero page. The read-only
- * data is likely to be read-cached on the local CPU and
- * local/remote hits to the zero page are not interesting.
- */
- if (prot_numa && is_huge_zero_pmd(*pmd))
- goto unlock;
+ if (prot_numa) {
+ struct page *page;
+ /*
+ * Avoid trapping faults against the zero page. The read-only
+ * data is likely to be read-cached on the local CPU and
+ * local/remote hits to the zero page are not interesting.
+ */
+ if (is_huge_zero_pmd(*pmd))
+ goto unlock;

- if (prot_numa && pmd_protnone(*pmd))
- goto unlock;
+ if (pmd_protnone(*pmd))
+ goto unlock;

+ page = pmd_page(*pmd);
+ /*
+ * Skip scanning top tier node if normal numa
+ * balancing is disabled
+ */
+ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+ node_is_toptier(page_to_nid(page)))
+ goto unlock;
+ }
/*
* In case prot_numa, we are under mmap_read_lock(mm). It's critical
* to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..0dd3f82ec6eb 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
#include <linux/uaccess.h>
#include <linux/mm_inline.h>
#include <linux/pgtable.h>
+#include <linux/sched/sysctl.h>
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
*/
if (prot_numa) {
struct page *page;
+ int nid;

/* Avoid TLB flush if possible */
if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
* Don't mess with PTEs if page is already on the node
* a single-threaded process is running on.
*/
- if (target_node == page_to_nid(page))
+ nid = page_to_nid(page);
+ if (target_node == nid)
+ continue;
+
+ /*
+ * Skip scanning top tier node if normal numa
+ * balancing is disabled
+ */
+ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+ node_is_toptier(nid))
continue;
}

--
2.30.2

2021-10-08 08:46:46

by Huang, Ying

Subject: Re: [PATCH -V9 0/6] NUMA balancing: optimize memory placement for memory tiering system

Hi, Mel,

Huang Ying <[email protected]> writes:

> The changes since the last post are as follows,
>
> - Rebased on v5.15-rc4
>
> - Make "add promotion counter" the first patch per Yang's comments
>
> --
>
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
> memory subsystem of these machines can be called memory tiering
> system, because the performance of the different types of memory are
> different.
>
> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
> for use like normal RAM"), the PMEM could be used as the
> cost-effective volatile memory in separate NUMA nodes. In a typical
> memory tiering system, there are CPUs, DRAM and PMEM in each physical
> NUMA node. The CPUs and the DRAM will be put in one logical node,
> while the PMEM will be put in another (faked) logical node.
>
> To optimize the system overall performance, the hot pages should be
> placed in DRAM node. To do that, we need to identify the hot pages in
> the PMEM node and migrate them to DRAM node via NUMA migration.
>
> In the original NUMA balancing, there are already a set of existing
> mechanisms to identify the pages recently accessed by the CPUs in a
> node and migrate the pages to the node. So we can reuse these
> mechanisms to build the mechanisms to optimize the page placement in
> the memory tiering system. This is implemented in this patchset.
>
> At the other hand, the cold pages should be placed in PMEM node. So,
> we also need to identify the cold pages in the DRAM node and migrate
> them to PMEM node.
>
> In commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim"), a
> mechanism to demote the cold DRAM pages to PMEM node under memory
> pressure is implemented. Based on that, the cold DRAM pages can be
> demoted to PMEM node proactively to free some memory space on DRAM
> node to accommodate the promoted hot PMEM pages. This is implemented
> in this patchset too.
>
> We have tested the solution with the pmbench memory accessing
> benchmark with the 80:20 read/write ratio and the normal access
> address distribution on a 2 socket Intel server with Optane DC
> Persistent Memory Model. The test results of the base kernel and step
> by step optimizations are as follows,
>
>               Throughput   Promotion   DRAM bandwidth
>                access/s      MB/s          MB/s
>               -----------  ----------  --------------
> Base           69263986.8                      1830.2
> Patch 2       135691921.4      385.6          11315.9
> Patch 3       133239016.8      384.7          11065.2
> Patch 4       151310868.9      197.6          11397.0
> Patch 5       142311252.8       99.3           9580.8
> Patch 6       149044263.9       65.5           9922.8
>
> The whole patchset improves the benchmark score up to 115.2%. The
> basic NUMA balancing based optimization solution (patch 2), the hot
> page selection algorithm (patch 4), and the threshold automatic
> adjustment algorithms (patch 6) improves the performance or reduce the
> overhead (promotion MB/s) greatly.
>
> Changelog:
>
> v9:
>
> - Rebased on v5.15-rc4
>
> - Make "add promotion counter" the first patch per Yang's comments

In this new version, all dependencies have been merged into the
latest upstream kernel, so it should be easier to review than the
previous version, of which you have already reviewed a part. Do you
have time to take a look at this new series now?

Best Regards,
Huang, Ying

2021-10-12 08:44:15

by kernel test robot

Subject: [memory tiering] 76ff9ff49a: vm-scalability.median 5.0% improvement



Greeting,

FYI, we noticed a 5.0% improvement of vm-scalability.median due to commit:


commit: 76ff9ff49a478cf8936f020e7cbd052babc86245 ("[PATCH -V9 3/6] memory tiering: skip to scan fast memory")
url: https://github.com/0day-ci/linux/commits/Huang-Ying/NUMA-balancing-optimize-memory-placement-for-memory-tiering-system/20211008-164204
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 769fdf83df57b373660343ef4270b3ada91ef434

in testcase: vm-scalability
on test machine: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:

runtime: 300s
size: 512G
test: anon-w-rand
cpufreq_governor: performance
ucode: 0x5003006

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/





Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/512G/lkp-csl-2ap4/anon-w-rand/vm-scalability/0x5003006

commit:
9fbea5e92b ("NUMA balancing: optimize page placement for memory tiering system")
76ff9ff49a ("memory tiering: skip to scan fast memory")

9fbea5e92b8daae4 76ff9ff49a478cf8936f020e7cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.00 ? 5% -10.1% 0.00 ? 3% vm-scalability.free_time
43870 +5.0% 46077 vm-scalability.median
0.53 ? 24% +1.9 2.40 ? 20% vm-scalability.median_stddev%
296.42 +3.3% 306.08 vm-scalability.time.elapsed_time
296.42 +3.3% 306.08 vm-scalability.time.elapsed_time.max
296908 ? 2% -20.0% 237565 ? 5% vm-scalability.time.involuntary_context_switches
8965353 -68.2% 2846758 ? 6% vm-scalability.time.minor_page_faults
17482 -14.3% 14989 vm-scalability.time.percent_of_cpu_this_job_got
4484 -19.8% 3594 ? 2% vm-scalability.time.system_time
47330 -10.7% 42287 ? 2% vm-scalability.time.user_time
12597 -12.5% 11028 ? 3% vm-scalability.time.voluntary_context_switches
2.336e+09 -11.9% 2.058e+09 ? 2% vm-scalability.workload
11927 ? 6% +69.4% 20201 ? 4% uptime.idle
45.31 ? 2% +4.0% 47.13 boot-time.boot
7668 ? 2% +4.5% 8012 boot-time.idle
3.992e+09 ? 17% +197.6% 1.188e+10 ? 8% cpuidle..time
8277676 ? 17% +165.0% 21937927 ? 13% cpuidle..usage
1437923 ? 12% -26.0% 1063848 ? 4% numa-numastat.node1.local_node
1484898 ? 10% -22.5% 1150364 ? 4% numa-numastat.node1.numa_hit
47588 ? 61% +82.4% 86778 numa-numastat.node1.other_node
6.97 ? 16% +13.2 20.18 ? 8% mpstat.cpu.all.idle%
1.90 -0.2 1.72 mpstat.cpu.all.irq%
7.97 -1.8 6.19 mpstat.cpu.all.sys%
83.08 -11.3 71.83 ? 2% mpstat.cpu.all.usr%
1444 -14.4% 1236 ? 4% turbostat.Avg_MHz
93.50 -13.4 80.13 ? 4% turbostat.Busy%
783206 ?129% +413.7% 4023234 ? 60% turbostat.C6
6.04 ? 21% +209.3% 18.67 ? 21% turbostat.CPU%c1
239.62 -6.3% 224.43 ? 2% turbostat.PkgWatt
7.00 ? 18% +190.5% 20.33 ? 7% vmstat.cpu.id
82.33 -14.0% 70.83 vmstat.cpu.us
1.048e+08 +12.1% 1.175e+08 vmstat.memory.free
179.17 -14.4% 153.33 ? 2% vmstat.procs.r
4031 -8.1% 3704 ? 3% vmstat.system.cs
22475618 ? 3% -22.1% 17517115 ? 12% numa-meminfo.node1.AnonHugePages
22587494 ? 3% -22.1% 17584894 ? 12% numa-meminfo.node1.AnonPages
22646122 ? 4% -22.3% 17589979 ? 12% numa-meminfo.node1.Inactive
22646122 ? 4% -22.3% 17589970 ? 12% numa-meminfo.node1.Inactive(anon)
25905734 ? 4% +18.7% 30756229 ? 6% numa-meminfo.node1.MemFree
23629681 ? 4% -20.5% 18779185 ? 10% numa-meminfo.node1.MemUsed
49340 ? 17% -29.9% 34586 ? 15% numa-meminfo.node1.PageTables
36467 ?220% +2391.0% 908402 ?112% numa-meminfo.node3.Unevictable
87524616 -14.7% 74626124 meminfo.AnonHugePages
87952806 -14.7% 74999102 meminfo.AnonPages
92866025 -14.8% 79091942 ? 2% meminfo.Committed_AS
88140856 -14.7% 75179098 meminfo.Inactive
88140803 -14.7% 75179046 meminfo.Inactive(anon)
1.04e+08 +12.5% 1.171e+08 meminfo.MemAvailable
1.047e+08 +12.5% 1.177e+08 meminfo.MemFree
93039748 -14.0% 79986317 meminfo.Memused
179104 -14.5% 153155 meminfo.PageTables
5629708 ? 3% -22.0% 4392349 ? 12% numa-vmstat.node1.nr_anon_pages
10940 ? 3% -21.9% 8545 ? 12% numa-vmstat.node1.nr_anon_transparent_hugepages
6493715 ? 3% +18.5% 7693620 ? 6% numa-vmstat.node1.nr_free_pages
5644782 ? 3% -22.2% 4393439 ? 12% numa-vmstat.node1.nr_inactive_anon
201.00 ? 93% -100.0% 0.00 numa-vmstat.node1.nr_isolated_anon
12322 ? 17% -29.8% 8645 ? 15% numa-vmstat.node1.nr_page_table_pages
5644427 ? 3% -22.2% 4393052 ? 12% numa-vmstat.node1.nr_zone_inactive_anon
9116 ?220% +2391.1% 227100 ?112% numa-vmstat.node3.nr_unevictable
9116 ?220% +2391.1% 227100 ?112% numa-vmstat.node3.nr_zone_unevictable
21996061 -14.8% 18741659 ? 2% proc-vmstat.nr_anon_pages
42751 -14.8% 36422 ? 2% proc-vmstat.nr_anon_transparent_hugepages
2594160 +12.6% 2921578 proc-vmstat.nr_dirty_background_threshold
5194664 +12.6% 5850300 proc-vmstat.nr_dirty_threshold
26161778 +12.5% 29440763 proc-vmstat.nr_free_pages
22043312 -14.8% 18787084 ? 2% proc-vmstat.nr_inactive_anon
359.83 ? 17% -100.0% 0.17 ?223% proc-vmstat.nr_isolated_anon
34447 -2.6% 33554 proc-vmstat.nr_kernel_stack
44738 -14.5% 38259 ? 2% proc-vmstat.nr_page_table_pages
22043308 -14.8% 18787082 ? 2% proc-vmstat.nr_zone_inactive_anon
5746797 -100.0% 0.00 proc-vmstat.numa_hint_faults
5649618 -100.0% 0.00 proc-vmstat.numa_hint_faults_local
5698412 ? 2% -10.4% 5106741 ? 4% proc-vmstat.numa_hit
5622112 -100.0% 0.00 proc-vmstat.numa_huge_pte_updates
5439419 ? 2% -10.9% 4847613 ? 4% proc-vmstat.numa_local
23521423 ? 6% -100.0% 0.00 proc-vmstat.numa_pages_migrated
2.879e+09 -100.0% 0.00 proc-vmstat.numa_pte_updates
5705992 ? 2% -10.3% 5116075 ? 4% proc-vmstat.pgalloc_normal
10333677 -59.3% 4210415 ? 4% proc-vmstat.pgfault
5569170 ? 3% -10.1% 5005163 ? 4% proc-vmstat.pgfree
23521423 ? 6% -100.0% 0.00 proc-vmstat.pgmigrate_success
1009758 -11.9% 889598 ? 2% proc-vmstat.thp_fault_alloc
45861 ? 6% -100.0% 0.00 proc-vmstat.thp_migration_success
0.27 ? 81% -88.7% 0.03 ? 65% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
766.06 ? 85% -99.4% 4.40 ? 98% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.06 ? 15% +61.1% 0.09 ? 27% perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
496.79 ? 86% -94.2% 29.00 ?214% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
37.07 ?174% -91.7% 3.08 ? 83% perf-sched.sch_delay.max.ms.syslog_print.do_syslog.part.0.kmsg_read
96.02 ? 24% +56.7% 150.46 ? 10% perf-sched.total_wait_and_delay.average.ms
39825 ? 20% -34.8% 25952 ? 10% perf-sched.total_wait_and_delay.count.ms
95.79 ? 24% +56.0% 149.39 ? 10% perf-sched.total_wait_time.average.ms
1.40 ? 42% -58.2% 0.59 ? 31% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.38 ? 33% -47.3% 0.20 ? 21% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
17.03 ? 56% +92.0% 32.70 ? 39% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
586.50 ? 30% -62.1% 222.50 ?118% perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
72.83 ?141% +233.0% 242.50 ? 19% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
95.85 ?170% -90.3% 9.34 ? 48% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
1710 ? 66% -96.2% 64.24 ?116% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.40 ? 42% -58.2% 0.58 ? 31% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.38 ? 33% -47.1% 0.20 ? 21% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
9.36 ?203% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.copy_huge_page.migrate_page_copy.migrate_page
1.17 ?183% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
2.18 ? 93% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
1.67 ? 86% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
92.94 ?166% -88.8% 10.44 ? 85% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
95.85 ?170% -90.3% 9.34 ? 48% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
341.43 ?215% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.copy_huge_page.migrate_page_copy.migrate_page
1.32 ?163% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.rmap_walk_anon.remove_migration_ptes.migrate_pages
27.99 ?153% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
6.19 ? 89% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.wait_for_completion.stop_two_cpus.migrate_swap
0.22 ?142% +2.8 2.99 ? 42% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.24 ?141% +3.2 3.41 ? 42% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
0.24 ?141% +3.2 3.44 ? 42% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ?141% +3.4 3.60 ? 42% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ?141% +3.4 3.61 ? 42% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.25 ?141% +3.4 3.61 ? 42% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
0.26 ?141% +3.4 3.67 ? 42% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
0.00 +0.1 0.06 ? 9% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.06 ? 19% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.00 +0.1 0.06 ? 17% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.00 +0.1 0.09 ? 21% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.1 0.10 ? 61% perf-profile.children.cycles-pp.tick_nohz_next_event
0.08 ? 50% +0.1 0.18 ? 17% perf-profile.children.cycles-pp.devkmsg_write.cold
0.08 ? 50% +0.1 0.18 ? 17% perf-profile.children.cycles-pp.devkmsg_emit
0.09 ? 52% +0.1 0.18 ? 15% perf-profile.children.cycles-pp.serial8250_console_putchar
0.09 ? 52% +0.1 0.19 ? 15% perf-profile.children.cycles-pp.uart_console_write
0.09 ? 52% +0.1 0.19 ? 15% perf-profile.children.cycles-pp.write
0.09 ? 51% +0.1 0.20 ? 15% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +0.1 0.11 ? 58% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.10 ? 50% +0.1 0.20 ? 16% perf-profile.children.cycles-pp.vprintk_emit
0.10 ? 49% +0.1 0.21 ? 21% perf-profile.children.cycles-pp.ksys_write
0.10 ? 49% +0.1 0.21 ? 21% perf-profile.children.cycles-pp.vfs_write
0.10 ? 49% +0.1 0.21 ? 21% perf-profile.children.cycles-pp.new_sync_write
0.09 ? 52% +0.1 0.20 ? 15% perf-profile.children.cycles-pp.console_unlock
0.09 ? 52% +0.1 0.20 ? 15% perf-profile.children.cycles-pp.serial8250_console_write
0.14 ? 16% +0.1 0.27 ? 23% perf-profile.children.cycles-pp.__softirqentry_text_start
0.18 ? 13% +0.1 0.32 ? 19% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.1 0.14 ? 55% perf-profile.children.cycles-pp.menu_select
0.69 ? 14% +0.2 0.88 ? 10% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.29 ? 10% +0.4 1.71 ? 11% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.38 ? 63% +2.7 3.05 ? 41% perf-profile.children.cycles-pp.intel_idle
0.43 ? 62% +3.1 3.49 ? 41% perf-profile.children.cycles-pp.cpuidle_enter
0.43 ? 62% +3.1 3.49 ? 41% perf-profile.children.cycles-pp.cpuidle_enter_state
0.44 ? 61% +3.2 3.61 ? 42% perf-profile.children.cycles-pp.start_secondary
0.44 ? 61% +3.2 3.67 ? 42% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.44 ? 61% +3.2 3.67 ? 42% perf-profile.children.cycles-pp.cpu_startup_entry
0.44 ? 61% +3.2 3.67 ? 42% perf-profile.children.cycles-pp.do_idle
0.38 ? 63% +2.7 3.05 ? 41% perf-profile.self.cycles-pp.intel_idle
1.935e+10 -14.9% 1.646e+10 ? 2% perf-stat.i.branch-instructions
9.144e+08 -15.8% 7.7e+08 ? 2% perf-stat.i.cache-misses
9.366e+08 -15.3% 7.938e+08 ? 2% perf-stat.i.cache-references
3904 -7.4% 3615 ? 3% perf-stat.i.context-switches
8.60 -5.0% 8.18 ? 2% perf-stat.i.cpi
5.469e+11 -14.6% 4.67e+11 ? 2% perf-stat.i.cpu-cycles
729.58 -2.9% 708.47 perf-stat.i.cycles-between-cache-misses
2.308e+10 -14.9% 1.964e+10 ? 2% perf-stat.i.dTLB-loads
2046156 ? 9% -23.2% 1571948 ? 13% perf-stat.i.dTLB-store-misses
8.466e+09 -14.7% 7.22e+09 ? 2% perf-stat.i.dTLB-stores
89.84 -9.0 80.88 perf-stat.i.iTLB-load-miss-rate%
263296 ? 9% +123.0% 587185 ? 19% perf-stat.i.iTLB-loads
8.137e+10 -14.9% 6.925e+10 ? 2% perf-stat.i.instructions
0.15 +2.6% 0.15 perf-stat.i.ipc
2.85 -14.6% 2.44 ? 2% perf-stat.i.metric.GHz
126.49 ? 6% +190.4% 367.36 ? 6% perf-stat.i.metric.K/sec
275.22 -15.0% 233.82 ? 2% perf-stat.i.metric.M/sec
33877 ? 2% -61.8% 12941 ? 4% perf-stat.i.minor-faults
59.35 ? 3% -15.6 43.76 ? 3% perf-stat.i.node-load-miss-rate%
2576301 ? 6% -74.8% 647958 ? 6% perf-stat.i.node-load-misses
0.83 ? 13% +6.9 7.75 ? 13% perf-stat.i.node-store-miss-rate%
3938739 ? 6% +978.5% 42480928 ? 8% perf-stat.i.node-store-misses
8.997e+08 -19.7% 7.224e+08 ? 2% perf-stat.i.node-stores
33880 ? 2% -61.8% 12944 ? 4% perf-stat.i.page-faults
0.03 ? 6% +0.0 0.04 ? 7% perf-stat.overall.branch-miss-rate%
83.20 -13.9 69.27 ? 4% perf-stat.overall.iTLB-load-miss-rate%
65.41 ? 3% -33.2 32.16 ? 4% perf-stat.overall.node-load-miss-rate%
0.44 ? 6% +5.1 5.54 ? 8% perf-stat.overall.node-store-miss-rate%
1.926e+10 -14.4% 1.649e+10 ? 2% perf-stat.ps.branch-instructions
9.107e+08 -15.2% 7.718e+08 ? 2% perf-stat.ps.cache-misses
9.33e+08 -14.7% 7.954e+08 ? 2% perf-stat.ps.cache-references
3894 -7.7% 3593 ? 3% perf-stat.ps.context-switches
5.458e+11 -14.1% 4.689e+11 perf-stat.ps.cpu-cycles
2.298e+10 -14.4% 1.968e+10 ? 2% perf-stat.ps.dTLB-loads
2040735 ? 8% -22.8% 1576006 ? 12% perf-stat.ps.dTLB-store-misses
8.43e+09 -14.2% 7.234e+09 ? 2% perf-stat.ps.dTLB-stores
256776 ? 8% +121.6% 568913 ? 19% perf-stat.ps.iTLB-loads
8.101e+10 -14.3% 6.939e+10 ? 2% perf-stat.ps.instructions
34379 ? 2% -61.3% 13320 ? 4% perf-stat.ps.minor-faults
2701913 ? 6% -76.1% 645841 ? 6% perf-stat.ps.node-load-misses
3988458 ? 5% +965.6% 42500100 ? 8% perf-stat.ps.node-store-misses
8.956e+08 -19.1% 7.243e+08 ? 2% perf-stat.ps.node-stores
34381 ? 2% -61.3% 13321 ? 4% perf-stat.ps.page-faults
2.406e+13 -11.5% 2.131e+13 ? 2% perf-stat.total.instructions
13281 ? 10% +28.6% 17086 ? 5% softirqs.CPU0.SCHED
8734 ? 14% +45.5% 12709 ? 11% softirqs.CPU1.SCHED
6931 ? 15% +90.2% 13181 ? 7% softirqs.CPU120.SCHED
7210 ? 13% +84.1% 13274 ? 8% softirqs.CPU121.SCHED
6879 ? 16% +89.8% 13054 ? 10% softirqs.CPU122.SCHED
6827 ? 14% +97.0% 13451 ? 13% softirqs.CPU123.SCHED
6820 ? 14% +96.1% 13371 ? 10% softirqs.CPU124.SCHED
6866 ? 15% +94.3% 13340 ? 10% softirqs.CPU125.SCHED
6790 ? 14% +94.6% 13216 ? 12% softirqs.CPU126.SCHED
6882 ? 13% +94.5% 13384 ? 11% softirqs.CPU127.SCHED
7041 ? 17% +85.8% 13081 ? 10% softirqs.CPU128.SCHED
7089 ? 16% +86.3% 13205 ? 10% softirqs.CPU129.SCHED
6819 ? 14% +93.9% 13222 ? 11% softirqs.CPU130.SCHED
6845 ? 14% +97.4% 13512 ? 11% softirqs.CPU131.SCHED
6799 ? 15% +94.6% 13229 ? 14% softirqs.CPU132.SCHED
6814 ? 14% +95.1% 13295 ? 11% softirqs.CPU133.SCHED
6902 ? 13% +90.2% 13130 ? 14% softirqs.CPU134.SCHED
6811 ? 14% +92.1% 13083 ? 13% softirqs.CPU135.SCHED
6817 ? 15% +98.4% 13525 ? 12% softirqs.CPU136.SCHED
6752 ? 15% +95.9% 13229 ? 14% softirqs.CPU137.SCHED
6782 ? 14% +98.2% 13442 ? 14% softirqs.CPU138.SCHED
6787 ? 14% +93.4% 13130 ? 14% softirqs.CPU139.SCHED
6784 ? 14% +96.8% 13349 ? 12% softirqs.CPU140.SCHED
6847 ? 13% +90.2% 13025 ? 14% softirqs.CPU141.SCHED
6819 ? 15% +91.1% 13028 ? 14% softirqs.CPU142.SCHED
6966 ? 14% +88.3% 13118 ? 14% softirqs.CPU143.SCHED
7573 ? 7% +73.9% 13172 ? 12% softirqs.CPU144.SCHED
7691 ? 8% +71.7% 13210 ? 16% softirqs.CPU146.SCHED
7671 ? 8% +71.6% 13165 ? 17% softirqs.CPU148.SCHED
7773 ? 7% +76.2% 13696 ? 11% softirqs.CPU159.SCHED
8036 ? 7% +49.3% 11995 ? 14% softirqs.CPU168.SCHED
7981 ? 7% +55.5% 12409 ? 19% softirqs.CPU169.SCHED
8003 ? 7% +55.6% 12451 ? 16% softirqs.CPU170.SCHED
7581 ? 6% +65.5% 12543 ? 16% softirqs.CPU191.SCHED
8101 ? 17% +57.4% 12752 ? 16% softirqs.CPU2.SCHED
7027 ? 9% +50.3% 10561 ? 6% softirqs.CPU24.SCHED
6931 ? 13% +75.2% 12145 ? 3% softirqs.CPU25.SCHED
7329 ? 23% +81.4% 13295 ? 6% softirqs.CPU26.SCHED
6843 ? 14% +95.0% 13340 ? 4% softirqs.CPU27.SCHED
6795 ? 12% +100.9% 13652 ? 7% softirqs.CPU28.SCHED
6796 ? 14% +97.4% 13418 ? 8% softirqs.CPU29.SCHED
7139 ? 11% +77.5% 12670 ? 24% softirqs.CPU3.SCHED
6830 ? 14% +98.1% 13531 ? 8% softirqs.CPU30.SCHED
6758 ? 13% +102.3% 13669 ? 6% softirqs.CPU31.SCHED
6456 ? 15% +104.2% 13182 ? 9% softirqs.CPU32.SCHED
6879 ? 14% +94.3% 13364 ? 12% softirqs.CPU33.SCHED
6828 ? 13% +94.5% 13277 ? 11% softirqs.CPU34.SCHED
6782 ? 13% +94.8% 13211 ? 13% softirqs.CPU35.SCHED
6780 ? 14% +96.3% 13309 ? 11% softirqs.CPU36.SCHED
6779 ? 13% +87.7% 12721 ? 12% softirqs.CPU37.SCHED
6734 ? 14% +96.5% 13234 ? 10% softirqs.CPU38.SCHED
6841 ? 13% +95.4% 13368 ? 13% softirqs.CPU39.SCHED
6839 ? 13% +88.3% 12875 ? 10% softirqs.CPU40.SCHED
6714 ? 14% +94.6% 13062 ? 11% softirqs.CPU41.SCHED
7162 ? 10% +82.8% 13095 ? 13% softirqs.CPU42.SCHED
6782 ? 14% +93.5% 13126 ? 13% softirqs.CPU43.SCHED
6750 ? 13% +94.2% 13109 ? 14% softirqs.CPU44.SCHED
6769 ? 13% +97.9% 13397 ? 13% softirqs.CPU45.SCHED
6800 ? 15% +91.3% 13009 ? 17% softirqs.CPU46.SCHED
6755 ? 14% +95.5% 13209 ? 13% softirqs.CPU47.SCHED
7404 ? 5% +43.2% 10604 ? 3% softirqs.CPU48.SCHED
7766 ? 7% +55.9% 12110 ? 4% softirqs.CPU49.SCHED
7568 ? 8% +71.6% 12988 ? 9% softirqs.CPU50.SCHED
7348 ? 8% +85.3% 13613 ? 17% softirqs.CPU51.SCHED
7612 ? 8% +74.2% 13261 ? 17% softirqs.CPU52.SCHED
7671 ? 8% +73.4% 13300 ? 12% softirqs.CPU53.SCHED
7653 ? 8% +73.9% 13306 ? 14% softirqs.CPU54.SCHED
7624 ? 8% +75.2% 13356 ? 16% softirqs.CPU55.SCHED
7545 ? 9% +73.6% 13100 ? 14% softirqs.CPU58.SCHED
7666 ? 7% +70.2% 13049 ? 18% softirqs.CPU65.SCHED
7669 ? 8% +47.1% 11285 ? 5% softirqs.CPU72.SCHED
7753 ? 7% +52.2% 11799 ? 5% softirqs.CPU73.SCHED
7922 ? 7% +56.2% 12372 ? 15% softirqs.CPU74.SCHED
7953 ? 6% +56.7% 12463 ? 16% softirqs.CPU75.SCHED
7964 ? 7% +53.0% 12187 ? 15% softirqs.CPU76.SCHED
7911 ? 6% +54.0% 12182 ? 18% softirqs.CPU77.SCHED
1464438 ? 6% +64.0% 2401987 ? 4% softirqs.SCHED
7857 ? 6% -27.8% 5673 ? 24% interrupts.CPU0.NMI:Non-maskable_interrupts
7857 ? 6% -27.8% 5673 ? 24% interrupts.CPU0.PMI:Performance_monitoring_interrupts
970.17 ? 35% -37.8% 603.67 ? 7% interrupts.CPU0.RES:Rescheduling_interrupts
7628 ? 8% -27.0% 5572 ? 27% interrupts.CPU1.NMI:Non-maskable_interrupts
7628 ? 8% -27.0% 5572 ? 27% interrupts.CPU1.PMI:Performance_monitoring_interrupts
367.50 ? 35% -38.7% 225.17 ? 16% interrupts.CPU100.RES:Rescheduling_interrupts
534.17 ? 43% -57.4% 227.50 ? 11% interrupts.CPU101.RES:Rescheduling_interrupts
1099 ?135% -80.7% 212.50 ? 11% interrupts.CPU102.RES:Rescheduling_interrupts
7737 ? 6% -48.2% 4004 ? 36% interrupts.CPU107.NMI:Non-maskable_interrupts
7737 ? 6% -48.2% 4004 ? 36% interrupts.CPU107.PMI:Performance_monitoring_interrupts
7692 ? 6% -52.5% 3657 ? 22% interrupts.CPU112.NMI:Non-maskable_interrupts
7692 ? 6% -52.5% 3657 ? 22% interrupts.CPU112.PMI:Performance_monitoring_interrupts
392.33 ? 51% -46.2% 211.17 ? 10% interrupts.CPU113.RES:Rescheduling_interrupts
350.17 ? 36% -38.8% 214.17 ? 14% interrupts.CPU114.RES:Rescheduling_interrupts
434.17 ? 46% -42.9% 247.83 ? 20% interrupts.CPU119.RES:Rescheduling_interrupts
8256 ? 3% -51.0% 4048 ? 50% interrupts.CPU120.NMI:Non-maskable_interrupts
8256 ? 3% -51.0% 4048 ? 50% interrupts.CPU120.PMI:Performance_monitoring_interrupts
8184 ? 4% -47.5% 4297 ? 41% interrupts.CPU121.NMI:Non-maskable_interrupts
8184 ? 4% -47.5% 4297 ? 41% interrupts.CPU121.PMI:Performance_monitoring_interrupts
8189 ? 4% -46.9% 4347 ? 40% interrupts.CPU122.NMI:Non-maskable_interrupts
8189 ? 4% -46.9% 4347 ? 40% interrupts.CPU122.PMI:Performance_monitoring_interrupts
560.00 ? 83% -66.3% 188.67 ? 15% interrupts.CPU122.RES:Rescheduling_interrupts
8178 ? 3% -48.9% 4177 ? 43% interrupts.CPU123.NMI:Non-maskable_interrupts
8178 ? 3% -48.9% 4177 ? 43% interrupts.CPU123.PMI:Performance_monitoring_interrupts
8152 ? 3% -43.5% 4607 ? 36% interrupts.CPU124.NMI:Non-maskable_interrupts
8152 ? 3% -43.5% 4607 ? 36% interrupts.CPU124.PMI:Performance_monitoring_interrupts
8186 ? 3% -48.6% 4207 ? 45% interrupts.CPU126.NMI:Non-maskable_interrupts
8186 ? 3% -48.6% 4207 ? 45% interrupts.CPU126.PMI:Performance_monitoring_interrupts
8167 ? 4% -49.6% 4120 ? 41% interrupts.CPU127.NMI:Non-maskable_interrupts
8167 ? 4% -49.6% 4120 ? 41% interrupts.CPU127.PMI:Performance_monitoring_interrupts
318.17 ? 42% -40.1% 190.50 ? 13% interrupts.CPU128.RES:Rescheduling_interrupts
8156 ? 4% -43.7% 4593 ? 37% interrupts.CPU129.NMI:Non-maskable_interrupts
8156 ? 4% -43.7% 4593 ? 37% interrupts.CPU129.PMI:Performance_monitoring_interrupts
8159 ? 4% -44.2% 4552 ? 46% interrupts.CPU130.NMI:Non-maskable_interrupts
8159 ? 4% -44.2% 4552 ? 46% interrupts.CPU130.PMI:Performance_monitoring_interrupts
8162 ? 4% -43.2% 4638 ? 37% interrupts.CPU131.NMI:Non-maskable_interrupts
8162 ? 4% -43.2% 4638 ? 37% interrupts.CPU131.PMI:Performance_monitoring_interrupts
8130 ? 5% -48.0% 4230 ? 45% interrupts.CPU132.NMI:Non-maskable_interrupts
8130 ? 5% -48.0% 4230 ? 45% interrupts.CPU132.PMI:Performance_monitoring_interrupts
1184 ? 4% +21.8% 1442 ? 27% interrupts.CPU133.CAL:Function_call_interrupts
7400 ? 21% -54.4% 3375 ? 20% interrupts.CPU133.NMI:Non-maskable_interrupts
7400 ? 21% -54.4% 3375 ? 20% interrupts.CPU133.PMI:Performance_monitoring_interrupts
7408 ? 20% -53.7% 3432 ? 23% interrupts.CPU134.NMI:Non-maskable_interrupts
7408 ? 20% -53.7% 3432 ? 23% interrupts.CPU134.PMI:Performance_monitoring_interrupts
1509 ? 17% -20.5% 1200 ? 4% interrupts.CPU135.CAL:Function_call_interrupts
7461 ? 21% -52.9% 3510 ? 24% interrupts.CPU138.NMI:Non-maskable_interrupts
7461 ? 21% -52.9% 3510 ? 24% interrupts.CPU138.PMI:Performance_monitoring_interrupts
370.33 ? 66% -40.0% 222.33 ? 13% interrupts.CPU139.RES:Rescheduling_interrupts
8131 ? 4% -51.4% 3950 ? 17% interrupts.CPU142.NMI:Non-maskable_interrupts
8131 ? 4% -51.4% 3950 ? 17% interrupts.CPU142.PMI:Performance_monitoring_interrupts
1577 ? 18% -24.3% 1194 ? 4% interrupts.CPU147.CAL:Function_call_interrupts
7255 ? 20% -45.3% 3971 ? 49% interrupts.CPU147.NMI:Non-maskable_interrupts
7255 ? 20% -45.3% 3971 ? 49% interrupts.CPU147.PMI:Performance_monitoring_interrupts
7943 ? 5% -44.4% 4418 ? 37% interrupts.CPU148.NMI:Non-maskable_interrupts
7943 ? 5% -44.4% 4418 ? 37% interrupts.CPU148.PMI:Performance_monitoring_interrupts
7231 ? 20% -48.8% 3704 ? 52% interrupts.CPU150.NMI:Non-maskable_interrupts
7231 ? 20% -48.8% 3704 ? 52% interrupts.CPU150.PMI:Performance_monitoring_interrupts
270.83 ? 28% -30.4% 188.50 ? 17% interrupts.CPU152.RES:Rescheduling_interrupts
7228 ? 21% -56.3% 3156 ? 33% interrupts.CPU160.NMI:Non-maskable_interrupts
7228 ? 21% -56.3% 3156 ? 33% interrupts.CPU160.PMI:Performance_monitoring_interrupts
2620 ? 38% -46.4% 1404 ? 22% interrupts.CPU169.CAL:Function_call_interrupts
1668 ? 49% -29.1% 1183 ? 3% interrupts.CPU176.CAL:Function_call_interrupts
237.00 ? 15% -26.4% 174.50 ? 14% interrupts.CPU178.RES:Rescheduling_interrupts
477.17 ? 46% -40.5% 283.83 ? 6% interrupts.CPU18.RES:Rescheduling_interrupts
378.17 ? 26% -23.3% 290.00 ? 8% interrupts.CPU19.RES:Rescheduling_interrupts
1409 ? 16% -21.5% 1106 ? 6% interrupts.CPU191.CAL:Function_call_interrupts
519.50 ? 31% -34.0% 343.00 ? 19% interrupts.CPU2.RES:Rescheduling_interrupts
477.00 ? 26% -42.8% 273.00 ? 6% interrupts.CPU22.RES:Rescheduling_interrupts
581.33 ? 44% -48.7% 298.33 ? 15% interrupts.CPU23.RES:Rescheduling_interrupts
1718 ? 18% -15.6% 1450 ? 3% interrupts.CPU24.CAL:Function_call_interrupts
8257 ? 3% -21.1% 6511 ? 6% interrupts.CPU24.NMI:Non-maskable_interrupts
8257 ? 3% -21.1% 6511 ? 6% interrupts.CPU24.PMI:Performance_monitoring_interrupts
741.17 ? 86% -66.6% 247.50 ? 13% interrupts.CPU25.RES:Rescheduling_interrupts
8225 ? 3% -45.6% 4472 ? 30% interrupts.CPU26.NMI:Non-maskable_interrupts
8225 ? 3% -45.6% 4472 ? 30% interrupts.CPU26.PMI:Performance_monitoring_interrupts
1430 ?149% -83.3% 238.67 ? 11% interrupts.CPU26.RES:Rescheduling_interrupts
8192 ? 3% -49.1% 4168 ? 39% interrupts.CPU27.NMI:Non-maskable_interrupts
8192 ? 3% -49.1% 4168 ? 39% interrupts.CPU27.PMI:Performance_monitoring_interrupts
8177 ? 4% -48.1% 4240 ? 40% interrupts.CPU29.NMI:Non-maskable_interrupts
8177 ? 4% -48.1% 4240 ? 40% interrupts.CPU29.PMI:Performance_monitoring_interrupts
519.33 ? 77% -52.5% 246.67 ? 5% interrupts.CPU29.RES:Rescheduling_interrupts
8211 ? 3% -49.3% 4163 ? 42% interrupts.CPU30.NMI:Non-maskable_interrupts
8211 ? 3% -49.3% 4163 ? 42% interrupts.CPU30.PMI:Performance_monitoring_interrupts
329.33 ? 10% -27.3% 239.33 ? 16% interrupts.CPU31.RES:Rescheduling_interrupts
8187 ? 3% -47.3% 4313 ? 39% interrupts.CPU32.NMI:Non-maskable_interrupts
8187 ? 3% -47.3% 4313 ? 39% interrupts.CPU32.PMI:Performance_monitoring_interrupts
369.00 ? 31% -40.2% 220.83 ? 20% interrupts.CPU32.RES:Rescheduling_interrupts
8203 ? 3% -47.7% 4290 ? 44% interrupts.CPU33.NMI:Non-maskable_interrupts
8203 ? 3% -47.7% 4290 ? 44% interrupts.CPU33.PMI:Performance_monitoring_interrupts
8208 ? 3% -43.7% 4617 ? 34% interrupts.CPU34.NMI:Non-maskable_interrupts
8208 ? 3% -43.7% 4617 ? 34% interrupts.CPU34.PMI:Performance_monitoring_interrupts
8200 ? 4% -56.7% 3549 ? 26% interrupts.CPU35.NMI:Non-maskable_interrupts
8200 ? 4% -56.7% 3549 ? 26% interrupts.CPU35.PMI:Performance_monitoring_interrupts
8173 ? 4% -52.6% 3877 ? 16% interrupts.CPU36.NMI:Non-maskable_interrupts
8173 ? 4% -52.6% 3877 ? 16% interrupts.CPU36.PMI:Performance_monitoring_interrupts
8176 ? 4% -43.4% 4626 ? 38% interrupts.CPU37.NMI:Non-maskable_interrupts
8176 ? 4% -43.4% 4626 ? 38% interrupts.CPU37.PMI:Performance_monitoring_interrupts
8207 ? 4% -46.3% 4405 ? 39% interrupts.CPU38.NMI:Non-maskable_interrupts
8207 ? 4% -46.3% 4405 ? 39% interrupts.CPU38.PMI:Performance_monitoring_interrupts
8163 ? 4% -45.3% 4467 ? 38% interrupts.CPU39.NMI:Non-maskable_interrupts
8163 ? 4% -45.3% 4467 ? 38% interrupts.CPU39.PMI:Performance_monitoring_interrupts
324.50 ? 23% -28.5% 232.17 ? 14% interrupts.CPU40.RES:Rescheduling_interrupts
8162 ? 5% -55.1% 3664 ? 30% interrupts.CPU43.NMI:Non-maskable_interrupts
8162 ? 5% -55.1% 3664 ? 30% interrupts.CPU43.PMI:Performance_monitoring_interrupts
8183 ? 4% -43.3% 4643 ? 37% interrupts.CPU44.NMI:Non-maskable_interrupts
8183 ? 4% -43.3% 4643 ? 37% interrupts.CPU44.PMI:Performance_monitoring_interrupts
8167 ? 4% -43.2% 4642 ? 37% interrupts.CPU46.NMI:Non-maskable_interrupts
8167 ? 4% -43.2% 4642 ? 37% interrupts.CPU46.PMI:Performance_monitoring_interrupts
7992 ? 4% -27.1% 5827 ? 21% interrupts.CPU48.NMI:Non-maskable_interrupts
7992 ? 4% -27.1% 5827 ? 21% interrupts.CPU48.PMI:Performance_monitoring_interrupts
7948 ? 5% -42.7% 4558 ? 34% interrupts.CPU50.NMI:Non-maskable_interrupts
7948 ? 5% -42.7% 4558 ? 34% interrupts.CPU50.PMI:Performance_monitoring_interrupts
7913 ? 5% -49.5% 3993 ? 55% interrupts.CPU51.NMI:Non-maskable_interrupts
7913 ? 5% -49.5% 3993 ? 55% interrupts.CPU51.PMI:Performance_monitoring_interrupts
7935 ? 5% -52.6% 3760 ? 58% interrupts.CPU52.NMI:Non-maskable_interrupts
7935 ? 5% -52.6% 3760 ? 58% interrupts.CPU52.PMI:Performance_monitoring_interrupts
7902 ? 5% -52.8% 3728 ? 61% interrupts.CPU53.NMI:Non-maskable_interrupts
7902 ? 5% -52.8% 3728 ? 61% interrupts.CPU53.PMI:Performance_monitoring_interrupts
7918 ? 5% -50.2% 3944 ? 42% interrupts.CPU54.NMI:Non-maskable_interrupts
7918 ? 5% -50.2% 3944 ? 42% interrupts.CPU54.PMI:Performance_monitoring_interrupts
249.83 ? 9% -22.0% 194.83 ? 16% interrupts.CPU59.RES:Rescheduling_interrupts
7899 ? 5% -49.5% 3992 ? 58% interrupts.CPU64.NMI:Non-maskable_interrupts
7899 ? 5% -49.5% 3992 ? 58% interrupts.CPU64.PMI:Performance_monitoring_interrupts
360.17 ? 48% -39.1% 219.33 ? 13% interrupts.CPU64.RES:Rescheduling_interrupts
330.17 ? 32% -35.9% 211.50 ? 10% interrupts.CPU71.RES:Rescheduling_interrupts
7764 ? 5% -31.1% 5348 ? 30% interrupts.CPU74.NMI:Non-maskable_interrupts
7764 ? 5% -31.1% 5348 ? 30% interrupts.CPU74.PMI:Performance_monitoring_interrupts
7820 ? 4% -30.3% 5448 ? 15% interrupts.CPU77.NMI:Non-maskable_interrupts
7820 ? 4% -30.3% 5448 ? 15% interrupts.CPU77.PMI:Performance_monitoring_interrupts
7745 ? 5% -30.7% 5364 ? 21% interrupts.CPU88.NMI:Non-maskable_interrupts
7745 ? 5% -30.7% 5364 ? 21% interrupts.CPU88.PMI:Performance_monitoring_interrupts
3245 ? 10% -17.0% 2694 ? 10% interrupts.CPU95.CAL:Function_call_interrupts
1592 ? 3% -21.1% 1256 ? 10% interrupts.CPU95.RES:Rescheduling_interrupts
1669 ? 18% -18.7% 1357 ? 2% interrupts.CPU96.CAL:Function_call_interrupts
331.00 ? 15% -36.1% 211.50 ? 22% interrupts.IWI:IRQ_work_interrupts
1409742 ? 6% -36.2% 899323 ? 15% interrupts.NMI:Non-maskable_interrupts
1409742 ? 6% -36.2% 899323 ? 15% interrupts.PMI:Performance_monitoring_interrupts
68748 ? 7% -29.6% 48431 ? 6% interrupts.RES:Rescheduling_interrupts



vm-scalability.time.system_time

4600 +--------------------------------------------------------------------+
|. + +. .++.+ +. .++.+ .+ +.+ .+ ++. .+. .++.++.+.++.++.++.|
4400 |-+ ++.+ +.+ + + + ++ ++ |
| |
4200 |-+ |
| |
4000 |-+ |
| O O O |
3800 |-+ O O O O O O O O O |
| O OO O O O O O O O |
3600 |-+O O O O O O O |
| O O O O O O |
3400 |-+ O |
| O |
3200 +--------------------------------------------------------------------+


vm-scalability.time.percent_of_cpu_this_job_got

18000 +-------------------------------------------------------------------+
|+ +.+ + ++ ++ + + +.+ + +.+ + +.+ + +. +.+ |
17500 |-+ + + + + +.++.++.++. : +.|
17000 |-+ + |
| |
16500 |-+ |
16000 |-+ O O OO O O O |
| O O OO O O |
15500 |-O O OO O O O |
15000 |-+ O O O O |
| O O O O O O O |
14500 |-+ O O O |
14000 |-+ O O |
| O |
13500 +-------------------------------------------------------------------+


vm-scalability.time.minor_page_faults

1e+07 +-------------------------------------------------------------------+
| +. +. +. + +.+ .+. .|
9e+06 |.++.++.+ + ++.+.+ + + +.++.++.+.++.++.++. : + ++.+ .++.++.++ |
8e+06 |-+ + + + |
| |
7e+06 |-+ |
| |
6e+06 |-+ |
| |
5e+06 |-+ |
4e+06 |-+ |
| O O O OO |
3e+06 |-O OO O O O O OO O O OO O OO OO O O O O |
| O O OO O O O O O |
2e+06 +-------------------------------------------------------------------+


vm-scalability.median

47000 +-------------------------------------------------------------------+
| O O OO O O |
46500 |-O O O O O O |
| O O O O O O OO O |
46000 |-+ O O O O OO O OO O |
| O O O O O O |
45500 |-+ |
| |
45000 |-+ |
| |
44500 |-+ |
| |
44000 |.+ .++.++. .+.+ .++. +. .+. +.|
| ++.++ ++ + + ++.++ ++.++.++.++.++.+.++.++.++.++.+ |
43500 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang


Attachments:
(No filename) (43.99 kB)
config-5.15.0-rc4-00039-g76ff9ff49a47 (179.51 kB)
job-script (8.06 kB)
job.yaml (5.34 kB)
reproduce (2.04 kB)

2021-10-13 21:59:24

by Yang Shi

[permalink] [raw]
Subject: Re: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter

On Fri, Oct 8, 2021 at 1:40 AM Huang Ying <[email protected]> wrote:
>
> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
> and DRAM in one socket will be put in one NUMA node as before, while
> the PMEM will be put in another NUMA node as described in the
> description of the commit c221c0b0308f ("device-dax: "Hotplug"
> persistent memory for use like normal RAM"). So, the NUMA balancing
> mechanism will identify all PMEM accesses as remote access and try to
> promote the PMEM pages to DRAM.
>
> To distinguish the number of the inter-type promoted pages from that
> of the inter-socket migrated pages. A new vmstat count is added. The
> counter is per-node (count in the target node). So this can be used
> to identify promotion imbalance among the NUMA nodes.
>
> Signed-off-by: "Huang, Ying" <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Yang Shi <[email protected]>
> Cc: Zi Yan <[email protected]>
> Cc: Wei Xu <[email protected]>
> Cc: osalvador <[email protected]>
> Cc: Shakeel Butt <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> ---
> include/linux/mmzone.h | 3 +++
> include/linux/node.h | 5 +++++
> include/linux/vmstat.h | 2 ++
> mm/migrate.c | 10 ++++++++--
> mm/vmstat.c | 3 +++
> 5 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 6a1d79d84675..37ccd6158765 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -209,6 +209,9 @@ enum node_stat_item {
> NR_PAGETABLE, /* used for pagetables */
> #ifdef CONFIG_SWAP
> NR_SWAPCACHE,
> +#endif
> +#ifdef CONFIG_NUMA_BALANCING
> + PGPROMOTE_SUCCESS, /* promote successfully */
> #endif
> NR_VM_NODE_STAT_ITEMS
> };
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 8e5a29897936..26e96fcc66af 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>
> #define to_node(device) container_of(device, struct node, dev)
>
> +static inline bool node_is_toptier(int node)
> +{
> + return node_state(node, N_CPU);
> +}
> +
> #endif /* _LINUX_NODE_H_ */
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> index d6a6cf53b127..75c53b7d1539 100644
> --- a/include/linux/vmstat.h
> +++ b/include/linux/vmstat.h
> @@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
> #ifdef CONFIG_NUMA_BALANCING
> #define count_vm_numa_event(x) count_vm_event(x)
> #define count_vm_numa_events(x, y) count_vm_events(x, y)
> +#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)

I don't quite get why we need this new API. Doesn't __count_vm_events() work?

> #else
> #define count_vm_numa_event(x) do {} while (0)
> #define count_vm_numa_events(x, y) do { (void)(y); } while (0)
> +#define mod_node_balancing_page_state(n, i, v) do {} while (0)
> #endif /* CONFIG_NUMA_BALANCING */
>
> #ifdef CONFIG_DEBUG_TLBFLUSH
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a6a7743ee98f..c3affc587902 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
> pg_data_t *pgdat = NODE_DATA(node);
> int isolated;
> int nr_remaining;
> + int nr_succeeded;
> LIST_HEAD(migratepages);
> new_page_t *new;
> bool compound;
> @@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>
> list_add(&page->lru, &migratepages);
> nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
> - MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
> + MIGRATE_ASYNC, MR_NUMA_MISPLACED,
> + &nr_succeeded);
> if (nr_remaining) {
> if (!list_empty(&migratepages)) {
> list_del(&page->lru);
> @@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
> putback_lru_page(page);
> }
> isolated = 0;
> - } else
> + } else {
> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
> + if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
> + mod_node_balancing_page_state(
> + NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
> + }
> BUG_ON(!list_empty(&migratepages));
> return isolated;
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8ce2620344b2..fff0ec94d795 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
> #ifdef CONFIG_SWAP
> "nr_swapcached",
> #endif
> +#ifdef CONFIG_NUMA_BALANCING
> + "pgpromote_success",
> +#endif
>
> /* enum writeback_stat_item counters */
> "nr_dirty_threshold",
> --
> 2.30.2
>

2021-10-14 00:53:10

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter

Yang Shi <[email protected]> writes:

> On Fri, Oct 8, 2021 at 1:40 AM Huang Ying <[email protected]> wrote:
>>
>> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
>> and DRAM in one socket will be put in one NUMA node as before, while
>> the PMEM will be put in another NUMA node as described in the
>> description of the commit c221c0b0308f ("device-dax: "Hotplug"
>> persistent memory for use like normal RAM"). So, the NUMA balancing
>> mechanism will identify all PMEM accesses as remote access and try to
>> promote the PMEM pages to DRAM.
>>
>> To distinguish the number of the inter-type promoted pages from that
>> of the inter-socket migrated pages. A new vmstat count is added. The
>> counter is per-node (count in the target node). So this can be used
>> to identify promotion imbalance among the NUMA nodes.
>>
>> Signed-off-by: "Huang, Ying" <[email protected]>
>> Cc: Andrew Morton <[email protected]>
>> Cc: Michal Hocko <[email protected]>
>> Cc: Rik van Riel <[email protected]>
>> Cc: Mel Gorman <[email protected]>
>> Cc: Peter Zijlstra <[email protected]>
>> Cc: Dave Hansen <[email protected]>
>> Cc: Yang Shi <[email protected]>
>> Cc: Zi Yan <[email protected]>
>> Cc: Wei Xu <[email protected]>
>> Cc: osalvador <[email protected]>
>> Cc: Shakeel Butt <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> ---
>> include/linux/mmzone.h | 3 +++
>> include/linux/node.h | 5 +++++
>> include/linux/vmstat.h | 2 ++
>> mm/migrate.c | 10 ++++++++--
>> mm/vmstat.c | 3 +++
>> 5 files changed, 21 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index 6a1d79d84675..37ccd6158765 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -209,6 +209,9 @@ enum node_stat_item {
>> NR_PAGETABLE, /* used for pagetables */
>> #ifdef CONFIG_SWAP
>> NR_SWAPCACHE,
>> +#endif
>> +#ifdef CONFIG_NUMA_BALANCING
>> + PGPROMOTE_SUCCESS, /* promote successfully */
>> #endif
>> NR_VM_NODE_STAT_ITEMS
>> };
>> diff --git a/include/linux/node.h b/include/linux/node.h
>> index 8e5a29897936..26e96fcc66af 100644
>> --- a/include/linux/node.h
>> +++ b/include/linux/node.h
>> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>>
>> #define to_node(device) container_of(device, struct node, dev)
>>
>> +static inline bool node_is_toptier(int node)
>> +{
>> + return node_state(node, N_CPU);
>> +}
>> +
>> #endif /* _LINUX_NODE_H_ */
>> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
>> index d6a6cf53b127..75c53b7d1539 100644
>> --- a/include/linux/vmstat.h
>> +++ b/include/linux/vmstat.h
>> @@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
>> #ifdef CONFIG_NUMA_BALANCING
>> #define count_vm_numa_event(x) count_vm_event(x)
>> #define count_vm_numa_events(x, y) count_vm_events(x, y)
>> +#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)
>
> I don't quite get why we need this new API. Doesn't __count_vm_events() work?

PGPROMOTE_SUCCESS is a per-node counter. That is, its type is enum
node_stat_item instead of enum vm_event_item. So we need to use
mod_node_page_state() instead of count_vm_events(). The new API is to
avoid #ifdef CONFIG_NUMA_BALANCING/#endif in the caller.
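
To make the distinction concrete, here is a minimal sketch (the helper
name below is made up purely for illustration; only the counter items
and the accessor functions come from the patch above):

static void account_numa_promotion(pg_data_t *pgdat, int nr_succeeded)
{
	/*
	 * PGPROMOTE_SUCCESS is an enum node_stat_item, so it is updated
	 * per pg_data_t with mod_node_page_state() and attributed to the
	 * promotion target node.
	 */
	mod_node_page_state(pgdat, PGPROMOTE_SUCCESS, nr_succeeded);

	/*
	 * NUMA_PAGE_MIGRATE is an enum vm_event_item, so it goes through
	 * count_vm_numa_events() and has no per-node breakdown.
	 */
	count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
}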

Best Regards,
Huang, Ying

>> #else
>> #define count_vm_numa_event(x) do {} while (0)
>> #define count_vm_numa_events(x, y) do { (void)(y); } while (0)
>> +#define mod_node_balancing_page_state(n, i, v) do {} while (0)
>> #endif /* CONFIG_NUMA_BALANCING */
>>
>> #ifdef CONFIG_DEBUG_TLBFLUSH
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index a6a7743ee98f..c3affc587902 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> pg_data_t *pgdat = NODE_DATA(node);
>> int isolated;
>> int nr_remaining;
>> + int nr_succeeded;
>> LIST_HEAD(migratepages);
>> new_page_t *new;
>> bool compound;
>> @@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>>
>> list_add(&page->lru, &migratepages);
>> nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
>> - MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
>> + MIGRATE_ASYNC, MR_NUMA_MISPLACED,
>> + &nr_succeeded);
>> if (nr_remaining) {
>> if (!list_empty(&migratepages)) {
>> list_del(&page->lru);
>> @@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> putback_lru_page(page);
>> }
>> isolated = 0;
>> - } else
>> + } else {
>> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
>> + if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
>> + mod_node_balancing_page_state(
>> + NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
>> + }
>> BUG_ON(!list_empty(&migratepages));
>> return isolated;
>>
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 8ce2620344b2..fff0ec94d795 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/vmstat.c
>> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
>> #ifdef CONFIG_SWAP
>> "nr_swapcached",
>> #endif
>> +#ifdef CONFIG_NUMA_BALANCING
>> + "pgpromote_success",
>> +#endif
>>
>> /* enum writeback_stat_item counters */
>> "nr_dirty_threshold",
>> --
>> 2.30.2
>>

2021-10-15 07:10:37

by Yang Shi

[permalink] [raw]
Subject: Re: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter

On Wed, Oct 13, 2021 at 5:50 PM Huang, Ying <[email protected]> wrote:
>
> Yang Shi <[email protected]> writes:
>
> > On Fri, Oct 8, 2021 at 1:40 AM Huang Ying <[email protected]> wrote:
> >>
> >> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
> >> and DRAM in one socket will be put in one NUMA node as before, while
> >> the PMEM will be put in another NUMA node as described in the
> >> description of the commit c221c0b0308f ("device-dax: "Hotplug"
> >> persistent memory for use like normal RAM"). So, the NUMA balancing
> >> mechanism will identify all PMEM accesses as remote access and try to
> >> promote the PMEM pages to DRAM.
> >>
> >> To distinguish the number of the inter-type promoted pages from that
> >> of the inter-socket migrated pages. A new vmstat count is added. The
> >> counter is per-node (count in the target node). So this can be used
> >> to identify promotion imbalance among the NUMA nodes.
> >>
> >> Signed-off-by: "Huang, Ying" <[email protected]>
> >> Cc: Andrew Morton <[email protected]>
> >> Cc: Michal Hocko <[email protected]>
> >> Cc: Rik van Riel <[email protected]>
> >> Cc: Mel Gorman <[email protected]>
> >> Cc: Peter Zijlstra <[email protected]>
> >> Cc: Dave Hansen <[email protected]>
> >> Cc: Yang Shi <[email protected]>
> >> Cc: Zi Yan <[email protected]>
> >> Cc: Wei Xu <[email protected]>
> >> Cc: osalvador <[email protected]>
> >> Cc: Shakeel Butt <[email protected]>
> >> Cc: [email protected]
> >> Cc: [email protected]
> >> ---
> >> include/linux/mmzone.h | 3 +++
> >> include/linux/node.h | 5 +++++
> >> include/linux/vmstat.h | 2 ++
> >> mm/migrate.c | 10 ++++++++--
> >> mm/vmstat.c | 3 +++
> >> 5 files changed, 21 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index 6a1d79d84675..37ccd6158765 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -209,6 +209,9 @@ enum node_stat_item {
> >> NR_PAGETABLE, /* used for pagetables */
> >> #ifdef CONFIG_SWAP
> >> NR_SWAPCACHE,
> >> +#endif
> >> +#ifdef CONFIG_NUMA_BALANCING
> >> + PGPROMOTE_SUCCESS, /* promote successfully */
> >> #endif
> >> NR_VM_NODE_STAT_ITEMS
> >> };
> >> diff --git a/include/linux/node.h b/include/linux/node.h
> >> index 8e5a29897936..26e96fcc66af 100644
> >> --- a/include/linux/node.h
> >> +++ b/include/linux/node.h
> >> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
> >>
> >> #define to_node(device) container_of(device, struct node, dev)
> >>
> >> +static inline bool node_is_toptier(int node)
> >> +{
> >> + return node_state(node, N_CPU);
> >> +}
> >> +
> >> #endif /* _LINUX_NODE_H_ */
> >> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
> >> index d6a6cf53b127..75c53b7d1539 100644
> >> --- a/include/linux/vmstat.h
> >> +++ b/include/linux/vmstat.h
> >> @@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
> >> #ifdef CONFIG_NUMA_BALANCING
> >> #define count_vm_numa_event(x) count_vm_event(x)
> >> #define count_vm_numa_events(x, y) count_vm_events(x, y)
> >> +#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)
> >
> > I don't quite get why we need this new API. Doesn't __count_vm_events() work?
>
> PGPROMOTE_SUCCESS is a per-node counter. That is, its type is enum
> node_stat_item instead of enum vm_event_item. So we need to use
> mod_node_page_state() instead of count_vm_events(). The new API is to
> avoid #ifdef CONFIG_NUMA_BALANCING/#endif in the caller.

Aha, I see, sorry for overlooking this. But I think you could just
call mod_node_page_state() since migrate_misplaced_page() has been
protected by #ifdef CONFIG_NUMA_BALANCING. The !CONFIG_NUMA_BALANCING
version just returns -EFAULT. Other than this, another nit below.
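
For what it's worth, without any wrapper the call site could then look
roughly like this (just a sketch based on the hunk quoted above):

	if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
		/*
		 * No #ifdef needed here: this function is only built when
		 * CONFIG_NUMA_BALANCING is enabled.
		 */
		mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
				    nr_succeeded);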

>
> Best Regards,
> Huang, Ying
>
> >> #else
> >> #define count_vm_numa_event(x) do {} while (0)
> >> #define count_vm_numa_events(x, y) do { (void)(y); } while (0)
> >> +#define mod_node_balancing_page_state(n, i, v) do {} while (0)
> >> #endif /* CONFIG_NUMA_BALANCING */
> >>
> >> #ifdef CONFIG_DEBUG_TLBFLUSH
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index a6a7743ee98f..c3affc587902 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
> >> pg_data_t *pgdat = NODE_DATA(node);
> >> int isolated;
> >> int nr_remaining;
> >> + int nr_succeeded;
> >> LIST_HEAD(migratepages);
> >> new_page_t *new;
> >> bool compound;
> >> @@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
> >>
> >> list_add(&page->lru, &migratepages);
> >> nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
> >> - MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
> >> + MIGRATE_ASYNC, MR_NUMA_MISPLACED,
> >> + &nr_succeeded);
> >> if (nr_remaining) {
> >> if (!list_empty(&migratepages)) {
> >> list_del(&page->lru);
> >> @@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
> >> putback_lru_page(page);
> >> }
> >> isolated = 0;
> >> - } else
> >> + } else {
> >> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
> >> + if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
> >> + mod_node_balancing_page_state(
> >> + NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
> >> + }

It looks like the original code is already problematic. It just updates
the counter when *all* pages are migrated successfully. But since we
already have "nr_succeeded", I think we could do:

if (nr_remaining) {
        do_something();
}

count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
        mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
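
Putting that together with the hunk quoted above, the tail of
migrate_misplaced_page() might then look roughly like this (cleanup
details elided; a sketch, not the final patch):

	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
				     &nr_succeeded);
	if (nr_remaining) {
		if (!list_empty(&migratepages)) {
			list_del(&page->lru);
			/* ... undo the isolation as in the original ... */
			putback_lru_page(page);
		}
		isolated = 0;
	}
	/* account whatever actually succeeded, even on partial failure */
	count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
	if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
		mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
				    nr_succeeded);
	BUG_ON(!list_empty(&migratepages));
	return isolated;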

> >> BUG_ON(!list_empty(&migratepages));
> >> return isolated;
> >>
> >> diff --git a/mm/vmstat.c b/mm/vmstat.c
> >> index 8ce2620344b2..fff0ec94d795 100644
> >> --- a/mm/vmstat.c
> >> +++ b/mm/vmstat.c
> >> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
> >> #ifdef CONFIG_SWAP
> >> "nr_swapcached",
> >> #endif
> >> +#ifdef CONFIG_NUMA_BALANCING
> >> + "pgpromote_success",
> >> +#endif
> >>
> >> /* enum writeback_stat_item counters */
> >> "nr_dirty_threshold",
> >> --
> >> 2.30.2
> >>

2021-10-15 10:41:31

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH -V9 1/6] NUMA Balancing: add page promotion counter

Yang Shi <[email protected]> writes:

> On Wed, Oct 13, 2021 at 5:50 PM Huang, Ying <[email protected]> wrote:
>>
>> Yang Shi <[email protected]> writes:
>>
>> > On Fri, Oct 8, 2021 at 1:40 AM Huang Ying <[email protected]> wrote:
>> >>
>> >> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
>> >> and DRAM in one socket will be put in one NUMA node as before, while
>> >> the PMEM will be put in another NUMA node as described in the
>> >> description of the commit c221c0b0308f ("device-dax: "Hotplug"
>> >> persistent memory for use like normal RAM"). So, the NUMA balancing
>> >> mechanism will identify all PMEM accesses as remote access and try to
>> >> promote the PMEM pages to DRAM.
>> >>
>> >> To distinguish the number of the inter-type promoted pages from that
>> >> of the inter-socket migrated pages. A new vmstat count is added. The
>> >> counter is per-node (count in the target node). So this can be used
>> >> to identify promotion imbalance among the NUMA nodes.
>> >>
>> >> Signed-off-by: "Huang, Ying" <[email protected]>
>> >> Cc: Andrew Morton <[email protected]>
>> >> Cc: Michal Hocko <[email protected]>
>> >> Cc: Rik van Riel <[email protected]>
>> >> Cc: Mel Gorman <[email protected]>
>> >> Cc: Peter Zijlstra <[email protected]>
>> >> Cc: Dave Hansen <[email protected]>
>> >> Cc: Yang Shi <[email protected]>
>> >> Cc: Zi Yan <[email protected]>
>> >> Cc: Wei Xu <[email protected]>
>> >> Cc: osalvador <[email protected]>
>> >> Cc: Shakeel Butt <[email protected]>
>> >> Cc: [email protected]
>> >> Cc: [email protected]
>> >> ---
>> >> include/linux/mmzone.h | 3 +++
>> >> include/linux/node.h | 5 +++++
>> >> include/linux/vmstat.h | 2 ++
>> >> mm/migrate.c | 10 ++++++++--
>> >> mm/vmstat.c | 3 +++
>> >> 5 files changed, 21 insertions(+), 2 deletions(-)
>> >>
>> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> >> index 6a1d79d84675..37ccd6158765 100644
>> >> --- a/include/linux/mmzone.h
>> >> +++ b/include/linux/mmzone.h
>> >> @@ -209,6 +209,9 @@ enum node_stat_item {
>> >> NR_PAGETABLE, /* used for pagetables */
>> >> #ifdef CONFIG_SWAP
>> >> NR_SWAPCACHE,
>> >> +#endif
>> >> +#ifdef CONFIG_NUMA_BALANCING
>> >> + PGPROMOTE_SUCCESS, /* promote successfully */
>> >> #endif
>> >> NR_VM_NODE_STAT_ITEMS
>> >> };
>> >> diff --git a/include/linux/node.h b/include/linux/node.h
>> >> index 8e5a29897936..26e96fcc66af 100644
>> >> --- a/include/linux/node.h
>> >> +++ b/include/linux/node.h
>> >> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>> >>
>> >> #define to_node(device) container_of(device, struct node, dev)
>> >>
>> >> +static inline bool node_is_toptier(int node)
>> >> +{
>> >> + return node_state(node, N_CPU);
>> >> +}
>> >> +
>> >> #endif /* _LINUX_NODE_H_ */
>> >> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
>> >> index d6a6cf53b127..75c53b7d1539 100644
>> >> --- a/include/linux/vmstat.h
>> >> +++ b/include/linux/vmstat.h
>> >> @@ -112,9 +112,11 @@ static inline void vm_events_fold_cpu(int cpu)
>> >> #ifdef CONFIG_NUMA_BALANCING
>> >> #define count_vm_numa_event(x) count_vm_event(x)
>> >> #define count_vm_numa_events(x, y) count_vm_events(x, y)
>> >> +#define mod_node_balancing_page_state(n, i, v) mod_node_page_state(n, i, v)
>> >
>> > I don't quite get why we need this new API. Doesn't __count_vm_events() work?
>>
>> PGPROMOTE_SUCCESS is a per-node counter. That is, its type is enum
>> node_stat_item instead of enum vm_event_item. So we need to use
>> mod_node_page_state() instead of count_vm_events(). The new API is to
>> avoid #ifdef CONFIG_NUMA_BALANCING/#endif in the caller.
>
> Aha, I see, sorry for overlooking this. But I think you could just
> call mod_node_page_state() since migrate_misplaced_page() has been
> protected by #ifdef CONFIG_NUMA_BALANCING. The !CONFIG_NUMA_BALANCING
> version just returns -EFAULT. Other than this, another nit below.
>

Yes. You are right. I will use mod_node_page_state() directly!

>>
>> >> #else
>> >> #define count_vm_numa_event(x) do {} while (0)
>> >> #define count_vm_numa_events(x, y) do { (void)(y); } while (0)
>> >> +#define mod_node_balancing_page_state(n, i, v) do {} while (0)
>> >> #endif /* CONFIG_NUMA_BALANCING */
>> >>
>> >> #ifdef CONFIG_DEBUG_TLBFLUSH
>> >> diff --git a/mm/migrate.c b/mm/migrate.c
>> >> index a6a7743ee98f..c3affc587902 100644
>> >> --- a/mm/migrate.c
>> >> +++ b/mm/migrate.c
>> >> @@ -2148,6 +2148,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> >> pg_data_t *pgdat = NODE_DATA(node);
>> >> int isolated;
>> >> int nr_remaining;
>> >> + int nr_succeeded;
>> >> LIST_HEAD(migratepages);
>> >> new_page_t *new;
>> >> bool compound;
>> >> @@ -2186,7 +2187,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> >>
>> >> list_add(&page->lru, &migratepages);
>> >> nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
>> >> - MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
>> >> + MIGRATE_ASYNC, MR_NUMA_MISPLACED,
>> >> + &nr_succeeded);
>> >> if (nr_remaining) {
>> >> if (!list_empty(&migratepages)) {
>> >> list_del(&page->lru);
>> >> @@ -2195,8 +2197,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>> >> putback_lru_page(page);
>> >> }
>> >> isolated = 0;
>> >> - } else
>> >> + } else {
>> >> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
>> >> + if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
>> >> + mod_node_balancing_page_state(
>> >> + NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);
>> >> + }
>
> It looks like the original code is already problematic. It just updates
> the counter when *all* pages are migrated successfully. But since we
> already have "nr_succeeded", I think we could do:
>
> if (nr_remaining) {
>         do_something();
> }
>
> count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
> if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
>         mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS, nr_succeeded);

Looks good to me. Will update this in the next version.

Best Regards,
Huang, Ying


>> >> BUG_ON(!list_empty(&migratepages));
>> >> return isolated;
>> >>
>> >> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> >> index 8ce2620344b2..fff0ec94d795 100644
>> >> --- a/mm/vmstat.c
>> >> +++ b/mm/vmstat.c
>> >> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
>> >> #ifdef CONFIG_SWAP
>> >> "nr_swapcached",
>> >> #endif
>> >> +#ifdef CONFIG_NUMA_BALANCING
>> >> + "pgpromote_success",
>> >> +#endif
>> >>
>> >> /* enum writeback_stat_item counters */
>> >> "nr_dirty_threshold",
>> >> --
>> >> 2.30.2
>> >>