2023-03-01 12:20:25

by Raghavendra K T

Subject: [PATCH REBASE V3 0/4] sched/numa: Enhance vma scanning

The patchset proposes one of the enhancements to NUMA vma scanning
suggested by Mel. This is a continuation of [3].

Reposting the patchset rebased onto the akpm mm-unstable tree (March 1).

In the existing mechanism, the scan period is derived from per-thread
stats. Process Adaptive autoNUMA [1] proposed gathering NUMA fault stats
at the per-process level to capture application behaviour better.

During the course of that discussion, Mel proposed several ideas to enhance
current NUMA balancing. One of the suggestions was:

Track what threads access a VMA. The suggestion was to use an unsigned
long pid_mask and use the lower bits to tag approximately what
threads access a VMA. Skip VMAs that did not trap a fault. This would
be approximate because of PID collisions but would reduce scanning of
areas the thread is not interested in. The above suggestion intends not
to penalize threads that have no interest in the vma, thus reducing scanning
overhead.
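
A minimal sketch of that suggestion (helper names here are illustrative;
the actual implementation is in patch 2 below):

static inline void vma_mark_accessed(struct vm_area_struct *vma)
{
	/* Approximate by design: different PIDs can map to the same bit. */
	unsigned int pid_bit = current->pid % BITS_PER_LONG;

	if (vma->numab_state)
		__set_bit(pid_bit, &vma->numab_state->access_pids);
}

/* In the scan path: skip VMAs this task never faulted on. */
static inline bool vma_accessed_by_current(struct vm_area_struct *vma)
{
	return test_bit(current->pid % BITS_PER_LONG,
			&vma->numab_state->access_pids);
}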

V3 changes are mostly based on PeterZ comments (details below in
changes)

Summary of the patchset:
The current patchset implements:
1. Delay the vma scanning logic for newly created VMAs so that the
additional overhead of scanning is not incurred for short-lived tasks
(implementation by Mel)

2. Store the information of tasks accessing a VMA in 2 windows, cleared
regularly at a (4 * sysctl_numa_balancing_scan_delay) interval. This
interval was derived experimentally (suggested by PeterZ) to balance
frequent clearing against keeping obsolete access data
3. hash_32 is used to encode the task index in the VMA access information

4. A VMA's access information is used to skip scanning by tasks that
have not accessed the VMA (see the sketch after this list)
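
Putting the four items together, the per-VMA decision in task_numa_work()
roughly becomes (condensed sketch, not a literal copy of the patches):

for_each_vma(vmi, vma) {
	/* (1) New VMA: allocate numab state and delay its first scan. */
	if (!vma->numab_state)
		/* kzalloc the state, set next_scan and next_pid_reset */;
	if (mm->numa_scan_seq &&
	    time_before(jiffies, vma->numab_state->next_scan))
		continue;

	/* (3)+(4) Skip VMAs this task never faulted on, based on the
	 * hash_32 of its PID checked against both access-PID windows
	 * (after the first two unconditional scans).
	 */
	if (!vma_is_accessed(vma))
		continue;

	/* (2) Periodically age the two access-PID windows (patch 3). */
	if (time_after(jiffies, vma->numab_state->next_pid_reset))
		/* rotate access_pids[] and push next_pid_reset forward */;

	/* ... issue prot_none updates on the VMA range as before ... */
}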

Changes since V2:
patch1:
- Rename structure, convert macro to function
- Add explanation of the heuristics
- Add more details from results (PeterZ)
Patch2:
- Usage of test and set bit (PeterZ)
- Move storing access PID info to numa_migrate_prep()
- Add a note on fairness among tasks allowed to scan
(PeterZ)
Patch3:
- Maintain two windows of access PID information
(PeterZ supported the implementation and gave the idea to extend
to N windows if needed)
Patch4:
- Apply hash_32 function to track VMA accessing PIDs (PeterZ)

Changes since RFC V1:
- Include Mel's vma scan delay patch
- Change the accessing PID store logic (Thanks Mel)
- Fence structure/code with NUMA_BALANCING (David, Mel)
- Add the access-PID clearing logic (Mel)
- More descriptive change log (Mike Rapoport)

Things to ponder over:
==========================================
- Improvement to the access-PID clearing logic, discussed in detail in
patch 3 itself (done in this patchset by implementing a 2-window history)

- The current scan period is not changed by the patchset, so we still see
frequent scan attempts. Relaxing the scan period dynamically could improve
results further.

[1] sched/numa: Process Adaptive autoNUMA
Link: https://lore.kernel.org/lkml/[email protected]/T/

[2] RFC V1 Link:
https://lore.kernel.org/all/[email protected]/

[3] V2 Link:
https://lore.kernel.org/lkml/[email protected]/


Results:
Summary: A huge autonuma cost reduction is seen in mmtest. Kernbench shows
more than 5% improvement, and mmtest autonuma shows a huge (80%+) system
time improvement. (dbench results had too large a standard deviation to post.)

kernbench
===========
6.2.0-mmunstable-base 6.2.0-mmunstable-patched
Amean user-256 22002.51 ( 0.00%) 22649.95 * -2.94%*
Amean syst-256 10162.78 ( 0.00%) 8214.13 * 19.17%*
Amean elsp-256 160.74 ( 0.00%) 156.92 * 2.38%*

Duration User 66017.43 67959.84
Duration System 30503.15 24657.03
Duration Elapsed 504.61 493.12

6.2.0-mmunstable-base 6.2.0-mmunstable-patched
Ops NUMA alloc hit 1738835089.00 1738780310.00
Ops NUMA alloc local 1738834448.00 1738779711.00
Ops NUMA base-page range updates 477310.00 392566.00
Ops NUMA PTE updates 477310.00 392566.00
Ops NUMA hint faults 96817.00 87555.00
Ops NUMA hint local faults % 10150.00 2192.00
Ops NUMA hint local percent 10.48 2.50
Ops NUMA pages migrated 86660.00 85363.00
Ops AutoNUMA cost 489.07 442.14

autonumabench
===============
6.2.0-mmunstable-base 6.2.0-mmunstable-patched
Amean syst-NUMA01 399.50 ( 0.00%) 52.05 * 86.97%*
Amean syst-NUMA01_THREADLOCAL 0.21 ( 0.00%) 0.22 * -5.41%*
Amean syst-NUMA02 0.80 ( 0.00%) 0.78 * 2.68%*
Amean syst-NUMA02_SMT 0.65 ( 0.00%) 0.68 * -3.95%*
Amean elsp-NUMA01 313.26 ( 0.00%) 313.11 * 0.05%*
Amean elsp-NUMA01_THREADLOCAL 1.06 ( 0.00%) 1.08 * -1.76%*
Amean elsp-NUMA02 3.19 ( 0.00%) 3.24 * -1.52%*
Amean elsp-NUMA02_SMT 3.72 ( 0.00%) 3.61 * 2.92%*

Duration User 396433.47 324835.96
Duration System 2808.70 376.66
Duration Elapsed 2258.61 2258.12

6.2.0-mmunstable-base 6.2.0-mmunstable-patched
Ops NUMA alloc hit 59921806.00 49623489.00
Ops NUMA alloc miss 0.00 0.00
Ops NUMA interleave hit 0.00 0.00
Ops NUMA alloc local 59920880.00 49622594.00
Ops NUMA base-page range updates 152259275.00 50075.00
Ops NUMA PTE updates 152259275.00 50075.00
Ops NUMA PMD updates 0.00 0.00
Ops NUMA hint faults 154660352.00 39014.00
Ops NUMA hint local faults % 138550501.00 23139.00
Ops NUMA hint local percent 89.58 59.31
Ops NUMA pages migrated 8179067.00 14147.00
Ops AutoNUMA cost 774522.98 195.69

Mel Gorman (1):
sched/numa: Apply the scan delay to every new vma

Raghavendra K T (3):
sched/numa: Enhance vma scanning logic
sched/numa: implement access PID reset logic
sched/numa: Use hash_32 to mix up PIDs accessing VMA

include/linux/mm.h | 30 +++++++++++++++++++++
include/linux/mm_types.h | 9 +++++++
kernel/fork.c | 2 ++
kernel/sched/fair.c | 57 ++++++++++++++++++++++++++++++++++++++++
mm/memory.c | 3 +++
5 files changed, 101 insertions(+)

--
2.34.1



2023-03-01 12:20:27

by Raghavendra K T

Subject: [PATCH REBASE V3 1/4] sched/numa: Apply the scan delay to every new vma

From: Mel Gorman <[email protected]>

Currently, whenever a new task is created we wait for
sysctl_numa_balancing_scan_delay to avoid unnecessary scanning
overhead. Extend the same logic to new or very short-lived VMAs.

(Raghavendra: Add initialization in vm_area_dup())

Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Raghavendra K T <[email protected]>
---
include/linux/mm.h | 16 ++++++++++++++++
include/linux/mm_types.h | 7 +++++++
kernel/fork.c | 2 ++
kernel/sched/fair.c | 19 +++++++++++++++++++
4 files changed, 44 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3d4bb18dfcb7..2cce434a5e55 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -29,6 +29,7 @@
#include <linux/pgtable.h>
#include <linux/kasan.h>
#include <linux/memremap.h>
+#include <linux/slab.h>

struct mempolicy;
struct anon_vma;
@@ -626,6 +627,20 @@ struct vm_operations_struct {
unsigned long addr);
};

+#ifdef CONFIG_NUMA_BALANCING
+static inline void vma_numab_state_init(struct vm_area_struct *vma)
+{
+ vma->numab_state = NULL;
+}
+static inline void vma_numab_state_free(struct vm_area_struct *vma)
+{
+ kfree(vma->numab_state);
+}
+#else
+static inline void vma_numab_state_init(struct vm_area_struct *vma) {}
+static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
+#endif /* CONFIG_NUMA_BALANCING */
+
#ifdef CONFIG_PER_VMA_LOCK
/*
* Try to read-lock a vma. The function is allowed to occasionally yield false
@@ -727,6 +742,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
vma->vm_ops = &dummy_vm_ops;
INIT_LIST_HEAD(&vma->anon_vma_chain);
vma_mark_detached(vma, false);
+ vma_numab_state_init(vma);
}

/* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 89bbf7d8a312..1cea78f60011 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -475,6 +475,10 @@ struct vma_lock {
struct rw_semaphore lock;
};

+struct vma_numab_state {
+ unsigned long next_scan;
+};
+
/*
* This struct describes a virtual memory area. There is one of these
* per VM-area/task. A VM area is any part of the process virtual memory
@@ -565,6 +569,9 @@ struct vm_area_struct {
#endif
#ifdef CONFIG_NUMA
struct mempolicy *vm_policy; /* NUMA policy for the VMA */
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+ struct vma_numab_state *numab_state; /* NUMA Balancing state */
#endif
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} __randomize_layout;
diff --git a/kernel/fork.c b/kernel/fork.c
index 75792157f51a..305f963359dc 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -516,6 +516,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
return NULL;
}
INIT_LIST_HEAD(&new->anon_vma_chain);
+ vma_numab_state_init(new);
dup_anon_vma_name(orig, new);

return new;
@@ -523,6 +524,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)

void __vm_area_free(struct vm_area_struct *vma)
{
+ vma_numab_state_free(vma);
free_anon_vma_name(vma);
vma_lock_free(vma);
kmem_cache_free(vm_area_cachep, vma);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a1b1f855b96..7c2bbc8d618b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3027,6 +3027,25 @@ static void task_numa_work(struct callback_head *work)
if (!vma_is_accessible(vma))
continue;

+ /* Initialise new per-VMA NUMAB state. */
+ if (!vma->numab_state) {
+ vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
+ GFP_KERNEL);
+ if (!vma->numab_state)
+ continue;
+
+ vma->numab_state->next_scan = now +
+ msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+ }
+
+ /*
+ * Scanning the VMAs of short-lived tasks adds more overhead. So
+ * delay the scan for new VMAs.
+ */
+ if (mm->numa_scan_seq && time_before(jiffies,
+ vma->numab_state->next_scan))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
--
2.34.1


2023-03-01 12:20:38

by Raghavendra K T

Subject: [PATCH REBASE V3 2/4] sched/numa: Enhance vma scanning logic

During NUMA scanning, make sure only the relevant VMAs of the
tasks are scanned.

Before:
All the tasks of a process participate in scanning a vma
even if they never access it in their lifespan.

Now:
Except for the first few unconditional scans, if a task does
not touch a vma (excluding false-positive cases of PID collisions),
it no longer scans that vma

Logic used:
1) The lower 6 bits of the PID are used to set an active bit in the vma's
numab state during a fault, to remember the PIDs accessing the vma.
(Thanks Mel)

2) Subsequently, in the scan path, vma scanning is skipped if the current
PID has not accessed the vma.

3) The first two scans are allowed unconditionally, to preserve the earlier
scanning behaviour.

Acknowledgement to Bharata B Rao <[email protected]> for the initial patch
to store PID information, and to Peter Zijlstra <[email protected]>
(usage of test and set bit).

Suggested-by: Mel Gorman <[email protected]>
Signed-off-by: Raghavendra K T <[email protected]>
---
include/linux/mm.h | 14 ++++++++++++++
include/linux/mm_types.h | 1 +
kernel/sched/fair.c | 19 +++++++++++++++++++
mm/memory.c | 3 +++
4 files changed, 37 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2cce434a5e55..b7e4484af05b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1666,6 +1666,16 @@ static inline int xchg_page_access_time(struct page *page, int time)
last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
return last_time << PAGE_ACCESS_TIME_BUCKETS;
}
+
+static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+{
+ unsigned int pid_bit;
+
+ pid_bit = current->pid % BITS_PER_LONG;
+ if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids)) {
+ __set_bit(pid_bit, &vma->numab_state->access_pids);
+ }
+}
#else /* !CONFIG_NUMA_BALANCING */
static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
{
@@ -1715,6 +1725,10 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
{
return false;
}
+
+static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+{
+}
#endif /* CONFIG_NUMA_BALANCING */

#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 1cea78f60011..df4e0bc66d17 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -477,6 +477,7 @@ struct vma_lock {

struct vma_numab_state {
unsigned long next_scan;
+ unsigned long access_pids;
};

/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7c2bbc8d618b..9443ae9db028 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2928,6 +2928,21 @@ static void reset_ptenuma_scan(struct task_struct *p)
p->mm->numa_scan_offset = 0;
}

+static bool vma_is_accessed(struct vm_area_struct *vma)
+{
+ /*
+ * Allow unconditional access first two times, so that all the (pages)
+ * of VMAs get prot_none fault introduced irrespective of accesses.
+ * This is also done to avoid any side effect of task scanning
+ * amplifying the unfairness of disjoint set of VMAs' access.
+ */
+ if (READ_ONCE(current->mm->numa_scan_seq) < 2)
+ return true;
+
+ return test_bit(current->pid % BITS_PER_LONG,
+ &vma->numab_state->access_pids);
+}
+
/*
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
@@ -3046,6 +3061,10 @@ static void task_numa_work(struct callback_head *work)
vma->numab_state->next_scan))
continue;

+ /* Do not scan the VMA if task has not accessed */
+ if (!vma_is_accessed(vma))
+ continue;
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
diff --git a/mm/memory.c b/mm/memory.c
index 255b2f4fdd4a..8fac837cde9e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4647,6 +4647,9 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
{
get_page(page);

+ /* Record the current PID accessing the VMA */
+ vma_set_access_pid_bit(vma);
+
count_vm_numa_event(NUMA_HINT_FAULTS);
if (page_nid == numa_node_id()) {
count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
--
2.34.1


2023-03-01 12:21:13

by Raghavendra K T

Subject: [PATCH REBASE V3 4/4] sched/numa: Use hash_32 to mix up PIDs accessing VMA

Before: the last 6 bits of the PID are used as the index to store
information about tasks accessing VMAs.

After: hash_32 is used to take care of cases where tasks are created
over a period of time, and thus reduce the probability of collisions.
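
For illustration (not part of the patch), the two ways of deriving the bit
index: with plain modulo, PIDs that differ by a multiple of BITS_PER_LONG
land on the same bit, whereas hash_32() mixes the whole PID value before
reducing it to ilog2(BITS_PER_LONG) bits:

unsigned int bit_mod  = pid % BITS_PER_LONG;                /* before */
unsigned int bit_hash = hash_32(pid, ilog2(BITS_PER_LONG)); /* after  */

/*
 * e.g. on 64-bit, pid 1000 and pid 1064 both give bit_mod == 40,
 * while hash_32() will most likely place them on different bits.
 */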

Result:
The patch series overall improves the autonuma cost.

Kernbench shows around 5% or more improvement, and system time in
mmtest autonuma shows more than 80% improvement.

Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Raghavendra K T <[email protected]>
---
include/linux/mm.h | 2 +-
kernel/sched/fair.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5232ebb34145..1b9be34a24fb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1671,7 +1671,7 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
{
unsigned int pid_bit;

- pid_bit = current->pid % BITS_PER_LONG;
+ pid_bit = hash_32(current->pid, ilog2(BITS_PER_LONG));
if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) {
__set_bit(pid_bit, &vma->numab_state->access_pids[1]);
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a93e7a33281f..8592941dd565 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2941,7 +2941,7 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
return true;

pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
- return test_bit(current->pid % BITS_PER_LONG, &pids);
+ return test_bit(hash_32(current->pid, ilog2(BITS_PER_LONG)), &pids);
}

#define VMA_PID_RESET_PERIOD (4 * sysctl_numa_balancing_scan_delay)
--
2.34.1


2023-03-01 12:21:16

by Raghavendra K T

Subject: [PATCH REBASE V3 3/4] sched/numa: implement access PID reset logic

This helps to ensure that only PIDs which have recently accessed a VMA
scan it.
Current implementation: (idea supported by PeterZ)
1. Access PID information is maintained in two windows, with
access_pids[1] being the newest.

2. The old access PID info, i.e. access_pids[0], is reset every
(4 * sysctl_numa_balancing_scan_delay) interval after the initial
scan delay period expires.

The above interval was found experimentally to be near optimum, since it
avoids frequent resets of the access info while still clearing the old
access info regularly.
The reset logic is implemented in the scan path.
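
For illustration, the window handling in the scan path boils down to the
sketch below (condensed from the diff). With the default
sysctl_numa_balancing_scan_delay of 1000ms, a task's access hint therefore
survives roughly 4 to 8 seconds after its last fault on the VMA:

/* A task counts as having accessed the VMA if its bit is set in
 * either window.
 */
pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
accessed = test_bit(current->pid % BITS_PER_LONG, &pids);

/* Once per VMA_PID_RESET_PERIOD, age the windows: the current window
 * becomes the old one and a fresh window starts collecting faults.
 */
if (time_after(jiffies, vma->numab_state->next_pid_reset)) {
	vma->numab_state->next_pid_reset +=
		msecs_to_jiffies(VMA_PID_RESET_PERIOD);
	vma->numab_state->access_pids[0] = vma->numab_state->access_pids[1];
	vma->numab_state->access_pids[1] = 0;
}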

Suggested-by: Mel Gorman <[email protected]>
Signed-off-by: Raghavendra K T <[email protected]>
---
include/linux/mm.h | 4 ++--
include/linux/mm_types.h | 3 ++-
kernel/sched/fair.c | 23 +++++++++++++++++++++--
3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7e4484af05b..5232ebb34145 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1672,8 +1672,8 @@ static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
unsigned int pid_bit;

pid_bit = current->pid % BITS_PER_LONG;
- if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids)) {
- __set_bit(pid_bit, &vma->numab_state->access_pids);
+ if (vma->numab_state && !test_bit(pid_bit, &vma->numab_state->access_pids[1])) {
+ __set_bit(pid_bit, &vma->numab_state->access_pids[1]);
}
}
#else /* !CONFIG_NUMA_BALANCING */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index df4e0bc66d17..e17bdd10dc15 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -477,7 +477,8 @@ struct vma_lock {

struct vma_numab_state {
unsigned long next_scan;
- unsigned long access_pids;
+ unsigned long next_pid_reset;
+ unsigned long access_pids[2];
};

/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9443ae9db028..a93e7a33281f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2930,6 +2930,7 @@ static void reset_ptenuma_scan(struct task_struct *p)

static bool vma_is_accessed(struct vm_area_struct *vma)
{
+ unsigned long pids;
/*
* Allow unconditional access first two times, so that all the (pages)
* of VMAs get prot_none fault introduced irrespective of accesses.
@@ -2939,10 +2940,12 @@ static bool vma_is_accessed(struct vm_area_struct *vma)
if (READ_ONCE(current->mm->numa_scan_seq) < 2)
return true;

- return test_bit(current->pid % BITS_PER_LONG,
- &vma->numab_state->access_pids);
+ pids = vma->numab_state->access_pids[0] | vma->numab_state->access_pids[1];
+ return test_bit(current->pid % BITS_PER_LONG, &pids);
}

+#define VMA_PID_RESET_PERIOD (4 * sysctl_numa_balancing_scan_delay)
+
/*
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
@@ -3051,6 +3054,10 @@ static void task_numa_work(struct callback_head *work)

vma->numab_state->next_scan = now +
msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+
+ /* Reset happens 4 times the scan delay after scan start */
+ vma->numab_state->next_pid_reset = vma->numab_state->next_scan +
+ msecs_to_jiffies(VMA_PID_RESET_PERIOD);
}

/*
@@ -3065,6 +3072,18 @@ static void task_numa_work(struct callback_head *work)
if (!vma_is_accessed(vma))
continue;

+ /*
+ * Reset access PIDs regularly for old VMAs. Resetting is done after the
+ * recent-access check, to avoid clearing PID info before it is used.
+ */
+ if (mm->numa_scan_seq &&
+ time_after(jiffies, vma->numab_state->next_pid_reset)) {
+ vma->numab_state->next_pid_reset = vma->numab_state->next_pid_reset +
+ msecs_to_jiffies(VMA_PID_RESET_PERIOD);
+ vma->numab_state->access_pids[0] = READ_ONCE(vma->numab_state->access_pids[1]);
+ vma->numab_state->access_pids[1] = 0;
+ }
+
do {
start = max(start, vma->vm_start);
end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
--
2.34.1