2022-10-23 15:40:38

by Chen Yu

Subject: [RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up

[Problem Statement]
For a workload that does frequent context switches, the throughput
scales well until the number of instances reaches a peak point. Beyond
that point, the throughput drops significantly as the number of
instances continues to increase.

The will-it-scale context_switch1 test case exposes the issue. The
test platform has 112 CPUs per LLC domain. will-it-scale launches
1, 8, 16 ... 112 instances in turn. Each instance is composed of
2 tasks, and each pair of tasks does ping-pong scheduling via
pipe_read() and pipe_write(). No task is bound to any CPU. It is
found that, once the number of instances exceeds 56 (112 tasks in
total, one task per CPU), the throughput drops as the instance
count continues to increase:

[ASCII plot: throughput (y-axis) versus number of instances (x-axis);
throughput rises with the instance count, peaks around 56 instances,
then declines steadily as the count grows further]

[Symptom analysis]
Both the perf profile and lockstat show that the bottleneck is the
runqueue spinlock. Take the perf profile as an example:

nr_instance      rq lock percentage
    1                  1.22%
    8                  1.17%
   16                  1.20%
   24                  1.22%
   32                  1.46%
   40                  1.61%
   48                  1.63%
   56                  1.65%
-----------------------------------------
   64                  3.77%    |
   72                  5.90%    |  increase
   80                  7.95%    |
   88                  9.98%    v
   96                 11.81%
  104                 13.54%
  112                 15.13%

And the rq lock bottleneck is composed of two paths (perf profile):

(path1):
raw_spin_rq_lock_nested.constprop.0;
try_to_wake_up;
default_wake_function;
autoremove_wake_function;
__wake_up_common;
__wake_up_common_lock;
__wake_up_sync_key;
pipe_write;
new_sync_write;
vfs_write;
ksys_write;
__x64_sys_write;
do_syscall_64;
entry_SYSCALL_64_after_hwframe;write

(path2):
raw_spin_rq_lock_nested.constprop.0;
__sched_text_start;
schedule_idle;
do_idle;
cpu_startup_entry;
start_secondary;
secondary_startup_64_no_verify

The idle percentage is around 30% when there are 112 instances:
%Cpu0 : 2.7 us, 66.7 sy, 0.0 ni, 30.7 id

As a comparison, if CPU affinity is set for these workloads, which
stops them from migrating among CPUs, the idle percentage drops to
nearly 0%, and the throughput increases by about 300%. This indicates
that there is room for optimization.

A possible scenario to describe the lock contention:
task A tries to wake up task B on CPU1, so task A grabs the runqueue
lock of CPU1. If CPU1 is about to exit idle at the same time, it needs
to grab its own runqueue lock, which has already been taken by the
waker. CPU1 then takes longer to exit idle, which hurts performance.

TTWU_QUEUE could mitigate the cross-CPU runqueue lock contention.
Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
the waker and leverages the idle CPU to queue the wakee. However, a
long idle duration is still observed. The idle task spends quite some
time in sched_ttwu_pending() before it switches out. This long idle
duration misleads SIS_UTIL, which then suggests that the waker scan
for more CPUs. The time spent searching for an idle CPU makes the
wakee wait longer, which in turn leads to more idle time. The
NEWLY_IDLE balance fails to pull tasks to the idle CPU, which might
be caused by no runnable wakee being found.

[Proposal]
If a system is busy, and the workloads are doing frequent context
switches, it might not be a good idea to spread the wakees onto
different CPUs. Instead, taking the task's running time into account
and enhancing wake affinity might be more applicable.

This idea was suggested by Rik at LPC 2019 when discussing latency
nice. He asked the following question: if P1 is a task with a small
time slice running on a CPU, can we put the waking task P2 on that
CPU and wait for P1 to release the CPU, without wasting time searching
for an idle CPU? At LPC 2021 Vincent Guittot proposed:
1. If the wakee is a long-running task, should we skip the short idle CPU?
2. If the wakee is a short-running task, can we put it onto a lightly
loaded local CPU?

Inspired by this, if the target CPU is running a short task, and that
task is the only runnable task on the target CPU, then the target CPU
can be chosen as the candidate when the system is busy.

The definition of a short task is: the average duration of the task
in each run is no more than sysctl_sched_min_granularity. If a task
switches in and then voluntarily relinquishes the CPU quickly, it is
regarded as a short task. sysctl_sched_min_granularity is chosen
because it is the minimal slice when there are too many runnable
tasks.
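
As an illustration only, the check introduced by patch 1/2 would look
roughly like the sketch below; the dur_avg field name and the exact
bookkeeping are assumptions here, patch 1/2 defines the real check:

static inline bool is_short_task(struct task_struct *p)
{
	/*
	 * Sketch: assume patch 1/2 maintains the task's average run
	 * duration (in ns) in p->se.dur_avg, updated each time the
	 * task voluntarily relinquishes the CPU.
	 */
	return p->se.dur_avg &&
	       (p->se.dur_avg <= sysctl_sched_min_granularity);
}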

Reuse the nr_idle_scan of SIS_UTIL to decide whether the system is
busy. If it is, then a compromised "idle" CPU might be acceptable.

The reasoning is that, if the waker is a short-duration task, it will
likely relinquish the CPU soon, and the wakee then has a chance to be
scheduled. The effect is that wake affinity is enhanced. But this
strategy should only take effect when the system is busy; otherwise,
it could inhibit spreading the workload when there are many idle CPUs
around, as Peter mentioned.
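
For reference, the nr value used above is the one SIS_UTIL already
computes in update_idle_cpu_scan() and that select_idle_cpu() reads
before scanning; roughly (mainline v6.0, quoted from memory, see the
second hunk below for the exact context):

	if (sched_feat(SIS_UTIL)) {
		sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
		if (sd_share) {
			/* because !--nr is the condition to stop scan */
			nr = READ_ONCE(sd_share->nr_idle_scan) + 1;
			/* overloaded LLC is unlikely to have idle cpu/core */
			if (nr == 1)
				return -1;
		}
	}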

[Benchmark results]
The baseline is v6.0. The test platform has 56 Cores (112 CPUs) per
LLC domain. It was first tested with SNC (Sub-NUMA Cluster) disabled,
then with SNC4 enabled (each cluster has 28 CPUs), to evaluate the
impact on a small LLC domain.

[SNC disabled]
The throughput of will-it-scale.context_switch1 has been increased by
331.13% with this patch applied.

netperf
=======
case load baseline(std%) compare%( std%)
TCP_RR 28 threads 1.00 ( 0.61) -0.38 ( 0.66)
TCP_RR 56 threads 1.00 ( 0.51) -0.11 ( 0.52)
TCP_RR 84 threads 1.00 ( 0.30) -0.98 ( 0.28)
TCP_RR 112 threads 1.00 ( 0.22) -1.07 ( 0.21)
TCP_RR 140 threads 1.00 ( 0.19) +185.34 ( 9.21)
TCP_RR 168 threads 1.00 ( 0.17) +195.31 ( 9.48)
TCP_RR 196 threads 1.00 ( 13.32) +0.17 ( 13.39)
TCP_RR 224 threads 1.00 ( 8.81) +0.50 ( 7.18)
UDP_RR 28 threads 1.00 ( 0.94) -0.56 ( 1.03)
UDP_RR 56 threads 1.00 ( 0.82) -0.67 ( 0.83)
UDP_RR 84 threads 1.00 ( 0.15) -2.34 ( 0.71)
UDP_RR 112 threads 1.00 ( 5.54) -2.92 ( 8.35)
UDP_RR 140 threads 1.00 ( 4.90) +139.71 ( 14.04)
UDP_RR 168 threads 1.00 ( 10.56) +151.51 ( 11.16)
UDP_RR 196 threads 1.00 ( 18.68) -4.32 ( 16.22)
UDP_RR 224 threads 1.00 ( 12.84) -4.56 ( 14.15)

hackbench
=========
case load baseline(std%) compare%( std%)
process-pipe 1 group 1.00 ( 1.21) -1.06 ( 0.59)
process-pipe 2 groups 1.00 ( 1.35) -1.21 ( 0.69)
process-pipe 4 groups 1.00 ( 0.36) -0.68 ( 0.15)
process-pipe 8 groups 1.00 ( 0.06) +2.24 ( 0.14)
process-sockets 1 group 1.00 ( 1.04) +2.69 ( 1.18)
process-sockets 2 groups 1.00 ( 2.12) +0.48 ( 1.80)
process-sockets 4 groups 1.00 ( 0.10) -2.30 ( 0.09)
process-sockets 8 groups 1.00 ( 0.04) -1.84 ( 0.06)
threads-pipe 1 group 1.00 ( 0.47) -0.70 ( 1.13)
threads-pipe 2 groups 1.00 ( 0.32) +0.15 ( 0.66)
threads-pipe 4 groups 1.00 ( 0.64) -0.26 ( 0.69)
threads-pipe 8 groups 1.00 ( 0.04) +3.99 ( 0.04)
threads-sockets 1 group 1.00 ( 1.39) -5.40 ( 2.07)
threads-sockets 2 groups 1.00 ( 0.79) -1.32 ( 2.07)
threads-sockets 4 groups 1.00 ( 0.23) -2.08 ( 0.08)
threads-sockets 8 groups 1.00 ( 0.05) -1.84 ( 0.03)

tbench
======
case load baseline(std%) compare%( std%)
loopback 28 threads 1.00 ( 0.12) -0.45 ( 0.09)
loopback 56 threads 1.00 ( 0.34) -0.29 ( 0.10)
loopback 84 threads 1.00 ( 0.06) -0.36 ( 0.05)
loopback 112 threads 1.00 ( 0.05) +0.19 ( 0.05)
loopback 140 threads 1.00 ( 0.28) -4.02 ( 0.10)
loopback 168 threads 1.00 ( 0.31) -3.36 ( 0.33)
loopback 196 threads 1.00 ( 0.25) -2.91 ( 0.28)
loopback 224 threads 1.00 ( 0.15) -3.42 ( 0.22)

schbench
========
case load baseline(std%) compare%( std%)
normal 1 mthread 1.00 ( 0.00) +28.40 ( 0.00)
normal 2 mthreads 1.00 ( 0.00) +8.20 ( 0.00)
normal 4 mthreads 1.00 ( 0.00) +7.58 ( 0.00)
normal 8 mthreads 1.00 ( 0.00) -3.91 ( 0.00)

[SNC4 enabled]
Each LLC domain now has 14 Cores/28 CPUs.

netperf
=======
case load baseline(std%) compare%( std%)
TCP_RR 28 threads 1.00 ( 2.92) +0.21 ( 2.48)
TCP_RR 56 threads 1.00 ( 1.48) -0.15 ( 1.49)
TCP_RR 84 threads 1.00 ( 1.82) +3.29 ( 2.00)
TCP_RR 112 threads 1.00 ( 25.85) +126.43 ( 0.74)
TCP_RR 140 threads 1.00 ( 6.01) -0.20 ( 6.38)
TCP_RR 168 threads 1.00 ( 7.21) -0.13 ( 7.31)
TCP_RR 196 threads 1.00 ( 12.60) -0.28 ( 12.49)
TCP_RR 224 threads 1.00 ( 12.53) -0.29 ( 12.35)
UDP_RR 28 threads 1.00 ( 2.29) -0.69 ( 1.65)
UDP_RR 56 threads 1.00 ( 0.86) -1.30 ( 7.79)
UDP_RR 84 threads 1.00 ( 6.56) +3.11 ( 10.79)
UDP_RR 112 threads 1.00 ( 5.74) +132.30 ( 6.80)
UDP_RR 140 threads 1.00 ( 12.85) -6.79 ( 8.45)
UDP_RR 168 threads 1.00 ( 13.23) -6.69 ( 9.44)
UDP_RR 196 threads 1.00 ( 14.86) -7.59 ( 17.78)
UDP_RR 224 threads 1.00 ( 13.84) -7.01 ( 14.75)

tbench
======
case load baseline(std%) compare%( std%)
loopback 28 threads 1.00 ( 0.27) -0.80 ( 0.33)
loopback 56 threads 1.00 ( 0.59) +0.18 ( 0.53)
loopback 84 threads 1.00 ( 0.23) +2.63 ( 0.48)
loopback 112 threads 1.00 ( 1.50) +6.56 ( 0.28)
loopback 140 threads 1.00 ( 0.35) +3.77 ( 1.67)
loopback 168 threads 1.00 ( 0.69) +4.86 ( 0.12)
loopback 196 threads 1.00 ( 0.91) +3.95 ( 0.34)
loopback 224 threads 1.00 ( 0.26) +4.15 ( 0.06)

hackbench
=========
case load baseline(std%) compare%( std%)
process-pipe 1 group 1.00 ( 1.30) +0.52 ( 0.32)
process-pipe 2 groups 1.00 ( 1.26) +2.20 ( 1.42)
process-pipe 4 groups 1.00 ( 2.60) -4.01 ( 1.31)
process-pipe 8 groups 1.00 ( 1.01) +0.58 ( 1.26)
process-sockets 1 group 1.00 ( 2.98) -2.06 ( 1.54)
process-sockets 2 groups 1.00 ( 0.62) -1.56 ( 0.19)
process-sockets 4 groups 1.00 ( 1.88) +0.57 ( 0.99)
process-sockets 8 groups 1.00 ( 0.23) -0.60 ( 0.17)
threads-pipe 1 group 1.00 ( 0.68) +1.27 ( 0.39)
threads-pipe 2 groups 1.00 ( 1.56) +0.85 ( 2.82)
threads-pipe 4 groups 1.00 ( 3.16) +0.26 ( 1.72)
threads-pipe 8 groups 1.00 ( 1.03) +2.28 ( 0.95)
threads-sockets 1 group 1.00 ( 1.68) -1.41 ( 3.78)
threads-sockets 2 groups 1.00 ( 0.13) -1.70 ( 0.88)
threads-sockets 4 groups 1.00 ( 5.48) -4.99 ( 2.66)
threads-sockets 8 groups 1.00 ( 0.06) -0.41 ( 0.10)

schbench
========
case load baseline(std%) compare%( std%)
normal 1 mthread 1.00 ( 0.00) -7.81 ( 0.00)*
normal 2 mthreads 1.00 ( 0.00) +6.25 ( 0.00)
normal 4 mthreads 1.00 ( 0.00) +22.50 ( 0.00)
normal 8 mthreads 1.00 ( 0.00) +6.99 ( 0.00)

In summary, overall there is no significant performance regression
detected, and there is improvement in some cases. The schbench result
is quite unstable when there is 1 mthread, so the -7.81 regression
might not be valid. Other than that, netperf and schbench have shown
improvement in the partially-busy case.

This patch is more about enhancing wake affinity than improving SIS
efficiency, so Mel's SIS statistics patch was not deployed.

[Limitations]
When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
CPUs, the LLC domain is regarded as relatively busy. However, the 60% is
somewhat arbitrary, because it indicates that the util_avg% is around 50%,
a half busy LLC. I don't have other lightweight/accurate method in mind to
check if the LLC domain is busy or not. By far the test result looks good.
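
(In the code, the 60% cut-off is expressed as the integer comparison
5 * nr < 3 * sd->span_weight, which is equivalent to
nr < 0.6 * span_weight without resorting to floating point.)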

Suggested-by: Tim Chen <[email protected]>
Suggested-by: K Prateek Nayak <[email protected]>
Signed-off-by: Chen Yu <[email protected]>
---
kernel/sched/fair.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8820d0d14519..3a8ee6232c59 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6249,6 +6249,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(prev_cpu))
 		return prev_cpu;

+	/* The only running task is a short duration one. */
+	if (cpu_rq(this_cpu)->nr_running == 1 &&
+	    is_short_task(cpu_curr(this_cpu)))
+		return this_cpu;
+
 	return nr_cpumask_bits;
 }

@@ -6623,6 +6628,23 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 			/* overloaded LLC is unlikely to have idle cpu/core */
 			if (nr == 1)
 				return -1;
+
+			/*
+			 * If nr is smaller than 60% of llc_weight, it
+			 * indicates that the util_avg% is higher than 50%.
+			 * This is calculated by SIS_UTIL in
+			 * update_idle_cpu_scan(). The 50% util_avg indicates
+			 * a half-busy LLC domain. System busier than this
+			 * level could lower its bar to choose a compromised
+			 * "idle" CPU, so as to avoid the overhead of cross
+			 * CPU wakeup. If the task on target CPU is a short
+			 * duration one, and it is the only running task, pick
+			 * target directly.
+			 */
+			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
+			    cpu_rq(target)->nr_running == 1 &&
+			    is_short_task(cpu_curr(target)))
+				return target;
 		}
 	}

--
2.25.1


2022-11-02 08:46:44

by kernel test robot

Subject: Re: [RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up

Greetings,

FYI, we noticed a 131.8% improvement of stress-ng.vm-rw.ops_per_sec due to commit:

commit: 697253a9d6dbed4645b9cc8ff8520ff074ff48f0 ("[RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up")
url: https://github.com/intel-lab-lkp/linux/commits/Chen-Yu/sched-fair-Choose-the-CPU-where-short-task-is-running-during-wake-up/20221023-233434
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git fdf756f7127185eeffe00e918e66dfee797f3625
patch link: https://lore.kernel.org/lkml/1a34e009de0dbe5900c7b2c6074c8e0c04e8596a.1666531576.git.yu.c.chen@intel.com
patch subject: [RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up

in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
with following parameters:

nr_threads: 100%
testtime: 60s
class: memory
test: vm-rw
cpufreq_governor: performance


Details are as below:

=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
memory/gcc-11/performance/x86_64-rhel-8.3/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp6/vm-rw/stress-ng/60s

commit:
829d111895 ("sched/fair: Introduce short duration task check")
697253a9d6 ("sched/fair: Choose the CPU where short task is running during wake up")

829d11189582c49d 697253a9d6dbed4645b9cc8ff85
---------------- ---------------------------
%stddev %change %stddev
\ | \
7435055 +3807.6% 2.905e+08 stress-ng.time.involuntary_context_switches
120430 ? 3% -12.0% 106023 stress-ng.time.minor_page_faults
8873 +36.9% 12148 stress-ng.time.percent_of_cpu_this_job_got
5230 +37.4% 7187 stress-ng.time.system_time
286.34 +24.6% 356.82 stress-ng.time.user_time
2.511e+08 +17.3% 2.944e+08 stress-ng.time.voluntary_context_switches
1.262e+08 +131.8% 2.925e+08 stress-ng.vm-rw.ops
2102613 +131.8% 4874433 stress-ng.vm-rw.ops_per_sec
79179 ? 4% +12.2% 88811 ? 5% meminfo.Mapped
6580 -19.1% 5326 uptime.idle
1.833e+09 -65.6% 6.314e+08 ? 4% cpuidle..time
1.663e+08 -97.0% 4915816 ? 60% cpuidle..usage
1.996e+09 +129.6% 4.582e+09 numa-vmstat.node0.nr_foll_pin_acquired
1.996e+09 +129.6% 4.582e+09 numa-vmstat.node0.nr_foll_pin_released
2e+09 ? 2% +125.9% 4.519e+09 numa-vmstat.node1.nr_foll_pin_acquired
2e+09 ? 2% +125.9% 4.519e+09 numa-vmstat.node1.nr_foll_pin_released
26.17 -65.0% 9.17 ? 7% vmstat.cpu.id
111.50 +11.7% 124.50 vmstat.procs.r
6420994 +38.6% 8899424 vmstat.system.cs
977879 -69.2% 300736 vmstat.system.in
24.55 -17.7 6.82 ? 7% mpstat.cpu.all.idle%
2.79 -1.8 0.99 ? 2% mpstat.cpu.all.irq%
0.17 ? 3% -0.1 0.02 ? 19% mpstat.cpu.all.soft%
68.77 +19.1 87.92 mpstat.cpu.all.sys%
3.72 +0.5 4.26 mpstat.cpu.all.usr%
4.003e+09 +128.2% 9.134e+09 proc-vmstat.nr_foll_pin_acquired
4.003e+09 +128.2% 9.134e+09 proc-vmstat.nr_foll_pin_released
19813 ? 4% +12.3% 22254 ? 4% proc-vmstat.nr_mapped
100711 ? 2% -13.3% 87357 ? 3% proc-vmstat.numa_hint_faults
92618 ? 4% -13.2% 80424 proc-vmstat.numa_hint_faults_local
518490 -2.3% 506357 proc-vmstat.pgfault
81.24 +11.3 92.52 turbostat.Busy%
3227 -6.1% 3031 turbostat.Bzy_MHz
70721405 ? 3% -97.7% 1647001 ? 81% turbostat.C1
6.32 ? 2% -6.2 0.08 ? 74% turbostat.C1%
95028420 -97.2% 2652278 ? 57% turbostat.C1E
13.16 ? 18% -9.4 3.76 ? 62% turbostat.C1E%
16.36 ? 14% -74.9% 4.11 ? 59% turbostat.CPU%c1
0.16 ? 2% +88.8% 0.31 turbostat.IPC
64640303 -69.3% 19839421 turbostat.IRQ
132.00 ? 14% +50.6 182.63 turbostat.PKG_%
0.05 -0.0 0.01 ?100% turbostat.POLL%
182221 ? 19% -100.0% 26.69 ?223% sched_debug.cfs_rq:/.MIN_vruntime.avg
2444942 -99.9% 1808 ?223% sched_debug.cfs_rq:/.MIN_vruntime.max
634471 ? 8% -100.0% 212.22 ?223% sched_debug.cfs_rq:/.MIN_vruntime.stddev
2.58 ? 13% -38.7% 1.58 ? 11% sched_debug.cfs_rq:/.h_nr_running.max
0.52 ? 6% -39.0% 0.32 ? 6% sched_debug.cfs_rq:/.h_nr_running.stddev
206008 ?118% -88.0% 24663 ? 19% sched_debug.cfs_rq:/.load.max
23619 ?108% -82.9% 4047 ? 7% sched_debug.cfs_rq:/.load.stddev
182221 ? 19% -100.0% 26.69 ?223% sched_debug.cfs_rq:/.max_vruntime.avg
2444942 -99.9% 1808 ?223% sched_debug.cfs_rq:/.max_vruntime.max
634471 ? 8% -100.0% 212.22 ?223% sched_debug.cfs_rq:/.max_vruntime.stddev
2423739 +48.3% 3593823 sched_debug.cfs_rq:/.min_vruntime.avg
2482870 +50.7% 3740568 ? 2% sched_debug.cfs_rq:/.min_vruntime.max
2135362 +36.0% 2904507 sched_debug.cfs_rq:/.min_vruntime.min
36312 ? 22% +139.1% 86809 ? 7% sched_debug.cfs_rq:/.min_vruntime.stddev
0.44 ? 5% +23.1% 0.54 sched_debug.cfs_rq:/.nr_running.avg
0.35 ? 5% -56.5% 0.15 ? 7% sched_debug.cfs_rq:/.nr_running.stddev
1929 ? 11% -28.7% 1376 ? 18% sched_debug.cfs_rq:/.runnable_avg.max
154.25 ? 5% +174.1% 422.75 ? 22% sched_debug.cfs_rq:/.runnable_avg.min
368.90 ? 4% -50.7% 181.74 ? 13% sched_debug.cfs_rq:/.runnable_avg.stddev
-294805 +134.0% -689810 sched_debug.cfs_rq:/.spread0.min
36310 ? 21% +139.8% 87082 ? 7% sched_debug.cfs_rq:/.spread0.stddev
535.54 ? 3% +18.6% 635.03 sched_debug.cfs_rq:/.util_avg.avg
1521 ? 12% -28.2% 1092 ? 7% sched_debug.cfs_rq:/.util_avg.max
155.42 +130.0% 357.42 ? 25% sched_debug.cfs_rq:/.util_avg.min
296.64 ? 4% -44.3% 165.28 ? 5% sched_debug.cfs_rq:/.util_avg.stddev
196.62 ? 5% +141.7% 475.18 ? 2% sched_debug.cfs_rq:/.util_est_enqueued.avg
191.00 ? 6% -48.0% 99.34 ? 23% sched_debug.cfs_rq:/.util_est_enqueued.stddev
883143 ? 16% +99.5% 1762034 ? 53% sched_debug.cpu.avg_idle.max
5112 ? 8% -38.6% 3136 ? 13% sched_debug.cpu.avg_idle.min
121922 ? 9% +143.8% 297254 ? 70% sched_debug.cpu.avg_idle.stddev
5.19 ? 11% +119.5% 11.39 ? 21% sched_debug.cpu.clock.stddev
2406 ? 3% +31.2% 3157 sched_debug.cpu.curr->pid.avg
1818 ? 3% -64.7% 641.10 ? 11% sched_debug.cpu.curr->pid.stddev
24202 ? 13% +488.8% 142495 ?113% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ? 12% +38.2% 0.00 ? 8% sched_debug.cpu.next_balance.stddev
0.52 ? 4% +13.0% 0.59 sched_debug.cpu.nr_running.avg
2.42 ? 22% -34.5% 1.58 ? 11% sched_debug.cpu.nr_running.max
0.49 ? 9% -41.5% 0.29 ? 3% sched_debug.cpu.nr_running.stddev
1580988 +38.4% 2188325 sched_debug.cpu.nr_switches.avg
1630017 +40.3% 2286191 sched_debug.cpu.nr_switches.max
1389050 ? 2% +27.3% 1768693 ? 4% sched_debug.cpu.nr_switches.min
30550 ? 26% +93.2% 59014 ? 21% sched_debug.cpu.nr_switches.stddev
25.37 -97.5% 0.63 ? 28% perf-stat.i.MPKI
3.278e+10 +99.7% 6.545e+10 perf-stat.i.branch-instructions
0.43 ? 4% -0.1 0.28 ? 5% perf-stat.i.branch-miss-rate%
1.131e+08 +6.5% 1.204e+08 perf-stat.i.branch-misses
0.65 ? 19% +6.5 7.14 ? 26% perf-stat.i.cache-miss-rate%
18561546 ? 3% -60.0% 7428815 ? 8% perf-stat.i.cache-misses
4.386e+09 -97.3% 1.202e+08 ? 36% perf-stat.i.cache-references
6797620 +38.9% 9438603 perf-stat.i.context-switches
2.08 -47.5% 1.09 perf-stat.i.cpi
3.558e+11 +3.8% 3.694e+11 perf-stat.i.cpu-cycles
2285969 -99.9% 3198 ? 64% perf-stat.i.cpu-migrations
19017 ? 4% +161.7% 49759 ? 7% perf-stat.i.cycles-between-cache-misses
0.23 ? 7% -0.2 0.01 ? 49% perf-stat.i.dTLB-load-miss-rate%
97291251 ? 7% -99.0% 969721 ? 18% perf-stat.i.dTLB-load-misses
4.155e+10 +97.7% 8.216e+10 perf-stat.i.dTLB-loads
0.11 ? 4% -0.1 0.00 ? 48% perf-stat.i.dTLB-store-miss-rate%
26172452 ? 4% -99.5% 123305 ? 15% perf-stat.i.dTLB-store-misses
2.359e+10 +97.5% 4.659e+10 perf-stat.i.dTLB-stores
1.688e+11 +100.8% 3.39e+11 perf-stat.i.instructions
0.49 +87.2% 0.92 perf-stat.i.ipc
2.77 +4.0% 2.89 perf-stat.i.metric.GHz
798.02 +90.2% 1517 perf-stat.i.metric.M/sec
4375 -3.8% 4209 perf-stat.i.minor-faults
91.07 +5.2 96.32 perf-stat.i.node-load-miss-rate%
3387594 ? 6% -35.5% 2184384 ? 5% perf-stat.i.node-load-misses
329242 ? 16% -79.2% 68577 ? 6% perf-stat.i.node-loads
69.98 ? 2% +24.2 94.21 perf-stat.i.node-store-miss-rate%
3757932 ? 4% +8.9% 4092914 ? 3% perf-stat.i.node-store-misses
1521063 ? 4% -89.0% 167096 ? 15% perf-stat.i.node-stores
4375 -3.8% 4209 perf-stat.i.page-faults
25.98 -98.6% 0.36 ? 36% perf-stat.overall.MPKI
0.34 -0.2 0.18 perf-stat.overall.branch-miss-rate%
0.42 ? 3% +6.4 6.83 ? 26% perf-stat.overall.cache-miss-rate%
2.11 -48.3% 1.09 perf-stat.overall.cpi
19191 ? 4% +160.7% 50036 ? 7% perf-stat.overall.cycles-between-cache-misses
0.23 ? 8% -0.2 0.00 ? 18% perf-stat.overall.dTLB-load-miss-rate%
0.11 ? 5% -0.1 0.00 ? 15% perf-stat.overall.dTLB-store-miss-rate%
0.47 +93.5% 0.92 perf-stat.overall.ipc
90.95 +5.8 96.76 perf-stat.overall.node-load-miss-rate%
71.15 ? 2% +25.0 96.11 perf-stat.overall.node-store-miss-rate%
3.222e+10 +100.1% 6.447e+10 perf-stat.ps.branch-instructions
1.111e+08 +6.7% 1.185e+08 perf-stat.ps.branch-misses
18267699 ? 3% -59.9% 7316963 ? 8% perf-stat.ps.cache-misses
4.312e+09 -97.3% 1.185e+08 ? 36% perf-stat.ps.cache-references
6681494 +39.1% 9295148 perf-stat.ps.context-switches
3.5e+11 +4.0% 3.639e+11 perf-stat.ps.cpu-cycles
2247108 -99.9% 3066 ? 63% perf-stat.ps.cpu-migrations
95654455 ? 7% -99.0% 978934 ? 18% perf-stat.ps.dTLB-load-misses
4.085e+10 +98.1% 8.092e+10 perf-stat.ps.dTLB-loads
25726088 ? 4% -99.5% 120909 ? 15% perf-stat.ps.dTLB-store-misses
2.319e+10 +97.9% 4.589e+10 perf-stat.ps.dTLB-stores
1.659e+11 +101.2% 3.339e+11 perf-stat.ps.instructions
4267 -3.3% 4125 perf-stat.ps.minor-faults
3332737 ? 6% -35.5% 2149192 ? 5% perf-stat.ps.node-load-misses
329640 ? 16% -78.3% 71563 ? 5% perf-stat.ps.node-loads
3695109 ? 4% +9.1% 4030054 ? 3% perf-stat.ps.node-store-misses
1496222 ? 4% -89.1% 163294 ? 15% perf-stat.ps.node-stores
4267 -3.3% 4125 perf-stat.ps.page-faults
1.055e+13 +100.4% 2.114e+13 perf-stat.total.instructions
18.71 -18.6 0.10 ?223% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
18.56 -18.5 0.10 ?223% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
18.55 -18.4 0.10 ?223% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
18.53 -18.4 0.10 ?223% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
13.24 -9.2 3.99 perf-profile.calltrace.cycles-pp.pipe_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.02 -9.2 6.78 perf-profile.calltrace.cycles-pp.read
14.88 -9.2 5.72 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
13.60 -9.1 4.45 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
14.68 -9.1 5.60 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
13.81 -8.7 5.08 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
8.37 -8.3 0.09 ?223% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
10.17 -8.1 2.03 ? 2% perf-profile.calltrace.cycles-pp.schedule.pipe_read.vfs_read.ksys_read.do_syscall_64
10.06 -8.1 1.94 ? 2% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_read.vfs_read.ksys_read
7.57 -7.6 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
7.55 -7.5 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
7.44 ? 2% -7.4 0.00 perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
9.10 -6.5 2.58 ? 3% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.vfs_write.ksys_write
9.27 -6.5 2.78 ? 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.vfs_write.ksys_write.do_syscall_64
11.36 -6.5 4.90 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
11.05 -6.4 4.62 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
10.54 -6.3 4.24 ? 2% perf-profile.calltrace.cycles-pp.pipe_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.73 -6.3 2.47 ? 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
8.76 -6.2 2.52 ? 3% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.vfs_write
6.84 ? 2% -6.0 0.86 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_read.vfs_read
21.14 -5.8 15.29 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.copy_page_to_iter.process_vm_rw_single_vec
5.64 ? 2% -5.6 0.00 perf-profile.calltrace.cycles-pp.sched_ttwu_pending.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary
21.40 -5.6 15.80 perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.copy_page_to_iter.process_vm_rw_single_vec.process_vm_rw_core
5.31 ? 2% -5.3 0.00 perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_read
5.11 ? 2% -5.1 0.00 perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.flush_smp_call_function_queue.do_idle.cpu_startup_entry
5.07 ? 2% -5.1 0.00 perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.flush_smp_call_function_queue.do_idle
11.46 -4.7 6.77 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
11.50 -4.6 6.92 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
23.22 -3.9 19.30 perf-profile.calltrace.cycles-pp._copy_to_iter.copy_page_to_iter.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw
11.71 -3.8 7.95 perf-profile.calltrace.cycles-pp.write
23.63 -3.3 20.33 perf-profile.calltrace.cycles-pp.copy_page_to_iter.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
1.04 ? 4% -0.5 0.56 ? 2% perf-profile.calltrace.cycles-pp.stress_vm_child
0.72 ? 4% +0.3 1.00 ? 5% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.72 ? 4% +0.3 1.04 ? 5% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.73 +0.5 1.19 perf-profile.calltrace.cycles-pp.__might_fault._copy_to_iter.copy_page_to_iter.process_vm_rw_single_vec.process_vm_rw_core
0.78 +0.5 1.32 perf-profile.calltrace.cycles-pp.stress_vm_rw
0.00 +0.5 0.55 perf-profile.calltrace.cycles-pp.__import_iovec.import_iovec.process_vm_rw.__x64_sys_process_vm_writev.do_syscall_64
0.00 +0.6 0.55 perf-profile.calltrace.cycles-pp.__import_iovec.import_iovec.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.import_iovec.process_vm_rw.__x64_sys_process_vm_writev.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.import_iovec.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.59 ? 2% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.vfs_write.ksys_write.do_syscall_64
0.00 +0.6 0.63 perf-profile.calltrace.cycles-pp.__might_resched.__might_fault._copy_to_iter.copy_page_to_iter.process_vm_rw_single_vec
0.00 +0.6 0.64 perf-profile.calltrace.cycles-pp.__might_resched.__might_fault._copy_from_iter.copy_page_from_iter.process_vm_rw_single_vec
0.00 +1.1 1.12 perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +1.1 1.13 perf-profile.calltrace.cycles-pp.mod_node_page_state.gup_put_folio.unpin_user_pages.process_vm_rw_single_vec.process_vm_rw_core
0.00 +1.1 1.13 perf-profile.calltrace.cycles-pp.__might_resched.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core
0.00 +1.1 1.14 perf-profile.calltrace.cycles-pp.mod_node_page_state.gup_put_folio.unpin_user_pages_dirty_lock.process_vm_rw_single_vec.process_vm_rw_core
0.00 +1.2 1.20 perf-profile.calltrace.cycles-pp.__might_fault._copy_from_iter.copy_page_from_iter.process_vm_rw_single_vec.process_vm_rw_core
0.00 +1.2 1.20 perf-profile.calltrace.cycles-pp.schedule.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.00 +1.3 1.28 perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.20 +1.6 2.78 perf-profile.calltrace.cycles-pp._raw_spin_lock.follow_page_pte.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec
0.00 +1.7 1.70 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.02 +1.7 2.75 perf-profile.calltrace.cycles-pp.gup_put_folio.unpin_user_pages.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw
0.00 +1.7 1.74 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.99 +1.8 2.81 perf-profile.calltrace.cycles-pp.gup_put_folio.unpin_user_pages_dirty_lock.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw
0.00 +2.0 1.98 perf-profile.calltrace.cycles-pp.follow_pud_mask.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core
1.31 +2.1 3.43 ? 2% perf-profile.calltrace.cycles-pp.unpin_user_pages.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
1.26 +2.3 3.57 ? 2% perf-profile.calltrace.cycles-pp.unpin_user_pages_dirty_lock.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_writev
0.00 +2.3 2.35 perf-profile.calltrace.cycles-pp.mod_node_page_state.try_grab_page.follow_page_pte.__get_user_pages.__get_user_pages_remote
0.00 +2.5 2.52 perf-profile.calltrace.cycles-pp.follow_page_mask.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core
0.00 +2.5 2.54 perf-profile.calltrace.cycles-pp.follow_pmd_mask.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core
3.06 ? 3% +2.6 5.68 ? 2% perf-profile.calltrace.cycles-pp.try_grab_page.follow_page_pte.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec
31.57 +6.5 38.03 perf-profile.calltrace.cycles-pp.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64
32.70 +6.8 39.48 perf-profile.calltrace.cycles-pp.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.21 +7.3 13.49 perf-profile.calltrace.cycles-pp.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_readv
33.27 +7.4 40.69 perf-profile.calltrace.cycles-pp.process_vm_rw.__x64_sys_process_vm_readv.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_readv
6.08 +7.5 13.54 perf-profile.calltrace.cycles-pp.follow_page_pte.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core
33.27 +7.5 40.74 perf-profile.calltrace.cycles-pp.__x64_sys_process_vm_readv.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_readv
33.41 +7.6 41.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_readv
33.51 +7.7 41.20 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.process_vm_readv
7.38 +7.7 15.08 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter.copy_page_from_iter.process_vm_rw_single_vec
33.87 +7.9 41.80 perf-profile.calltrace.cycles-pp.process_vm_readv
7.60 +8.0 15.61 perf-profile.calltrace.cycles-pp.copyin._copy_from_iter.copy_page_from_iter.process_vm_rw_single_vec.process_vm_rw_core
4.77 +8.3 13.10 perf-profile.calltrace.cycles-pp.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_writev
8.95 +10.2 19.11 perf-profile.calltrace.cycles-pp._copy_from_iter.copy_page_from_iter.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw
9.32 +10.7 20.02 perf-profile.calltrace.cycles-pp.copy_page_from_iter.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_writev
10.74 +15.2 25.97 perf-profile.calltrace.cycles-pp.__get_user_pages.__get_user_pages_remote.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw
15.69 +21.8 37.49 perf-profile.calltrace.cycles-pp.process_vm_rw_single_vec.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_writev.do_syscall_64
16.34 +22.6 38.93 perf-profile.calltrace.cycles-pp.process_vm_rw_core.process_vm_rw.__x64_sys_process_vm_writev.do_syscall_64.entry_SYSCALL_64_after_hwframe
16.79 +23.3 40.06 perf-profile.calltrace.cycles-pp.process_vm_rw.__x64_sys_process_vm_writev.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_writev
16.81 +23.3 40.11 perf-profile.calltrace.cycles-pp.__x64_sys_process_vm_writev.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_writev
16.88 +23.4 40.29 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.process_vm_writev
16.93 +23.5 40.41 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.process_vm_writev
17.20 +23.9 41.07 perf-profile.calltrace.cycles-pp.process_vm_writev
18.71 -18.5 0.24 ? 87% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
18.71 -18.5 0.24 ? 87% perf-profile.children.cycles-pp.cpu_startup_entry
18.69 -18.4 0.24 ? 87% perf-profile.children.cycles-pp.do_idle
18.56 -18.3 0.24 ? 88% perf-profile.children.cycles-pp.start_secondary
16.17 -9.2 6.94 perf-profile.children.cycles-pp.read
13.28 -9.2 4.07 perf-profile.children.cycles-pp.pipe_read
13.63 -9.2 4.47 perf-profile.children.cycles-pp.vfs_read
12.26 -9.1 3.14 perf-profile.children.cycles-pp.__schedule
13.81 -8.7 5.09 perf-profile.children.cycles-pp.ksys_read
8.44 -8.2 0.20 ? 88% perf-profile.children.cycles-pp.cpuidle_idle_call
7.62 ? 2% -7.6 0.00 perf-profile.children.cycles-pp.flush_smp_call_function_queue
7.64 -7.5 0.18 ? 88% perf-profile.children.cycles-pp.cpuidle_enter
7.62 -7.4 0.18 ? 88% perf-profile.children.cycles-pp.cpuidle_enter_state
7.22 ? 2% -7.2 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
7.28 ? 4% -7.2 0.12 ? 5% perf-profile.children.cycles-pp.update_cfs_group
10.19 -7.0 3.24 perf-profile.children.cycles-pp.schedule
7.02 -6.9 0.17 ? 88% perf-profile.children.cycles-pp.mwait_idle_with_hints
9.11 -6.5 2.59 ? 2% perf-profile.children.cycles-pp.__wake_up_common
9.28 -6.5 2.81 ? 2% perf-profile.children.cycles-pp.__wake_up_common_lock
11.38 -6.4 4.93 perf-profile.children.cycles-pp.ksys_write
11.07 -6.4 4.65 ? 2% perf-profile.children.cycles-pp.vfs_write
10.56 -6.3 4.29 ? 2% perf-profile.children.cycles-pp.pipe_write
8.74 -6.2 2.50 ? 3% perf-profile.children.cycles-pp.try_to_wake_up
8.76 -6.2 2.53 ? 3% perf-profile.children.cycles-pp.autoremove_wake_function
7.22 ? 2% -6.2 1.06 ? 5% perf-profile.children.cycles-pp.ttwu_do_activate
7.18 ? 2% -6.2 1.02 ? 5% perf-profile.children.cycles-pp.enqueue_task_fair
6.86 ? 2% -6.0 0.87 perf-profile.children.cycles-pp.dequeue_task_fair
22.00 -5.8 16.24 perf-profile.children.cycles-pp.copyout
5.66 ? 2% -5.1 0.52 ? 5% perf-profile.children.cycles-pp.enqueue_entity
5.34 ? 2% -4.9 0.44 ? 2% perf-profile.children.cycles-pp.dequeue_entity
5.60 ? 2% -4.7 0.85 perf-profile.children.cycles-pp.update_load_avg
4.27 -4.2 0.08 ? 85% perf-profile.children.cycles-pp.intel_idle
23.89 -4.0 19.90 perf-profile.children.cycles-pp._copy_to_iter
11.78 -3.7 8.11 perf-profile.children.cycles-pp.write
24.29 -3.4 20.93 perf-profile.children.cycles-pp.copy_page_to_iter
3.47 -3.2 0.31 ? 13% perf-profile.children.cycles-pp.select_task_rq
3.37 -3.1 0.26 ? 15% perf-profile.children.cycles-pp.select_task_rq_fair
2.96 -2.8 0.15 ? 28% perf-profile.children.cycles-pp.select_idle_sibling
2.83 ? 2% -2.7 0.09 ? 89% perf-profile.children.cycles-pp.intel_idle_irq
2.53 -2.5 0.06 ? 9% perf-profile.children.cycles-pp.select_idle_cpu
0.89 -0.8 0.11 perf-profile.children.cycles-pp.finish_task_switch
0.80 -0.8 0.04 ? 71% perf-profile.children.cycles-pp.ttwu_queue_wakelist
1.00 -0.6 0.37 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.90 -0.6 0.34 perf-profile.children.cycles-pp.prepare_to_wait_event
0.60 ? 2% -0.5 0.07 ? 5% perf-profile.children.cycles-pp.switch_mm_irqs_off
1.04 ? 4% -0.5 0.57 ? 2% perf-profile.children.cycles-pp.stress_vm_child
0.90 -0.4 0.52 ? 3% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.66 ? 2% -0.3 0.32 ? 3% perf-profile.children.cycles-pp.prepare_task_switch
0.57 ? 2% -0.3 0.25 perf-profile.children.cycles-pp.update_rq_clock
0.77 -0.3 0.47 ? 3% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.50 -0.3 0.22 ? 4% perf-profile.children.cycles-pp.___perf_sw_event
0.68 -0.3 0.40 perf-profile.children.cycles-pp.__switch_to_asm
0.50 -0.2 0.29 perf-profile.children.cycles-pp.security_file_permission
0.26 ? 5% -0.2 0.08 ? 14% perf-profile.children.cycles-pp.task_tick_fair
0.50 -0.2 0.33 perf-profile.children.cycles-pp.set_next_entity
0.43 ? 3% -0.2 0.26 ? 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.36 ? 3% -0.2 0.21 ? 6% perf-profile.children.cycles-pp.tick_sched_handle
0.40 ? 2% -0.2 0.24 perf-profile.children.cycles-pp.apparmor_file_permission
0.38 ? 3% -0.2 0.23 ? 6% perf-profile.children.cycles-pp.tick_sched_timer
0.36 ? 4% -0.2 0.21 ? 6% perf-profile.children.cycles-pp.update_process_times
0.32 ? 4% -0.1 0.17 ? 5% perf-profile.children.cycles-pp.scheduler_tick
0.25 ? 2% -0.1 0.10 ? 13% perf-profile.children.cycles-pp.find_vma
0.69 -0.1 0.55 ? 2% perf-profile.children.cycles-pp.mutex_lock
0.26 -0.1 0.12 ? 11% perf-profile.children.cycles-pp.find_extend_vma
0.38 ? 2% -0.1 0.26 perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.54 ? 2% -0.1 0.43 ? 4% perf-profile.children.cycles-pp.hrtimer_interrupt
0.54 ? 2% -0.1 0.43 ? 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.26 -0.1 0.16 ? 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.34 ? 2% -0.1 0.24 ? 2% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.17 ? 2% -0.1 0.08 ? 15% perf-profile.children.cycles-pp.vmacache_find
0.76 -0.1 0.69 ? 2% perf-profile.children.cycles-pp.switch_fpu_return
0.51 -0.1 0.43 perf-profile.children.cycles-pp.__switch_to
0.20 -0.1 0.14 ? 3% perf-profile.children.cycles-pp.native_sched_clock
0.18 -0.1 0.12 ? 5% perf-profile.children.cycles-pp.perf_tp_event
0.10 ? 6% -0.0 0.05 ? 7% perf-profile.children.cycles-pp.anon_pipe_buf_release
0.58 -0.0 0.54 ? 2% perf-profile.children.cycles-pp.restore_fpregs_from_fpstate
0.13 -0.0 0.10 ? 3% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.10 -0.0 0.07 ? 6% perf-profile.children.cycles-pp.perf_trace_buf_update
0.24 -0.0 0.22 ? 2% perf-profile.children.cycles-pp.mutex_unlock
0.08 -0.0 0.06 perf-profile.children.cycles-pp.__list_add_valid
0.11 ? 5% -0.0 0.09 perf-profile.children.cycles-pp.aa_file_perm
0.22 ? 2% -0.0 0.20 ? 2% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.19 ? 2% +0.0 0.21 ? 2% perf-profile.children.cycles-pp.mmput
0.05 +0.0 0.07 perf-profile.children.cycles-pp.__list_del_entry_valid
0.06 ? 8% +0.0 0.08 ? 4% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.06 ? 9% +0.0 0.08 perf-profile.children.cycles-pp.__rdgsbase_inactive
0.13 +0.0 0.16 ? 3% perf-profile.children.cycles-pp.atime_needs_update
0.05 ? 8% +0.0 0.09 perf-profile.children.cycles-pp.pick_next_entity
0.19 +0.0 0.23 perf-profile.children.cycles-pp.down_read_killable
0.28 +0.0 0.32 ? 3% perf-profile.children.cycles-pp.__update_load_avg_se
0.02 ? 99% +0.0 0.07 perf-profile.children.cycles-pp.__get_task_ioprio
0.04 ? 44% +0.0 0.09 ? 8% perf-profile.children.cycles-pp.perf_trace_sched_switch
0.04 ? 44% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.09 ? 5% +0.1 0.14 ? 3% perf-profile.children.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.05 perf-profile.children.cycles-pp.set_next_buddy
0.12 +0.1 0.17 ? 6% perf-profile.children.cycles-pp.file_update_time
0.00 +0.1 0.05 ? 8% perf-profile.children.cycles-pp.fsnotify_perm
0.00 +0.1 0.06 ? 20% perf-profile.children.cycles-pp.hrtimer_active
0.00 +0.1 0.06 ? 13% perf-profile.children.cycles-pp.resched_curr
0.14 ? 3% +0.1 0.20 ? 2% perf-profile.children.cycles-pp.down_read
0.00 +0.1 0.06 perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.01 ?223% +0.1 0.08 ? 6% perf-profile.children.cycles-pp.kmalloc_slab
0.13 ? 4% +0.1 0.20 perf-profile.children.cycles-pp.get_task_mm
0.07 ? 11% +0.1 0.14 ? 10% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.07 ? 5% perf-profile.children.cycles-pp.idr_find
0.13 ? 2% +0.1 0.21 perf-profile.children.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.08 ? 16% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.11 +0.1 0.20 ? 3% perf-profile.children.cycles-pp.check_preempt_curr
0.08 ? 6% +0.1 0.17 ? 2% perf-profile.children.cycles-pp.up_read
0.01 ?223% +0.1 0.10 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.00 +0.1 0.10 ? 5% perf-profile.children.cycles-pp.__calc_delta
0.00 +0.1 0.10 perf-profile.children.cycles-pp.check_stack_object
0.81 +0.1 0.93 ? 3% perf-profile.children.cycles-pp.pick_next_task_fair
0.07 +0.1 0.19 perf-profile.children.cycles-pp.put_prev_entity
0.13 +0.1 0.25 ? 2% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.06 +0.1 0.19 ? 7% perf-profile.children.cycles-pp.current_time
0.16 +0.1 0.30 ? 3% perf-profile.children.cycles-pp.reweight_entity
0.13 +0.1 0.26 perf-profile.children.cycles-pp.os_xsave
0.64 +0.2 0.79 perf-profile.children.cycles-pp.find_get_task_by_vpid
0.00 +0.2 0.16 ? 3% perf-profile.children.cycles-pp.check_preempt_wakeup
0.11 ? 3% +0.2 0.27 perf-profile.children.cycles-pp.__check_object_size
0.10 ? 3% +0.2 0.27 perf-profile.children.cycles-pp.follow_huge_addr
0.18 ? 2% +0.2 0.36 ? 3% perf-profile.children.cycles-pp.__radix_tree_lookup
0.37 ? 2% +0.2 0.55 perf-profile.children.cycles-pp.mm_access
0.12 ? 4% +0.2 0.30 ? 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.13 ? 3% +0.2 0.32 ? 2% perf-profile.children.cycles-pp.kfree
0.13 ? 3% +0.3 0.38 perf-profile.children.cycles-pp.mark_page_accessed
0.15 +0.3 0.42 perf-profile.children.cycles-pp.pud_huge
0.14 ? 3% +0.3 0.42 perf-profile.children.cycles-pp.pmd_huge
0.30 ? 2% +0.4 0.70 perf-profile.children.cycles-pp.__kmalloc
0.24 +0.4 0.66 perf-profile.children.cycles-pp.rcu_all_qs
0.28 ? 2% +0.4 0.73 perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.28 ? 2% +0.5 0.80 perf-profile.children.cycles-pp.folio_mark_accessed
0.43 +0.5 0.97 perf-profile.children.cycles-pp.__entry_text_start
0.79 +0.5 1.34 perf-profile.children.cycles-pp.stress_vm_rw
0.70 +0.6 1.32 perf-profile.children.cycles-pp.__might_sleep
0.46 ? 2% +0.7 1.12 perf-profile.children.cycles-pp.__import_iovec
0.34 ? 2% +0.7 1.02 perf-profile.children.cycles-pp.vm_normal_page
0.48 ? 2% +0.7 1.16 perf-profile.children.cycles-pp.import_iovec
0.59 +0.8 1.36 perf-profile.children.cycles-pp.__cond_resched
0.57 ? 8% +0.8 1.35 perf-profile.children.cycles-pp._copy_from_user
2.53 +1.0 3.51 perf-profile.children.cycles-pp._raw_spin_lock
0.76 ? 6% +1.1 1.84 perf-profile.children.cycles-pp.iovec_from_user
0.00 +1.3 1.29 perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.83 +1.3 2.16 perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.93 +1.4 2.37 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.80 +1.5 2.25 perf-profile.children.cycles-pp.follow_pud_mask
1.54 +1.7 3.26 perf-profile.children.cycles-pp.__might_fault
1.02 +1.8 2.79 perf-profile.children.cycles-pp.follow_page_mask
0.98 +1.8 2.82 perf-profile.children.cycles-pp.follow_pmd_mask
1.48 +1.9 3.37 perf-profile.children.cycles-pp.__might_resched
1.34 +2.2 3.49 ? 2% perf-profile.children.cycles-pp.unpin_user_pages
1.29 +2.4 3.66 ? 2% perf-profile.children.cycles-pp.unpin_user_pages_dirty_lock
29.90 +2.5 32.38 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.20 ? 3% +2.9 6.06 ? 2% perf-profile.children.cycles-pp.try_grab_page
1.38 +3.5 4.88 perf-profile.children.cycles-pp.mod_node_page_state
2.10 +3.7 5.81 perf-profile.children.cycles-pp.gup_put_folio
33.30 +7.5 40.76 perf-profile.children.cycles-pp.__x64_sys_process_vm_readv
33.96 +8.1 42.03 perf-profile.children.cycles-pp.process_vm_readv
6.43 +8.2 14.60 perf-profile.children.cycles-pp.follow_page_pte
7.85 +8.2 16.04 perf-profile.children.cycles-pp.copyin
9.42 +10.4 19.86 perf-profile.children.cycles-pp._copy_from_iter
9.80 +11.0 20.77 perf-profile.children.cycles-pp.copy_page_from_iter
10.93 +15.6 26.52 perf-profile.children.cycles-pp.__get_user_pages
10.98 +15.6 26.60 perf-profile.children.cycles-pp.__get_user_pages_remote
76.52 +17.3 93.84 perf-profile.children.cycles-pp.do_syscall_64
76.87 +17.5 94.33 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
16.81 +23.3 40.12 perf-profile.children.cycles-pp.__x64_sys_process_vm_writev
17.29 +24.0 41.30 perf-profile.children.cycles-pp.process_vm_writev
47.32 +28.4 75.68 perf-profile.children.cycles-pp.process_vm_rw_single_vec
49.06 +29.4 78.48 perf-profile.children.cycles-pp.process_vm_rw_core
50.08 +30.7 80.80 perf-profile.children.cycles-pp.process_vm_rw
7.27 ? 4% -7.2 0.11 ? 5% perf-profile.self.cycles-pp.update_cfs_group
6.92 -6.8 0.17 ? 88% perf-profile.self.cycles-pp.mwait_idle_with_hints
4.87 ? 2% -4.6 0.28 ? 2% perf-profile.self.cycles-pp.update_load_avg
1.02 -0.7 0.34 ? 2% perf-profile.self.cycles-pp.__schedule
0.98 -0.6 0.35 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.59 ? 2% -0.5 0.06 ? 7% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.98 -0.4 0.54 perf-profile.self.cycles-pp.stress_vm_child
0.44 ? 2% -0.4 0.08 ? 5% perf-profile.self.cycles-pp.update_rq_clock
0.60 ? 2% -0.3 0.30 perf-profile.self.cycles-pp.pipe_read
0.35 ? 2% -0.3 0.06 ? 8% perf-profile.self.cycles-pp.__wake_up_common
0.68 -0.3 0.40 perf-profile.self.cycles-pp.__switch_to_asm
0.44 -0.3 0.17 ? 2% perf-profile.self.cycles-pp.___perf_sw_event
0.34 ? 2% -0.3 0.08 ? 5% perf-profile.self.cycles-pp.finish_task_switch
0.34 -0.2 0.12 ? 4% perf-profile.self.cycles-pp.enqueue_entity
0.27 ? 2% -0.2 0.10 ? 4% perf-profile.self.cycles-pp.prepare_task_switch
0.26 ? 5% -0.2 0.10 ? 4% perf-profile.self.cycles-pp.try_to_wake_up
0.45 -0.2 0.30 ? 4% perf-profile.self.cycles-pp.mutex_lock
0.32 ? 15% -0.2 0.17 ? 4% perf-profile.self.cycles-pp.read
0.33 -0.1 0.19 perf-profile.self.cycles-pp.prepare_to_wait_event
0.28 ? 2% -0.1 0.14 ? 4% perf-profile.self.cycles-pp.apparmor_file_permission
0.18 ? 2% -0.1 0.04 ? 73% perf-profile.self.cycles-pp.select_idle_sibling
0.40 ? 2% -0.1 0.28 perf-profile.self.cycles-pp.update_curr
0.37 ? 2% -0.1 0.25 ? 2% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.50 -0.1 0.41 perf-profile.self.cycles-pp.__switch_to
0.15 ? 3% -0.1 0.07 ? 17% perf-profile.self.cycles-pp.vmacache_find
0.16 ? 4% -0.1 0.08 ? 4% perf-profile.self.cycles-pp.dequeue_entity
0.16 -0.1 0.09 ? 4% perf-profile.self.cycles-pp.dequeue_task_fair
0.20 ? 2% -0.1 0.13 perf-profile.self.cycles-pp.native_sched_clock
0.11 ? 3% -0.1 0.06 ? 9% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.40 -0.1 0.35 perf-profile.self.cycles-pp.find_get_task_by_vpid
0.19 -0.1 0.14 ? 3% perf-profile.self.cycles-pp.switch_fpu_return
0.10 -0.1 0.05 perf-profile.self.cycles-pp.select_task_rq
0.10 ? 4% -0.0 0.06 ? 7% perf-profile.self.cycles-pp.security_file_permission
0.58 -0.0 0.54 ? 2% perf-profile.self.cycles-pp.restore_fpregs_from_fpstate
0.10 -0.0 0.07 ? 5% perf-profile.self.cycles-pp.atime_needs_update
0.24 -0.0 0.21 ? 2% perf-profile.self.cycles-pp.mutex_unlock
0.22 ? 2% -0.0 0.19 ? 2% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.10 ? 3% -0.0 0.08 perf-profile.self.cycles-pp.aa_file_perm
0.08 ? 4% -0.0 0.06 ? 9% perf-profile.self.cycles-pp.perf_tp_event
0.05 +0.0 0.06 perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.04 ? 44% +0.0 0.06 perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.07 ? 5% +0.0 0.09 ? 6% perf-profile.self.cycles-pp.get_task_mm
0.09 +0.0 0.12 ? 3% perf-profile.self.cycles-pp.schedule
0.05 +0.0 0.08 perf-profile.self.cycles-pp.pick_next_entity
0.05 +0.0 0.08 perf-profile.self.cycles-pp.__rdgsbase_inactive
0.04 ? 44% +0.0 0.08 ? 6% perf-profile.self.cycles-pp.__get_user_pages_remote
0.08 ? 8% +0.0 0.12 ? 10% perf-profile.self.cycles-pp.ktime_get
0.25 +0.0 0.29 ? 3% perf-profile.self.cycles-pp.__update_load_avg_se
0.09 +0.0 0.14 ? 2% perf-profile.self.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.05 perf-profile.self.cycles-pp.file_update_time
0.00 +0.1 0.05 perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.00 +0.1 0.05 ? 7% perf-profile.self.cycles-pp.__list_del_entry_valid
0.01 ?223% +0.1 0.06 perf-profile.self.cycles-pp.__get_task_ioprio
0.00 +0.1 0.05 ? 8% perf-profile.self.cycles-pp.resched_curr
0.00 +0.1 0.06 ? 20% perf-profile.self.cycles-pp.hrtimer_active
0.10 ? 5% +0.1 0.15 ? 2% perf-profile.self.cycles-pp.vfs_write
0.02 ? 99% +0.1 0.08 ? 4% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.02 ? 99% +0.1 0.08 ? 8% perf-profile.self.cycles-pp.perf_trace_sched_switch
0.00 +0.1 0.06 perf-profile.self.cycles-pp.kmalloc_slab
0.00 +0.1 0.06 ? 6% perf-profile.self.cycles-pp.check_preempt_wakeup
0.00 +0.1 0.06 ? 6% perf-profile.self.cycles-pp.idr_find
0.00 +0.1 0.06 ? 7% perf-profile.self.cycles-pp.ksys_write
0.00 +0.1 0.07 perf-profile.self.cycles-pp.__wake_up_common_lock
0.00 +0.1 0.07 ? 17% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.13 +0.1 0.21 ? 4% perf-profile.self.cycles-pp.pick_next_task_fair
0.00 +0.1 0.08 perf-profile.self.cycles-pp.check_stack_object
0.05 +0.1 0.13 ? 3% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.09 ? 5% +0.1 0.17 ? 3% perf-profile.self.cycles-pp.write
0.05 +0.1 0.14 ? 2% perf-profile.self.cycles-pp.follow_huge_addr
0.07 ? 5% +0.1 0.16 perf-profile.self.cycles-pp.up_read
0.00 +0.1 0.09 ? 4% perf-profile.self.cycles-pp.exit_to_user_mode_loop
0.00 +0.1 0.09 ? 4% perf-profile.self.cycles-pp.__calc_delta
0.06 ? 7% +0.1 0.16 ? 3% perf-profile.self.cycles-pp.__check_object_size
0.11 +0.1 0.21 ? 2% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.00 +0.1 0.10 ? 3% perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.00 +0.1 0.10 perf-profile.self.cycles-pp.current_time
0.06 ? 6% +0.1 0.17 ? 2% perf-profile.self.cycles-pp.__import_iovec
0.15 ? 5% +0.1 0.27 perf-profile.self.cycles-pp.process_vm_rw
0.05 +0.1 0.17 perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.17 ? 2% +0.1 0.29 perf-profile.self.cycles-pp.__entry_text_start
0.07 +0.1 0.20 ? 3% perf-profile.self.cycles-pp._copy_from_user
0.13 ? 2% +0.1 0.26 perf-profile.self.cycles-pp.os_xsave
0.16 ? 2% +0.1 0.30 perf-profile.self.cycles-pp.pipe_write
0.19 ? 3% +0.1 0.32 ? 2% perf-profile.self.cycles-pp.process_vm_readv
0.09 +0.2 0.25 perf-profile.self.cycles-pp.iovec_from_user
0.36 +0.2 0.52 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.09 +0.2 0.26 perf-profile.self.cycles-pp.mark_page_accessed
0.18 ? 3% +0.2 0.35 ? 3% perf-profile.self.cycles-pp.__radix_tree_lookup
0.12 ? 4% +0.2 0.30 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.16 ? 2% +0.2 0.34 ? 2% perf-profile.self.cycles-pp.copyout
0.10 +0.2 0.28 perf-profile.self.cycles-pp.pud_huge
0.13 ? 2% +0.2 0.31 perf-profile.self.cycles-pp.process_vm_rw_core
0.13 ? 2% +0.2 0.31 perf-profile.self.cycles-pp.kfree
0.16 ? 2% +0.2 0.35 perf-profile.self.cycles-pp.do_syscall_64
0.14 ? 3% +0.2 0.33 ? 2% perf-profile.self.cycles-pp.process_vm_writev
0.09 ? 4% +0.2 0.28 perf-profile.self.cycles-pp.pmd_huge
0.13 ? 3% +0.2 0.35 ? 2% perf-profile.self.cycles-pp.copyin
0.13 ? 2% +0.2 0.35 perf-profile.self.cycles-pp.rcu_all_qs
0.17 ? 2% +0.2 0.40 perf-profile.self.cycles-pp.__kmalloc
0.23 ? 2% +0.3 0.57 perf-profile.self.cycles-pp.__might_fault
0.11 ? 3% +0.4 0.46 ? 4% perf-profile.self.cycles-pp.ksys_read
0.29 ? 3% +0.4 0.67 ? 6% perf-profile.self.cycles-pp.unpin_user_pages
0.23 ? 2% +0.4 0.66 perf-profile.self.cycles-pp.folio_mark_accessed
0.27 ? 2% +0.4 0.70 ? 2% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.43 +0.5 0.92 perf-profile.self.cycles-pp.copy_page_from_iter
0.26 ? 2% +0.5 0.76 perf-profile.self.cycles-pp.vm_normal_page
0.62 +0.5 1.12 perf-profile.self.cycles-pp.__might_sleep
0.27 ? 2% +0.5 0.78 ? 6% perf-profile.self.cycles-pp.unpin_user_pages_dirty_lock
0.31 +0.5 0.82 perf-profile.self.cycles-pp.__cond_resched
0.69 +0.6 1.25 perf-profile.self.cycles-pp.stress_vm_rw
0.41 +0.6 1.03 perf-profile.self.cycles-pp.copy_page_to_iter
0.51 +0.6 1.14 perf-profile.self.cycles-pp.process_vm_rw_single_vec
2.47 ? 4% +1.1 3.57 ? 4% perf-profile.self.cycles-pp.try_grab_page
0.64 +1.2 1.83 perf-profile.self.cycles-pp.follow_pud_mask
1.10 +1.2 2.32 perf-profile.self.cycles-pp._copy_to_iter
1.86 +1.3 3.21 perf-profile.self.cycles-pp._raw_spin_lock
0.96 +1.4 2.38 perf-profile.self.cycles-pp._copy_from_iter
0.84 +1.5 2.38 perf-profile.self.cycles-pp.follow_pmd_mask
1.31 +1.6 2.89 perf-profile.self.cycles-pp.__might_resched
0.92 +1.6 2.51 perf-profile.self.cycles-pp.follow_page_mask
0.86 +1.6 2.48 perf-profile.self.cycles-pp.__get_user_pages
1.41 +2.0 3.38 ? 2% perf-profile.self.cycles-pp.gup_put_folio
29.59 +2.4 32.03 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.52 +2.8 4.30 perf-profile.self.cycles-pp.follow_page_pte
1.24 +3.2 4.48 perf-profile.self.cycles-pp.mod_node_page_state



To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-11-03 13:27:15

by Peter Zijlstra

Subject: Re: [RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up

On Sun, Oct 23, 2022 at 11:33:39PM +0800, Chen Yu wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8820d0d14519..3a8ee6232c59 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6249,6 +6249,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> if (available_idle_cpu(prev_cpu))
> return prev_cpu;
>
> + /* The only running task is a short duration one. */
> + if (cpu_rq(this_cpu)->nr_running == 1 &&
> + is_short_task(cpu_curr(this_cpu)))
> + return this_cpu;
> +
> return nr_cpumask_bits;
> }

This is very close to using is_short_task() as dynamic WF_SYNC hint, no?

> @@ -6623,6 +6628,23 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> /* overloaded LLC is unlikely to have idle cpu/core */
> if (nr == 1)
> return -1;
> +
> + /*
> + * If nr is smaller than 60% of llc_weight, it
> + * indicates that the util_avg% is higher than 50%.
> + * This is calculated by SIS_UTIL in
> + * update_idle_cpu_scan(). The 50% util_avg indicates
> + * a half-busy LLC domain. System busier than this
> + * level could lower its bar to choose a compromised
> + * "idle" CPU, so as to avoid the overhead of cross
> + * CPU wakeup. If the task on target CPU is a short
> + * duration one, and it is the only running task, pick
> + * target directly.
> + */
> + if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> + cpu_rq(target)->nr_running == 1 &&
> + is_short_task(cpu_curr(target)))
> + return target;
> }
> }

And here you're basically saying that if the domain is 'busy' and the
task is short, don't spend time searching for a better location.

Should we perhaps only consider shortness; after all, spending more time
searching for an idle cpu than the task would've taken to run is daft.
Busyness of the domain seems unrelated to that.


Also, I'm not sure on your criteria for short; but I don't have enough
thoughts on that yet.

2022-11-04 04:44:38

by Chen Yu

Subject: Re: [RFC PATCH v2 2/2] sched/fair: Choose the CPU where short task is running during wake up

On 2022-11-03 at 14:04:43 +0100, Peter Zijlstra wrote:
> On Sun, Oct 23, 2022 at 11:33:39PM +0800, Chen Yu wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8820d0d14519..3a8ee6232c59 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6249,6 +6249,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> > if (available_idle_cpu(prev_cpu))
> > return prev_cpu;
> >
> > + /* The only running task is a short duration one. */
> > + if (cpu_rq(this_cpu)->nr_running == 1 &&
> > + is_short_task(cpu_curr(this_cpu)))
> > + return this_cpu;
> > +
> > return nr_cpumask_bits;
> > }
>
> This is very close to using is_short_task() as dynamic WF_SYNC hint, no?
>
Yes. I think a short task waker is a subset of WF_SYNC wake up, because a short
task waker might go to sleep soon after waking up the wakee.
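
For context (existing mainline code, not part of this patch), the
WF_SYNC handling in wake_affine_idle() that the new check sits a few
lines below is roughly:

	if (sync && cpu_rq(this_cpu)->nr_running == 1)
		return this_cpu;

So the new branch acts like a WF_SYNC hint derived from the observed
run duration of the current task, rather than from the flag passed in
by the waker.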
> > @@ -6623,6 +6628,23 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > /* overloaded LLC is unlikely to have idle cpu/core */
> > if (nr == 1)
> > return -1;
> > +
> > + /*
> > + * If nr is smaller than 60% of llc_weight, it
> > + * indicates that the util_avg% is higher than 50%.
> > + * This is calculated by SIS_UTIL in
> > + * update_idle_cpu_scan(). The 50% util_avg indicates
> > + * a half-busy LLC domain. System busier than this
> > + * level could lower its bar to choose a compromised
> > + * "idle" CPU, so as to avoid the overhead of cross
> > + * CPU wakeup. If the task on target CPU is a short
> > + * duration one, and it is the only running task, pick
> > + * target directly.
> > + */
> > + if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> > + cpu_rq(target)->nr_running == 1 &&
> > + is_short_task(cpu_curr(target)))
> > + return target;
> > }
> > }
>
> And here you're basically saying that if the domain is 'busy' and the
> task is short, don't spend time searching for a better location.
>
> Should we perhaps only consider shortness; after all, spending more time
> searching for an idle cpu than the task would've taken to run is daft.
> Business of the domain seems unrelated to that.
I see, this_sd->avg_scan_cost could be used for the comparison; I'll
give it a try.
>
>
> Also, I'm not sure on your criteria for short; but I don't have enough
> thoughts on that yet.
Yes, the criterion used to define a short task is arbitrary. If we
compare the avg_duration of a task with sd->avg_scan_cost, then we can
skip the definition of a short task altogether.
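
A rough sketch of that comparison, purely illustrative (it assumes the
per-task average run duration from patch 1/2 is available in a field
such as p->se.dur_avg, a hypothetical name here, and that both values
are in nanoseconds):

	/*
	 * Sketch: skip the idle-CPU search when the scan is expected
	 * to cost more than simply waiting for the current task on
	 * target to relinquish the CPU.
	 */
	if (cpu_rq(target)->nr_running == 1 &&
	    cpu_curr(target)->se.dur_avg <= this_sd->avg_scan_cost)
		return target;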

thanks,
Chenyu