Subject: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt

A severe qperf performance decrease was reported in the below use case:
on hardware with 2 NUMA nodes, node0 has cpu0-31 and node1 has cpu32-63.
The Ethernet NIC is attached to node1.

Run the below commands:
$ taskset -c 32-63 stress -c 32 &
$ qperf 192.168.50.166 tcp_lat
tcp_lat:
latency = 2.95ms.
Normally the latency should be less than 20us, but in the above test it
increased dramatically to 2.95ms.

This is caused by qperf ping-ponging between node0 and node1. Since it
is a sync wake-up and the waker's nr_running == 1, WAKE_AFFINE pulls
qperf to node1, but the load balancer soon migrates qperf back to node0.
Unlike a normal sync wake-up coming from a task, the waker in the above
test is an interrupt, and nr_running merely happens to be 1 because
stress runs 32 threads on node1's 32 cpus.

Testing also shows the performance of qperf won't drop if the number
of threads is increased to 64, 96 or larger values:
$ taskset -c 32-63 stress -c 96 &
$ qperf 192.168.50.166 tcp_lat
tcp_lat:
latency = 14.7us.

Obviously "-c 96" makes "cpu_rq(this_cpu)->nr_running == 1" false in
wake_affine_idle(), so WAKE_AFFINE won't pull qperf to node1.

To fix this issue, this patch checks that the waker of a sync wake-up
is a task rather than an interrupt. Only in that case can we expect the
waker to schedule out and give its CPU to the wakee.
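
For reference, in_task() boils down to a preempt_count() check: it is
true only when the CPU is neither in NMI, nor in hardirq, nor currently
serving a softirq. Quoted roughly from include/linux/preempt.h of this
era:

	#define in_task()	(!(preempt_count() & \
				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))

So for the qperf case above, where the wakeup comes from the network RX
path, in_task() is false and the waker's CPU is no longer blindly chosen.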

Reported-by: Yongjia Xie <[email protected]>
Signed-off-by: Barry Song <[email protected]>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d73bdbb2d40..8ad2d732033d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5829,7 +5829,12 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;

- if (sync && cpu_rq(this_cpu)->nr_running == 1)
+ /*
+ * If this is a sync wake-up, the only running thread is the
+ * waker, and the waker is a task rather than an interrupt,
+ * assume the wakee will soon get the waker's CPU.
+ */
+ if (sync && cpu_rq(this_cpu)->nr_running == 1 && in_task())
return this_cpu;

if (available_idle_cpu(prev_cpu))
--
2.25.1


2021-04-27 04:23:52

by Mike Galbraith

Subject: Re: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt

On Tue, 2021-04-27 at 14:37 +1200, Barry Song wrote:
>
> To fix this issue, this patch checks that the waker of a sync wake-up
> is a task rather than an interrupt. Only in that case can we expect the
> waker to schedule out and give its CPU to the wakee.

That rash "the waker will schedule out" assumption, ie this really
really is a synchronous wakeup, may be true in your particular case,
but the sync hint is so overused as to be fairly meaningless. We've
squabbled with it repeatedly over the years because of that. It really
should either become more of a communication of intent than it
currently is, or just go away.

I'd argue for go away, simply because there is no way for the kernel to
know that there isn't more work directly behind any particular wakeup.

Take a pipe, does shoving some bits through a pipe mean you have no
further use of your CPU? IFF you're doing nothing but playing ping-
pong, sure it does, but how many real proggies have zero overlap among
its threads of execution? The mere notion of threaded apps having no
overlap *to be converted to throughput* is dainbramaged, which should
be the death knell of the sync wakeup hint. Threaded apps can't do
stuff like, oh, networking, which uses the sync hint heavily, without
at least to some extent defeating the purpose of threading if we were
to take the hint seriously. Heck, just look at the beauty (gak) of
wake_wide(). It was born specifically to combat the pure-evil side of
the sync wakeup hint.
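
For reference, wake_wide() at the time was roughly the following
heuristic (abridged sketch of kernel/sched/fair.c, not the exact tree):
it only honours the affine path while the waker/wakee flip counts still
look like a 1:1 relationship relative to the LLC size:

	static int wake_wide(struct task_struct *p)
	{
		unsigned int master = current->wakee_flips;
		unsigned int slave = p->wakee_flips;
		int factor = __this_cpu_read(sd_llc_size);

		if (master < slave)
			swap(master, slave);
		/* Wake wide (skip the affine pull) once flips outgrow the LLC. */
		if (slave < factor || master < slave * factor)
			return 0;
		return 1;
	}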

Bah, 'nuff "Danger Will Robinson, that thing will *eat you*!!" ;-)

-Mike

Subject: RE: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt



> -----Original Message-----
> From: Mike Galbraith [mailto:[email protected]]
> Sent: Tuesday, April 27, 2021 4:22 PM
> Subject: Re: [PATCH] sched/fair: don't use waker's cpu if the waker of sync
> wake-up is interrupt
>
> On Tue, 2021-04-27 at 14:37 +1200, Barry Song wrote:
> >
> > To fix this issue, this patch checks that the waker of a sync wake-up
> > is a task rather than an interrupt. Only in that case can we expect
> > the waker to schedule out and give its CPU to the wakee.
>
> That rash "the waker will schedule out" assumption, ie this really
> really is a synchronous wakeup, may be true in your particular case,

Hi Mike,

In my particular case, the sync hint is used by an interrupt, not a
task, so "the waker will schedule out" is false because the currently
running task is not relevant at all.

Here the description "the waker will schedule out" just restates the
general idea of a sync wake-up, even though real code might overuse
this hint.

> but the sync hint is so overused as to be fairly meaningless. We've
> squabbled with it repeatedly over the years because of that. It really
> should either become more of a communication of intent than it
> currently is, or just go away.

I agree the sync hint might have been overused by other kernel
subsystems. But this patch will at least fix one case: the sync waker
is an interrupt. In that case, the running task has nothing to do with
either the waker or the wakee, so the case should be excluded from
wake_affine_idle().
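
For reference (abridged from net/core/sock.c of this era), this is how
such a sync-hinted wakeup can originate in interrupt context: the
network RX softirq delivers data and calls sock_def_readable(), whose
*_sync_poll() variant sets WF_SYNC even though no task is about to
schedule out:

	static void sock_def_readable(struct sock *sk)
	{
		struct socket_wq *wq;

		rcu_read_lock();
		wq = rcu_dereference(sk->sk_wq);
		if (skwq_has_sleeper(wq))
			/* sync-hinted wakeup, here issued from softirq */
			wake_up_interruptible_sync_poll(&wq->wait, EPOLLIN |
					EPOLLPRI | EPOLLRDNORM | EPOLLRDBAND);
		sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
		rcu_read_unlock();
	}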

>
> I'd argue for go away, simply because there is no way for the kernel to
> know that there isn't more work directly behind any particular wakeup.

To some extent, "go away" might be a good choice, but it is a more
aggressive action. For those cases where the waker really will schedule
out, the wakee would lose the advantage of getting the waker's
soon-to-be-idle CPU.

>
> Take a pipe, does shoving some bits through a pipe mean you have no
> further use of your CPU? IFF you're doing nothing but playing ping-
> pong, sure it does, but how many real proggies have zero overlap among
> its threads of execution? The mere notion of threaded apps having no
> overlap *to be converted to throughput* is dainbramaged, which should
> be the death knell of the sync wakeup hint. Threaded apps can't do
> stuff like, oh, networking, which uses the sync hint heavily, without
> at least to some extent defeating the purpose of threading if we were
> to take the hint seriously. Heck, just look at the beauty (gak) of
> wake_wide(). It was born specifically to combat the pure-evil side of
> the sync wakeup hint.

As above, removing the code that migrates the wakee to the sync
waker's CPU could be an option, but it needs more investigation.

>
> Bah, 'nuff "Danger Will Robinson, that thing will *eat you*!!" ;-)
>
> -Mike

Thanks
Barry

2021-04-27 05:58:17

by Mike Galbraith

Subject: Re: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt

On Tue, 2021-04-27 at 04:44 +0000, Song Bao Hua (Barry Song) wrote:
>
>
> I agree the sync hint might have been overused by other kernel
> subsystems. But this patch will at least fix one case: the sync waker
> is an interrupt. In that case, the running task has nothing to do with
> either the waker or the wakee, so the case should be excluded from
> wake_affine_idle().

I long ago tried filtering interrupt wakeups, and met some surprises.
Wakeup twiddling always manages to end up being a rob-Peter-to-pay-Paul
operation despite our best efforts; here's hoping that your pile of
stolen cycles is small enough to escape performance bot notice :)

-Mike

Subject: RE: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt



> -----Original Message-----
> From: Mike Galbraith [mailto:[email protected]]
> Sent: Tuesday, April 27, 2021 5:55 PM
> Subject: Re: [PATCH] sched/fair: don't use waker's cpu if the waker of sync
> wake-up is interrupt
>
> On Tue, 2021-04-27 at 04:44 +0000, Song Bao Hua (Barry Song) wrote:
> >
> >
> > I agree the sync hint might have been overused by other kernel
> > subsystems. But this patch will at least fix one case: the sync waker
> > is an interrupt. In that case, the running task has nothing to do
> > with either the waker or the wakee, so the case should be excluded
> > from wake_affine_idle().
>
> I long ago tried filtering interrupt wakeups, and met some surprises.
> Wakeup twiddling always manages to end up being a rob-Peter-to-pay-Paul
> operation despite our best efforts; here's hoping that your pile of
> stolen cycles is small enough to escape performance bot notice :)

Would you like to share a link to what you did before to filter
interrupt wakeups?

The wakeup path has hundreds of lines of code, so I don't expect that
reading preempt_count will cause performance losses visible to the bot.
But who knows :-)

>
> -Mike

Thanks
Barry

2021-04-27 06:18:06

by Mike Galbraith

Subject: Re: [PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt

On Tue, 2021-04-27 at 06:05 +0000, Song Bao Hua (Barry Song) wrote:
>
> Would you like to share a link to what you did before to filter
> interrupt wakeups?

Can't, failed experiments go only to my Bitmaster-9000 patch shredder
to avoid needing a snow plow to get near my box. Besides, it was long
ago in a much-changed code base.

-Mike

2021-05-05 13:08:05

by kernel test robot

Subject: [sched/fair] 5f94d1b650: stress-ng.sock.ops_per_sec -25.2% regression



Greetings,

FYI, we noticed a -25.2% regression of stress-ng.sock.ops_per_sec due to commit:


commit: 5f94d1b650d14e01f383e3d02e40c13a9ccecaf4 ("[PATCH] sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")
url: https://github.com/0day-ci/linux/commits/Barry-Song/sched-fair-don-t-use-waker-s-cpu-if-the-waker-of-sync-wake-up-is-interrupt/20210427-104620
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 2ea46c6fc9452ac100ad907b051d797225847e33

in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:

nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: sock
cpufreq_governor: performance
ucode: 0x5003006


In addition to that, the commit also has a significant impact on the following tests:

+------------------+-------------------------------------------------------------------------------------+
| testcase: change | apachebench: apachebench.requests_per_second -5.6% regression |
| test machine | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | cluster=cs-localhost |
| | concurrency=8000 |
| | cpufreq_governor=performance |
| | runtime=300s |
| | ucode=0x5003006 |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | netperf: netperf.Throughput_Mbps -52.4% regression |
| test machine | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cluster=cs-localhost |
| | cpufreq_governor=performance |
| | ip=ipv4 |
| | nr_threads=1 |
| | runtime=300s |
| | test=UDP_STREAM |
| | ucode=0x5003006 |
+------------------+-------------------------------------------------------------------------------------+


If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <[email protected]>


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
bin/lkp run generated-yaml-file

=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
os/gcc-9/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/sock/stress-ng/60s/0x5003006

commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")

2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
668165 -25.2% 499709 ± 7% stress-ng.sock.ops
11135 -25.2% 8328 ± 7% stress-ng.sock.ops_per_sec
2633 ± 5% -18.0% 2159 ± 8% stress-ng.time.involuntary_context_switches
11225 ± 2% -6.1% 10545 stress-ng.time.minor_page_faults
1283 -0.9% 1272 stress-ng.time.percent_of_cpu_this_job_got
32.76 -21.3% 25.78 ± 6% stress-ng.time.user_time
39827551 -9.3% 36139210 ± 4% stress-ng.time.voluntary_context_switches
0.53 -0.1 0.44 ± 4% mpstat.cpu.all.usr%
1218481 -8.8% 1111783 ± 4% vmstat.system.cs
36137621 +30.6% 47196861 ± 12% cpuidle.C1.time
9079420 +19.3% 10827569 ± 8% cpuidle.C1.usage
73682855 -15.0% 62612480 ± 6% cpuidle.POLL.time
30399709 -17.5% 25089524 ± 9% cpuidle.POLL.usage
20602586 ± 4% -23.7% 15709836 ± 12% numa-numastat.node0.local_node
20665440 ± 4% -23.8% 15744289 ± 11% numa-numastat.node0.numa_hit
21017060 ± 3% -25.6% 15638989 ± 12% numa-numastat.node1.local_node
21040859 ± 3% -25.4% 15691162 ± 12% numa-numastat.node1.numa_hit
11110566 -22.4% 8621534 ± 10% numa-vmstat.node0.numa_hit
11047694 -22.3% 8586688 ± 10% numa-vmstat.node0.numa_local
10652347 -23.7% 8131841 ± 8% numa-vmstat.node1.numa_hit
10462586 -24.4% 7914174 ± 9% numa-vmstat.node1.numa_local
6200 ± 2% -11.7% 5472 ± 4% slabinfo.dmaengine-unmap-16.active_objs
6204 ± 2% -11.7% 5478 ± 4% slabinfo.dmaengine-unmap-16.num_objs
8326 ± 2% -24.8% 6257 ± 8% slabinfo.sock_inode_cache.active_objs
8326 ± 2% -24.8% 6257 ± 8% slabinfo.sock_inode_cache.num_objs
9070645 +19.3% 10820329 ± 8% turbostat.C1
0.58 +0.2 0.77 ± 12% turbostat.C1%
54.00 +4.8% 56.57 turbostat.PkgTmp
202.21 +1.4% 205.06 turbostat.PkgWatt
80.33 +17.3% 94.21 ± 3% turbostat.RAMWatt
23149 -2.7% 22523 proc-vmstat.nr_slab_reclaimable
1054 ± 25% -59.0% 432.43 ± 20% proc-vmstat.numa_hint_faults
41537249 -24.9% 31186035 ± 7% proc-vmstat.numa_hit
41450577 -25.0% 31099388 ± 7% proc-vmstat.numa_local
3.312e+08 -24.8% 2.491e+08 ± 7% proc-vmstat.pgalloc_normal
3.311e+08 -24.8% 2.489e+08 ± 7% proc-vmstat.pgfree
273665 ± 3% -38.2% 169120 ± 17% interrupts.CAL:Function_call_interrupts
3560 ± 33% -44.7% 1968 ± 35% interrupts.CPU1.CAL:Function_call_interrupts
2900 ± 37% -50.5% 1436 ± 58% interrupts.CPU13.CAL:Function_call_interrupts
488.67 ± 29% -62.2% 184.71 ± 63% interrupts.CPU24.RES:Rescheduling_interrupts
3783 ± 19% -59.9% 1518 ± 40% interrupts.CPU25.CAL:Function_call_interrupts
282.67 ± 29% -56.6% 122.57 ± 54% interrupts.CPU25.RES:Rescheduling_interrupts
3736 ± 38% -66.6% 1246 ± 34% interrupts.CPU27.CAL:Function_call_interrupts
268.50 ± 39% -71.3% 77.00 ± 49% interrupts.CPU27.RES:Rescheduling_interrupts
3493 ± 41% -52.3% 1667 ± 40% interrupts.CPU36.CAL:Function_call_interrupts
3883 ± 54% -55.7% 1719 ± 46% interrupts.CPU5.CAL:Function_call_interrupts
4026 ± 20% -55.6% 1788 ± 37% interrupts.CPU50.CAL:Function_call_interrupts
4144 ± 23% -51.2% 2022 ± 70% interrupts.CPU50.NMI:Non-maskable_interrupts
4144 ± 23% -51.2% 2022 ± 70% interrupts.CPU50.PMI:Performance_monitoring_interrupts
2631 ± 38% -46.5% 1408 ± 35% interrupts.CPU52.CAL:Function_call_interrupts
4379 ± 54% -59.0% 1796 ± 78% interrupts.CPU55.CAL:Function_call_interrupts
2710 ± 35% -58.4% 1127 ± 63% interrupts.CPU56.CAL:Function_call_interrupts
183.67 ± 21% -65.4% 63.57 ± 58% interrupts.CPU56.RES:Rescheduling_interrupts
2969 ± 37% -52.6% 1407 ± 48% interrupts.CPU57.CAL:Function_call_interrupts
3712 ± 18% -73.9% 968.57 ± 96% interrupts.CPU60.NMI:Non-maskable_interrupts
3712 ± 18% -73.9% 968.57 ± 96% interrupts.CPU60.PMI:Performance_monitoring_interrupts
2946 ± 27% -56.2% 1289 ± 52% interrupts.CPU61.CAL:Function_call_interrupts
2221 ± 35% -59.7% 895.29 ± 89% interrupts.CPU61.NMI:Non-maskable_interrupts
2221 ± 35% -59.7% 895.29 ± 89% interrupts.CPU61.PMI:Performance_monitoring_interrupts
186.50 ± 29% -52.0% 89.43 ± 76% interrupts.CPU65.RES:Rescheduling_interrupts
4913 ± 42% -73.8% 1286 ± 51% interrupts.CPU67.CAL:Function_call_interrupts
3640 ± 42% -63.3% 1336 ± 37% interrupts.CPU75.CAL:Function_call_interrupts
4197 ± 41% -74.5% 1071 ± 51% interrupts.CPU76.CAL:Function_call_interrupts
274.00 ± 46% -66.9% 90.57 ± 70% interrupts.CPU76.RES:Rescheduling_interrupts
4458 ± 33% -48.2% 2307 ± 41% interrupts.CPU78.CAL:Function_call_interrupts
174.00 ± 21% -42.9% 99.43 ± 42% interrupts.CPU91.RES:Rescheduling_interrupts
2615 ± 43% -52.8% 1235 ± 41% interrupts.CPU93.CAL:Function_call_interrupts
17598 ± 4% -23.0% 13556 ± 14% softirqs.CPU0.RCU
13220 ± 6% -20.5% 10514 ± 10% softirqs.CPU1.RCU
12111 ± 13% -19.9% 9699 ± 9% softirqs.CPU16.RCU
12269 ± 4% -18.3% 10030 ± 13% softirqs.CPU2.RCU
15935 ± 10% -15.7% 13430 ± 20% softirqs.CPU24.RCU
14418 ± 14% -26.6% 10583 ± 14% softirqs.CPU25.RCU
788851 ± 32% -55.0% 354883 ± 28% softirqs.CPU27.NET_RX
13652 ± 13% -28.7% 9734 ± 5% softirqs.CPU27.RCU
13432 ± 10% -15.7% 11328 ± 4% softirqs.CPU27.SCHED
11413 ± 12% -17.5% 9415 ± 15% softirqs.CPU4.RCU
13077 ± 13% -13.7% 11288 ± 21% softirqs.CPU49.SCHED
14547 ± 12% -22.2% 11314 ± 8% softirqs.CPU50.RCU
13654 ± 3% -11.6% 12076 ± 11% softirqs.CPU52.RCU
830025 ± 36% -63.2% 305684 ± 58% softirqs.CPU56.NET_RX
13872 ± 16% -38.1% 8591 ± 10% softirqs.CPU56.RCU
13666 ± 10% -29.7% 9607 ± 17% softirqs.CPU57.RCU
13660 ± 13% -26.0% 10105 ± 13% softirqs.CPU60.RCU
13861 ± 11% -12.2% 12172 ± 10% softirqs.CPU60.SCHED
12378 ± 9% -25.7% 9199 ± 13% softirqs.CPU61.RCU
13237 ± 12% -17.8% 10879 ± 10% softirqs.CPU63.RCU
14837 ± 20% -26.4% 10923 ± 12% softirqs.CPU65.RCU
13078 ± 10% -20.0% 10462 ± 9% softirqs.CPU68.RCU
1014501 ± 29% -51.1% 496322 ± 35% softirqs.CPU76.NET_RX
14929 ± 14% -33.3% 9964 ± 8% softirqs.CPU76.RCU
14824 ± 9% -20.2% 11834 ± 9% softirqs.CPU76.SCHED
14956 ± 15% -29.7% 10517 ± 16% softirqs.CPU79.RCU
14781 ± 18% -30.9% 10210 ± 20% softirqs.CPU80.RCU
665014 ± 50% -46.2% 357487 ± 55% softirqs.CPU83.NET_RX
12417 ± 14% -25.5% 9246 ± 9% softirqs.CPU83.RCU
12715 ± 10% -19.8% 10191 ± 10% softirqs.CPU86.RCU
54301315 -10.1% 48817727 ± 3% softirqs.NET_RX
1165787 ± 2% -14.0% 1002845 ± 2% softirqs.RCU
21.24 -3.7% 20.44 ± 2% perf-stat.i.MPKI
7.91e+09 -18.5% 6.447e+09 ± 5% perf-stat.i.branch-instructions
1.218e+08 -17.6% 1.003e+08 ± 5% perf-stat.i.branch-misses
0.96 ± 31% +24.3 25.26 ± 28% perf-stat.i.cache-miss-rate%
4298146 ± 3% +3705.0% 1.635e+08 ± 22% perf-stat.i.cache-misses
8.4e+08 -20.9% 6.648e+08 ± 7% perf-stat.i.cache-references
1262713 -9.0% 1149476 ± 4% perf-stat.i.context-switches
1.30 +20.3% 1.57 ± 5% perf-stat.i.cpi
133.25 ± 2% -5.4% 126.01 ± 2% perf-stat.i.cpu-migrations
11560 ± 3% -96.1% 449.24 ± 21% perf-stat.i.cycles-between-cache-misses
1.12e+10 -18.1% 9.179e+09 ± 5% perf-stat.i.dTLB-loads
41181 ± 15% -31.8% 28074 ± 15% perf-stat.i.dTLB-store-misses
6.345e+09 -18.1% 5.195e+09 ± 5% perf-stat.i.dTLB-stores
94303727 -16.4% 78848232 ± 5% perf-stat.i.iTLB-load-misses
3.898e+10 -18.5% 3.178e+10 ± 5% perf-stat.i.instructions
0.78 -17.1% 0.64 ± 5% perf-stat.i.ipc
0.85 ± 8% -23.3% 0.65 ± 4% perf-stat.i.metric.K/sec
274.05 -18.0% 224.70 ± 5% perf-stat.i.metric.M/sec
81.79 +15.2 96.96 perf-stat.i.node-load-miss-rate%
874824 ± 3% +7178.9% 63677834 ± 23% perf-stat.i.node-load-misses
192930 ± 4% +534.0% 1223150 ± 21% perf-stat.i.node-loads
87.79 ± 2% +9.5 97.27 perf-stat.i.node-store-miss-rate%
657426 ± 3% +1728.3% 12019579 ± 22% perf-stat.i.node-store-misses
83513 ± 17% -51.2% 40778 ± 6% perf-stat.i.node-stores
21.55 -3.0% 20.90 perf-stat.overall.MPKI
1.54 +0.0 1.56 perf-stat.overall.branch-miss-rate%
0.51 ± 3% +24.6 25.11 ± 28% perf-stat.overall.cache-miss-rate%
1.28 +22.6% 1.57 ± 5% perf-stat.overall.cpi
11588 ± 3% -97.2% 322.82 ± 26% perf-stat.overall.cycles-between-cache-misses
0.78 -18.2% 0.64 ± 5% perf-stat.overall.ipc
81.93 +16.2 98.10 perf-stat.overall.node-load-miss-rate%
88.73 ± 2% +10.9 99.64 perf-stat.overall.node-store-miss-rate%
7.782e+09 -18.5% 6.343e+09 ± 5% perf-stat.ps.branch-instructions
1.198e+08 -17.6% 98698837 ± 5% perf-stat.ps.branch-misses
4230041 ± 3% +3703.2% 1.609e+08 ± 22% perf-stat.ps.cache-misses
8.265e+08 -20.9% 6.541e+08 ± 7% perf-stat.ps.cache-references
1242259 -9.0% 1130898 ± 4% perf-stat.ps.context-switches
131.16 ± 2% -5.4% 124.02 ± 2% perf-stat.ps.cpu-migrations
1.102e+10 -18.1% 9.031e+09 ± 5% perf-stat.ps.dTLB-loads
40640 ± 15% -31.9% 27660 ± 15% perf-stat.ps.dTLB-store-misses
6.243e+09 -18.1% 5.111e+09 ± 5% perf-stat.ps.dTLB-stores
92776465 -16.4% 77575353 ± 5% perf-stat.ps.iTLB-load-misses
3.835e+10 -18.5% 3.127e+10 ± 5% perf-stat.ps.instructions
860793 ± 3% +7176.9% 62639147 ± 23% perf-stat.ps.node-load-misses
189892 ± 4% +533.7% 1203290 ± 21% perf-stat.ps.node-loads
646849 ± 3% +1727.9% 11823793 ± 22% perf-stat.ps.node-store-misses
82269 ± 17% -51.1% 40208 ± 6% perf-stat.ps.node-stores
2.442e+12 -18.9% 1.98e+12 ± 5% perf-stat.total.instructions
5.42 ± 7% -1.0 4.38 ± 9% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
5.29 ± 8% -1.0 4.25 ± 9% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
2.44 ± 8% -0.8 1.64 ± 14% perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg
2.33 ± 8% -0.8 1.56 ± 15% perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked
0.66 ± 8% -0.3 0.39 ± 63% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
1.28 ± 7% -0.3 1.01 ± 10% perf-profile.calltrace.cycles-pp.sock_do_ioctl.sock_ioctl.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64
1.37 ± 8% -0.3 1.11 ± 10% perf-profile.calltrace.cycles-pp.lock_sock_nested.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
0.89 ± 7% -0.2 0.68 ± 11% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
0.81 ± 7% -0.2 0.66 ± 7% perf-profile.calltrace.cycles-pp.tcp_rcv_space_adjust.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
0.61 ± 8% +0.2 0.82 ± 12% perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
0.45 ± 44% +0.3 0.71 ± 13% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_recvmsg.inet_recvmsg
0.34 ± 70% +0.3 0.65 ± 12% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock.tcp_recvmsg
1.77 ± 7% -0.4 1.40 ± 9% perf-profile.children.cycles-pp.sock_ioctl
1.74 ± 5% -0.3 1.39 ± 9% perf-profile.children.cycles-pp.__dev_queue_xmit
1.15 ± 7% -0.3 0.85 ± 10% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.94 ± 8% -0.2 0.71 ± 10% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.93 ± 4% -0.2 0.76 ± 9% perf-profile.children.cycles-pp.dev_hard_start_xmit
0.61 ± 9% -0.2 0.45 ± 20% perf-profile.children.cycles-pp.update_rq_clock
0.71 ± 8% -0.2 0.55 ± 11% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.64 ± 5% -0.1 0.50 ± 8% perf-profile.children.cycles-pp.__entry_text_start
0.66 ± 5% -0.1 0.52 ± 14% perf-profile.children.cycles-pp.aa_sk_perm
0.84 ± 5% -0.1 0.71 ± 9% perf-profile.children.cycles-pp.loopback_xmit
0.57 ± 7% -0.1 0.45 ± 10% perf-profile.children.cycles-pp.tcp_mstamp_refresh
0.45 ± 6% -0.1 0.33 ± 13% perf-profile.children.cycles-pp.security_socket_sendmsg
0.45 ± 8% -0.1 0.34 ± 12% perf-profile.children.cycles-pp.sockfd_lookup_light
0.35 ± 12% -0.1 0.24 ± 11% perf-profile.children.cycles-pp.__virt_addr_valid
0.55 ± 8% -0.1 0.45 ± 11% perf-profile.children.cycles-pp.___might_sleep
0.40 ± 8% -0.1 0.30 ± 11% perf-profile.children.cycles-pp.__fput
0.41 ± 8% -0.1 0.31 ± 11% perf-profile.children.cycles-pp.task_work_run
0.41 ± 7% -0.1 0.34 ± 12% perf-profile.children.cycles-pp.__might_fault
0.26 ± 8% -0.1 0.19 ± 8% perf-profile.children.cycles-pp.sock_close
0.24 ± 9% -0.1 0.18 ± 9% perf-profile.children.cycles-pp.inet_release
0.25 ± 8% -0.1 0.19 ± 8% perf-profile.children.cycles-pp.__sock_release
0.20 ± 7% -0.1 0.14 ± 12% perf-profile.children.cycles-pp.__tcp_close
0.22 ± 8% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.tcp_close
0.19 ± 11% -0.1 0.14 ± 18% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.28 ± 7% -0.1 0.22 ± 13% perf-profile.children.cycles-pp.__cond_resched
0.15 ± 9% -0.1 0.09 ± 21% perf-profile.children.cycles-pp.resched_curr
0.18 ± 9% -0.1 0.13 ± 19% perf-profile.children.cycles-pp.check_preempt_curr
0.18 ± 6% -0.0 0.13 ± 8% perf-profile.children.cycles-pp.__put_user_nocheck_4
0.14 ± 10% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.__sk_mem_schedule
0.19 ± 8% -0.0 0.15 ± 10% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.16 ± 9% -0.0 0.12 ± 19% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 ± 9% -0.0 0.09 ± 19% perf-profile.children.cycles-pp.__sk_mem_raise_allocated
0.18 ± 8% -0.0 0.14 ± 11% perf-profile.children.cycles-pp.do_tcp_setsockopt
0.21 ± 7% -0.0 0.17 ± 10% perf-profile.children.cycles-pp.tcp_rcv_synsent_state_process
0.11 ± 4% -0.0 0.08 ± 13% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.14 ± 9% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.open_related_ns
0.13 ± 6% -0.0 0.10 ± 16% perf-profile.children.cycles-pp.__sys_accept4
0.12 ± 5% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.tcp_fin
0.07 ± 7% -0.0 0.04 ± 64% perf-profile.children.cycles-pp.__x64_sys_accept4
0.13 ± 9% -0.0 0.10 ± 14% perf-profile.children.cycles-pp.import_single_range
0.09 ± 9% -0.0 0.07 ± 13% perf-profile.children.cycles-pp.__ip_finish_output
0.08 -0.0 0.06 ± 13% perf-profile.children.cycles-pp.__sock_wfree
0.07 ± 6% -0.0 0.05 ± 42% perf-profile.children.cycles-pp.ksys_read
0.07 ± 6% -0.0 0.05 ± 41% perf-profile.children.cycles-pp.vfs_read
0.09 ± 4% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.tcp_update_recv_tstamps
0.09 ± 14% +0.0 0.13 ± 12% perf-profile.children.cycles-pp.perf_tp_event
0.07 ± 11% +0.0 0.11 ± 13% perf-profile.children.cycles-pp.ip_copy_addrs
0.15 ± 9% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.nr_iowait_cpu
0.08 ± 12% +0.1 0.17 ± 13% perf-profile.children.cycles-pp.available_idle_cpu
0.00 +0.2 0.22 ± 38% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.2 0.23 ± 44% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.4 0.37 ± 46% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
1.15 ± 7% -0.3 0.85 ± 10% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.47 ± 9% -0.2 0.32 ± 19% perf-profile.self.cycles-pp.update_rq_clock
0.62 ± 5% -0.2 0.47 ± 12% perf-profile.self.cycles-pp.__dev_queue_xmit
0.64 ± 5% -0.2 0.49 ± 8% perf-profile.self.cycles-pp.__entry_text_start
0.50 ± 8% -0.1 0.39 ± 11% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.33 ± 12% -0.1 0.22 ± 12% perf-profile.self.cycles-pp.__virt_addr_valid
0.45 ± 4% -0.1 0.35 ± 14% perf-profile.self.cycles-pp.aa_sk_perm
0.33 ± 5% -0.1 0.26 ± 14% perf-profile.self.cycles-pp.set_next_entity
0.09 ± 8% -0.1 0.04 ± 63% perf-profile.self.cycles-pp.dev_hard_start_xmit
0.15 ± 9% -0.1 0.09 ± 21% perf-profile.self.cycles-pp.resched_curr
0.27 ± 7% -0.1 0.22 ± 12% perf-profile.self.cycles-pp.__sys_sendto
0.23 ± 7% -0.1 0.18 ± 9% perf-profile.self.cycles-pp.sock_ioctl
0.21 ± 6% -0.1 0.16 ± 10% perf-profile.self.cycles-pp._copy_to_iter
0.18 ± 5% -0.0 0.13 ± 8% perf-profile.self.cycles-pp.__put_user_nocheck_4
0.15 ± 10% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.19 ± 11% -0.0 0.14 ± 14% perf-profile.self.cycles-pp.inet_ioctl
0.16 ± 9% -0.0 0.12 ± 20% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.19 ± 9% -0.0 0.15 ± 12% perf-profile.self.cycles-pp.__sys_recvfrom
0.14 ± 7% -0.0 0.09 ± 12% perf-profile.self.cycles-pp.lock_sock_nested
0.13 ± 9% -0.0 0.09 ± 16% perf-profile.self.cycles-pp.__sk_mem_raise_allocated
0.19 ± 5% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.process_backlog
0.07 ± 8% -0.0 0.04 ± 63% perf-profile.self.cycles-pp.tcp_mstamp_refresh
0.12 ± 7% -0.0 0.09 ± 15% perf-profile.self.cycles-pp.sockfd_lookup_light
0.10 ± 4% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.08 -0.0 0.05 ± 6% perf-profile.self.cycles-pp.__sock_wfree
0.14 ± 6% -0.0 0.11 ± 10% perf-profile.self.cycles-pp.__cond_resched
0.08 ± 10% -0.0 0.06 ± 10% perf-profile.self.cycles-pp.sock_do_ioctl
0.08 ± 16% +0.0 0.12 ± 13% perf-profile.self.cycles-pp.perf_tp_event
0.07 ± 11% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.ip_copy_addrs
0.15 ± 9% +0.1 0.20 ± 10% perf-profile.self.cycles-pp.nr_iowait_cpu
0.03 ± 99% +0.1 0.09 ± 26% perf-profile.self.cycles-pp.__wake_up_common
0.00 +0.1 0.08 ± 28% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.08 ± 12% +0.1 0.17 ± 14% perf-profile.self.cycles-pp.available_idle_cpu
0.13 ± 11% +0.1 0.25 ± 18% perf-profile.self.cycles-pp.tcp_sendmsg



stress-ng.sock.ops_per_sec

11500 +-------------------------------------------------------------------+
|++.+++ +.+++++. + +.+++++.+++++.++++.++++ +++.++++.+++++.++|
11000 |-+ +.++++ + + +.++ |
10500 |-+ |
| |
10000 |-+ |
9500 |-+ O |
| O O O O O O O |
9000 |-+ |
8500 |O+ O |
| O O O O O O |
8000 |-+ O O O OO OO O OO O |
7500 |-+ O O |
| O |
7000 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-csl-2sp8: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
cluster/compiler/concurrency/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/testcase/ucode:
cs-localhost/gcc-9/8000/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2sp8/apachebench/0x5003006

commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")

2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
35891 -5.6% 33871 apachebench.requests_per_second
28.96 +5.8% 30.64 apachebench.time.elapsed_time
28.96 +5.8% 30.64 apachebench.time.elapsed_time.max
23.00 +8.7% 25.00 ± 2% apachebench.time.percent_of_cpu_this_job_got
222.89 +6.0% 236.22 apachebench.time_per_request
385478 -5.6% 363787 apachebench.transfer_rate
1.67 ± 22% -0.5 1.17 mpstat.cpu.all.irq%
1.02 +0.1 1.12 mpstat.cpu.all.sys%
0.07 ± 50% -59.4% 0.03 ± 43% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.06 ± 55% -50.8% 0.03 ± 42% perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
3037 ± 76% +195.7% 8981 ± 51% softirqs.CPU15.NET_RX
822.17 ±127% +1070.4% 9622 ±144% softirqs.CPU37.NET_RX
2388 ±102% +443.8% 12988 ± 58% softirqs.CPU67.NET_RX
11797 ± 4% +15.8% 13658 ± 4% softirqs.TIMER
12572413 ± 6% -16.9% 10443087 ± 5% cpuidle.C1.time
511677 ± 8% -12.7% 446730 ± 7% cpuidle.C1.usage
1.375e+09 ± 74% +110.9% 2.899e+09 cpuidle.C1E.time
4565230 ± 26% +46.7% 6695295 cpuidle.C1E.usage
97938 ± 27% -58.4% 40694 ± 39% cpuidle.POLL.usage
606.17 ±153% -98.5% 9.29 ±148% interrupts.CPU12.TLB:TLB_shootdowns
571.00 ±143% -96.7% 18.57 ±110% interrupts.CPU29.TLB:TLB_shootdowns
530.00 ±146% -91.1% 47.43 ±184% interrupts.CPU30.TLB:TLB_shootdowns
1657 ±190% -99.6% 6.86 ± 74% interrupts.CPU5.TLB:TLB_shootdowns
57.50 ± 67% -62.0% 21.86 ± 32% interrupts.CPU95.TLB:TLB_shootdowns
3736 ± 7% -32.5% 2521 ± 10% interrupts.RES:Rescheduling_interrupts
1792 ± 6% +11.4% 1996 turbostat.Bzy_MHz
501037 ± 8% -12.7% 437439 ± 7% turbostat.C1
0.41 ± 6% -0.1 0.32 ± 5% turbostat.C1%
4561770 ± 26% +46.7% 6690622 turbostat.C1E
1857351 ± 68% -98.2% 32984 ±139% turbostat.C6
105.14 +20.6% 126.78 turbostat.PkgWatt
1.397e+09 -4.4% 1.335e+09 perf-stat.i.branch-instructions
40257529 ± 15% -20.1% 32145859 ± 2% perf-stat.i.branch-misses
9.06 ± 57% +9.2 18.23 ± 5% perf-stat.i.cache-miss-rate%
3159646 ± 13% +223.7% 10228041 ± 11% perf-stat.i.cache-misses
4.25 ± 24% -31.2% 2.93 perf-stat.i.cpi
925.45 ± 40% -80.1% 184.12 ± 3% perf-stat.i.cpu-migrations
4096 ± 10% -68.6% 1284 ± 9% perf-stat.i.cycles-between-cache-misses
0.11 ± 68% -0.1 0.01 ± 3% perf-stat.i.dTLB-store-miss-rate%
9.43e+08 -3.9% 9.059e+08 perf-stat.i.dTLB-stores
6.644e+09 -4.3% 6.357e+09 perf-stat.i.instructions
12.04 ± 5% +22.6% 14.76 ± 9% perf-stat.i.major-faults
44.87 ± 2% -5.1% 42.58 perf-stat.i.metric.M/sec
42764 -4.5% 40848 perf-stat.i.minor-faults
70.15 ± 4% +15.5 85.61 perf-stat.i.node-load-miss-rate%
330592 ± 22% +926.0% 3391986 ± 14% perf-stat.i.node-load-misses
157887 ± 4% +36.3% 215209 ± 3% perf-stat.i.node-loads
56.69 ± 10% +24.5 81.14 perf-stat.i.node-store-miss-rate%
110581 ± 16% +786.3% 980040 ± 15% perf-stat.i.node-store-misses
120278 ± 7% -19.9% 96300 ± 5% perf-stat.i.node-stores
42776 -4.5% 40862 perf-stat.i.page-faults
3.14 ± 16% +8.9 12.04 ± 11% perf-stat.overall.cache-miss-rate%
3656 ± 8% -71.4% 1045 ± 14% perf-stat.overall.cycles-between-cache-misses
67.09 ± 6% +26.8 93.93 perf-stat.overall.node-load-miss-rate%
47.64 ± 10% +43.2 90.83 perf-stat.overall.node-store-miss-rate%
1.355e+09 -4.3% 1.296e+09 perf-stat.ps.branch-instructions
39024819 ± 15% -20.0% 31208731 ± 2% perf-stat.ps.branch-misses
3059351 ± 13% +224.2% 9917945 ± 11% perf-stat.ps.cache-misses
893.60 ± 40% -80.0% 178.29 ± 3% perf-stat.ps.cpu-migrations
9.144e+08 -3.9% 8.791e+08 perf-stat.ps.dTLB-stores
6.446e+09 -4.3% 6.17e+09 perf-stat.ps.instructions
11.65 ± 5% +22.9% 14.31 ± 9% perf-stat.ps.major-faults
41450 -4.4% 39628 perf-stat.ps.minor-faults
320695 ± 22% +925.8% 3289591 ± 14% perf-stat.ps.node-load-misses
152926 ± 4% +36.5% 208688 ± 4% perf-stat.ps.node-loads
107301 ± 16% +786.0% 950672 ± 15% perf-stat.ps.node-store-misses
116716 ± 7% -19.9% 93516 ± 5% perf-stat.ps.node-stores
41461 -4.4% 39642 perf-stat.ps.page-faults
2.80 ± 9% -0.6 2.19 ± 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
2.69 ± 10% -0.6 2.12 ± 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.68 ± 9% -0.4 0.33 ± 87% perf-profile.calltrace.cycles-pp.ret_from_fork
0.68 ± 9% -0.4 0.33 ± 87% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.83 ± 9% -0.1 0.69 ± 6% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.81 ± 9% -0.1 0.67 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.49 ± 45% +0.2 0.68 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet6_recvmsg.sock_read_iter.new_sync_read
0.73 ± 10% +0.3 1.05 ± 6% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
0.73 ± 10% +0.3 1.06 ± 6% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_read_iter
1.69 ± 7% +0.3 2.02 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg
2.45 ± 6% +0.4 2.80 ± 4% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
1.98 ± 6% +0.4 2.34 ± 6% perf-profile.calltrace.cycles-pp.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
2.46 ± 6% +0.4 2.82 ± 4% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
2.00 ± 6% +0.4 2.37 ± 6% perf-profile.calltrace.cycles-pp._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.sock_write_iter
0.52 ± 45% +0.4 0.88 ± 7% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
0.49 ± 45% +0.4 0.86 ± 7% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked
0.48 ± 45% +0.4 0.85 ± 8% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
1.51 ± 9% +0.4 1.90 ± 7% perf-profile.calltrace.cycles-pp.read
1.43 ± 9% +0.4 1.82 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
1.40 ± 10% +0.4 1.79 ± 7% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.41 ± 9% +0.4 1.80 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.00 ± 10% +0.4 1.40 ± 6% perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read
1.34 ± 10% +0.4 1.74 ± 8% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.11 ± 10% +0.4 1.51 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read
1.14 ± 10% +0.4 1.54 ± 7% perf-profile.calltrace.cycles-pp.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
3.02 ± 7% +0.4 3.46 ± 4% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
2.95 ± 7% +0.4 3.40 ± 4% perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
3.00 ± 7% +0.4 3.44 ± 4% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
0.20 ±141% +0.4 0.65 ± 12% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
3.34 ± 6% +0.4 3.79 ± 4% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
3.35 ± 6% +0.4 3.80 ± 4% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
3.03 ± 7% +0.4 3.48 ± 4% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll
3.70 ± 6% +0.5 4.15 ± 4% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
3.66 ± 6% +0.5 4.12 ± 4% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit
3.13 ± 6% +0.5 3.59 ± 4% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
3.23 ± 6% +0.5 3.69 ± 4% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
2.20 ± 7% +0.5 2.70 ± 6% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.13 ± 7% +0.5 2.63 ± 5% perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.79 ± 6% -0.2 1.55 ± 5% perf-profile.children.cycles-pp.__schedule
0.76 ± 10% -0.2 0.53 ± 15% perf-profile.children.cycles-pp.console_unlock
0.76 ± 10% -0.2 0.54 ± 14% perf-profile.children.cycles-pp.vprintk_emit
0.64 ± 9% -0.2 0.44 ± 14% perf-profile.children.cycles-pp.serial8250_console_write
0.61 ± 9% -0.2 0.41 ± 15% perf-profile.children.cycles-pp.uart_console_write
0.59 ± 9% -0.2 0.40 ± 14% perf-profile.children.cycles-pp.wait_for_xmitr
0.57 ± 10% -0.2 0.38 ± 15% perf-profile.children.cycles-pp.serial8250_console_putchar
0.50 ± 11% -0.2 0.34 ± 13% perf-profile.children.cycles-pp.io_serial_in
0.53 ± 16% -0.2 0.36 ± 23% perf-profile.children.cycles-pp.devkmsg_write.cold
0.53 ± 16% -0.2 0.36 ± 23% perf-profile.children.cycles-pp.devkmsg_emit
0.68 ± 9% -0.2 0.52 ± 14% perf-profile.children.cycles-pp.ret_from_fork
0.68 ± 9% -0.2 0.52 ± 14% perf-profile.children.cycles-pp.kthread
0.61 ± 11% -0.2 0.46 ± 17% perf-profile.children.cycles-pp.worker_thread
0.61 ± 11% -0.2 0.46 ± 18% perf-profile.children.cycles-pp.process_one_work
0.60 ± 11% -0.2 0.45 ± 19% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.60 ± 11% -0.1 0.45 ± 19% perf-profile.children.cycles-pp.memcpy_toio
0.84 ± 8% -0.1 0.70 ± 6% perf-profile.children.cycles-pp.schedule_idle
1.25 ± 6% -0.1 1.12 ± 5% perf-profile.children.cycles-pp.schedule_hrtimeout_range_clock
0.18 ± 31% -0.1 0.09 ± 9% perf-profile.children.cycles-pp.rcu_idle_exit
0.59 ± 5% -0.1 0.52 ± 5% perf-profile.children.cycles-pp.unmap_page_range
0.35 ± 5% -0.1 0.28 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
0.41 ± 5% -0.1 0.34 ± 7% perf-profile.children.cycles-pp.dequeue_task_fair
0.21 ± 11% -0.1 0.15 ± 13% perf-profile.children.cycles-pp.set_next_entity
0.65 ± 2% -0.1 0.58 ± 5% perf-profile.children.cycles-pp.unmap_vmas
0.13 ± 29% -0.1 0.07 ± 11% perf-profile.children.cycles-pp.rcu_eqs_exit
0.30 ± 7% -0.1 0.25 ± 6% perf-profile.children.cycles-pp.asm_sysvec_call_function
0.21 ± 14% -0.1 0.16 ± 8% perf-profile.children.cycles-pp.__libc_fork
0.20 ± 16% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.__do_sys_clone
0.20 ± 16% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.kernel_clone
0.18 ± 17% -0.1 0.13 ± 13% perf-profile.children.cycles-pp.dup_mmap
0.35 ± 9% -0.1 0.29 ± 8% perf-profile.children.cycles-pp.pick_next_task_fair
0.18 ± 16% -0.1 0.13 ± 14% perf-profile.children.cycles-pp.dup_mm
0.19 ± 15% -0.1 0.14 ± 9% perf-profile.children.cycles-pp.copy_process
0.12 ± 21% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.34 ± 7% -0.0 0.29 ± 7% perf-profile.children.cycles-pp.dequeue_entity
0.11 ± 20% -0.0 0.07 ± 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.16 ± 5% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.__sysvec_call_function
0.23 ± 4% -0.0 0.19 ± 5% perf-profile.children.cycles-pp.sysvec_call_function
0.10 ± 12% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.sk_filter_trim_cap
0.08 ± 27% -0.0 0.06 ± 13% perf-profile.children.cycles-pp._find_next_bit
0.07 ± 19% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.skb_entail
0.16 ± 5% +0.0 0.19 ± 12% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.19 ± 9% +0.0 0.23 ± 7% perf-profile.children.cycles-pp.common_file_perm
0.07 ± 18% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.sk_page_frag_refill
0.07 ± 14% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.skb_page_frag_refill
0.10 ± 14% +0.1 0.15 ± 16% perf-profile.children.cycles-pp.__skb_clone
0.21 ± 13% +0.1 0.26 ± 7% perf-profile.children.cycles-pp.__inet_lookup_established
0.02 ±142% +0.1 0.07 ± 26% perf-profile.children.cycles-pp.lockref_get_not_dead
0.02 ±141% +0.1 0.07 ± 9% perf-profile.children.cycles-pp.tcp_ack_update_rtt
0.03 ±102% +0.1 0.09 ± 20% perf-profile.children.cycles-pp.cpuacct_charge
0.12 ± 18% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.kfree
0.13 ± 12% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.__slab_free
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.llist_reverse_order
0.37 ± 8% +0.1 0.45 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
0.01 ±223% +0.1 0.09 ± 16% perf-profile.children.cycles-pp.llist_add_batch
0.40 ± 8% +0.1 0.48 ± 9% perf-profile.children.cycles-pp.sk_stream_alloc_skb
0.15 ± 8% +0.1 0.23 ± 14% perf-profile.children.cycles-pp.rcu_do_batch
0.38 ± 8% +0.1 0.47 ± 8% perf-profile.children.cycles-pp.__alloc_skb
0.20 ± 6% +0.1 0.30 ± 12% perf-profile.children.cycles-pp.rcu_core
0.00 +0.1 0.12 ± 20% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.55 ± 5% +0.1 0.68 ± 9% perf-profile.children.cycles-pp.__kfree_skb
0.69 ± 4% +0.1 0.83 ± 6% perf-profile.children.cycles-pp.tcp_clean_rtx_queue
0.98 ± 5% +0.2 1.13 ± 5% perf-profile.children.cycles-pp.tcp_ack
0.77 ± 15% +0.2 0.97 ± 9% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.2 0.23 ± 18% perf-profile.children.cycles-pp.sched_ttwu_pending
1.30 ± 14% +0.3 1.59 ± 6% perf-profile.children.cycles-pp.ktime_get
2.39 ± 7% +0.3 2.69 ± 6% perf-profile.children.cycles-pp.copyin
0.69 ± 10% +0.3 1.00 ± 6% perf-profile.children.cycles-pp._copy_to_iter
0.65 ± 9% +0.3 0.96 ± 6% perf-profile.children.cycles-pp.copyout
0.00 +0.3 0.31 ± 20% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
2.40 ± 7% +0.3 2.71 ± 5% perf-profile.children.cycles-pp._copy_from_iter_full
0.98 ± 9% +0.4 1.34 ± 6% perf-profile.children.cycles-pp.__skb_datagram_iter
0.98 ± 9% +0.4 1.35 ± 6% perf-profile.children.cycles-pp.skb_copy_datagram_iter
1.54 ± 10% +0.4 1.92 ± 8% perf-profile.children.cycles-pp.read
1.14 ± 10% +0.4 1.54 ± 7% perf-profile.children.cycles-pp.inet_recvmsg
3.20 ± 7% +0.4 3.61 ± 3% perf-profile.children.cycles-pp.tcp_v4_rcv
3.25 ± 7% +0.4 3.66 ± 3% perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
3.26 ± 7% +0.4 3.68 ± 3% perf-profile.children.cycles-pp.ip_local_deliver_finish
3.28 ± 7% +0.4 3.70 ± 3% perf-profile.children.cycles-pp.ip_local_deliver
3.51 ± 7% +0.4 3.93 ± 4% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
3.39 ± 7% +0.4 3.82 ± 4% perf-profile.children.cycles-pp.ip_rcv
2.35 ± 7% +0.5 2.83 ± 6% perf-profile.children.cycles-pp.new_sync_read
2.69 ± 6% +0.5 3.17 ± 5% perf-profile.children.cycles-pp.vfs_read
2.79 ± 6% +0.5 3.27 ± 5% perf-profile.children.cycles-pp.ksys_read
2.26 ± 7% +0.5 2.75 ± 6% perf-profile.children.cycles-pp.sock_read_iter
1.98 ± 7% +0.5 2.47 ± 5% perf-profile.children.cycles-pp.tcp_recvmsg
1.67 ± 7% +0.5 2.17 ± 6% perf-profile.children.cycles-pp.tcp_recvmsg_locked
3.10 ± 7% +0.6 3.72 ± 6% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
9.78 ± 7% +1.0 10.78 ± 4% perf-profile.children.cycles-pp.sock_write_iter
9.71 ± 7% +1.0 10.72 ± 4% perf-profile.children.cycles-pp.sock_sendmsg
9.56 ± 7% +1.0 10.58 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg
9.41 ± 7% +1.0 10.43 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.50 ± 11% -0.2 0.34 ± 13% perf-profile.self.cycles-pp.io_serial_in
0.59 ± 11% -0.1 0.45 ± 19% perf-profile.self.cycles-pp.memcpy_toio
0.16 ± 15% -0.1 0.11 ± 16% perf-profile.self.cycles-pp.update_rq_clock
0.11 ± 14% -0.0 0.07 ± 15% perf-profile.self.cycles-pp.set_next_entity
0.32 ± 8% -0.0 0.28 ± 10% perf-profile.self.cycles-pp.__schedule
0.14 ± 11% -0.0 0.09 ± 12% perf-profile.self.cycles-pp.update_load_avg
0.29 ± 14% -0.0 0.25 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.17 ± 4% -0.0 0.15 ± 5% perf-profile.self.cycles-pp.apr_brigade_cleanup
0.10 ± 11% -0.0 0.08 ± 17% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.17 ± 8% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.ap_process_request_internal
0.07 ± 9% -0.0 0.05 ± 9% perf-profile.self.cycles-pp._find_next_bit
0.16 ± 6% +0.0 0.19 ± 8% perf-profile.self.cycles-pp.common_file_perm
0.07 ± 19% +0.0 0.10 ± 10% perf-profile.self.cycles-pp.sock_def_readable
0.04 ± 71% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.tcp_rcv_space_adjust
0.07 ± 11% +0.0 0.11 ± 13% perf-profile.self.cycles-pp.poll_schedule_timeout
0.16 ± 8% +0.0 0.20 ± 8% perf-profile.self.cycles-pp.__check_object_size
0.05 ± 46% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.skb_entail
0.07 ± 14% +0.0 0.11 ± 13% perf-profile.self.cycles-pp.skb_page_frag_refill
0.02 ±141% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.__alloc_skb
0.07 ± 46% +0.1 0.12 ± 13% perf-profile.self.cycles-pp.__skb_clone
0.11 ± 14% +0.1 0.17 ± 14% perf-profile.self.cycles-pp.__ksize
0.12 ± 18% +0.1 0.17 ± 8% perf-profile.self.cycles-pp.kfree
0.02 ±142% +0.1 0.07 ± 26% perf-profile.self.cycles-pp.lockref_get_not_dead
0.18 ± 17% +0.1 0.24 ± 8% perf-profile.self.cycles-pp.__inet_lookup_established
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.try_to_wake_up
0.03 ±102% +0.1 0.09 ± 20% perf-profile.self.cycles-pp.cpuacct_charge
0.01 ±223% +0.1 0.07 ± 15% perf-profile.self.cycles-pp.tcp_ack_update_rtt
0.20 ± 7% +0.1 0.26 ± 12% perf-profile.self.cycles-pp.kmem_cache_free
0.13 ± 12% +0.1 0.20 ± 9% perf-profile.self.cycles-pp.__slab_free
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.llist_reverse_order
0.12 ± 11% +0.1 0.20 ± 14% perf-profile.self.cycles-pp.tcp_rcv_established
0.01 ±223% +0.1 0.09 ± 16% perf-profile.self.cycles-pp.llist_add_batch
0.26 ± 14% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.tcp_recvmsg_locked
0.30 ± 10% +0.1 0.41 ± 8% perf-profile.self.cycles-pp.tcp_sendmsg_locked
0.98 ± 18% +0.3 1.31 ± 8% perf-profile.self.cycles-pp.ktime_get
1.08 ± 18% +0.4 1.48 ± 9% perf-profile.self.cycles-pp.cpuidle_enter_state
1.23 ± 22% +0.5 1.77 ± 12% perf-profile.self.cycles-pp.menu_select
1.86 ± 7% +0.6 2.46 ± 7% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string



***************************************************************************************************
lkp-csl-2ap3: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap3/UDP_STREAM/netperf/0x5003006

commit:
2ea46c6fc9 ("cpumask/hotplug: Fix cpu_dying() state tracking")
5f94d1b650 ("sched/fair: don't use waker's cpu if the waker of sync wake-up is interrupt")

2ea46c6fc9452ac1 5f94d1b650d14e01f383e3d02e4
---------------- ---------------------------
%stddev %change %stddev
\ | \
62544 -52.4% 29784 ± 15% netperf.Throughput_Mbps
62544 -52.4% 29784 ± 15% netperf.Throughput_total_Mbps
998.17 ± 5% -16.2% 836.00 ± 10% netperf.time.minor_page_faults
81.17 +9.1% 88.57 netperf.time.percent_of_cpu_this_job_got
242.61 +9.5% 265.75 netperf.time.system_time
35803969 -52.4% 17050380 ± 15% netperf.workload
0.19 ± 6% -0.0 0.14 ± 5% mpstat.cpu.all.soft%
0.04 -0.0 0.04 ± 2% mpstat.cpu.all.usr%
1568380 ±162% -88.6% 178452 ± 11% numa-numastat.node3.local_node
1640558 ±153% -84.3% 257114 numa-numastat.node3.numa_hit
1572364 ± 2% -14.6% 1342693 vmstat.memory.cache
237206 -51.8% 114337 ± 15% vmstat.system.cs
8917350 ± 19% +840.2% 83837786 ± 13% cpuidle.C1.time
1687718 ± 18% +734.3% 14080429 ± 11% cpuidle.C1.usage
72623238 -89.0% 8015009 ±119% cpuidle.POLL.time
33405975 -90.8% 3070283 ±139% cpuidle.POLL.usage
427986 ± 7% -49.9% 214579 ± 2% numa-meminfo.node3.Active
427986 ± 7% -49.9% 214579 ± 2% numa-meminfo.node3.Active(anon)
702896 ± 5% -32.6% 473640 ± 3% numa-meminfo.node3.FilePages
463902 ± 7% -49.1% 235917 ± 2% numa-meminfo.node3.Shmem
106952 ± 7% -49.8% 53679 ± 2% numa-vmstat.node3.nr_active_anon
175755 ± 5% -32.6% 118442 ± 3% numa-vmstat.node3.nr_file_pages
116007 ± 7% -49.1% 59011 ± 2% numa-vmstat.node3.nr_shmem
106952 ± 7% -49.8% 53679 ± 2% numa-vmstat.node3.nr_zone_active_anon
433337 ± 7% -49.0% 220837 ± 2% meminfo.Active
433337 ± 7% -49.0% 220837 ± 2% meminfo.Active(anon)
1452704 ± 2% -15.7% 1223905 meminfo.Cached
841896 ± 3% -26.8% 616582 meminfo.Committed_AS
45250 -9.6% 40886 ± 2% meminfo.Mapped
483539 ± 7% -47.3% 254634 ± 2% meminfo.Shmem
8213 ± 16% -27.5% 5953 ± 10% softirqs.CPU12.RCU
6993 ± 23% -30.1% 4890 ± 7% softirqs.CPU124.RCU
8490 ± 15% -33.8% 5623 ± 15% softirqs.CPU190.RCU
8545 ± 14% -29.9% 5987 ± 12% softirqs.CPU191.RCU
8207 ± 14% -31.6% 5614 ± 11% softirqs.CPU45.RCU
8880 ± 16% -32.9% 5962 ± 12% softirqs.CPU51.RCU
8895 ± 11% -34.1% 5859 ± 14% softirqs.CPU83.RCU
8708 ± 20% -35.2% 5639 ± 12% softirqs.CPU95.RCU
17915040 -52.4% 8533057 ± 15% softirqs.NET_RX
1447713 ± 7% -26.6% 1062814 ± 5% softirqs.RCU
107830 ± 7% -48.8% 55213 ± 2% proc-vmstat.nr_active_anon
362869 ± 2% -15.7% 305817 proc-vmstat.nr_file_pages
79042 -5.3% 74879 proc-vmstat.nr_inactive_anon
11592 -11.7% 10240 ± 2% proc-vmstat.nr_mapped
120578 ± 7% -47.3% 63500 ± 2% proc-vmstat.nr_shmem
107830 ± 7% -48.8% 55213 ± 2% proc-vmstat.nr_zone_active_anon
79042 -5.3% 74879 proc-vmstat.nr_zone_inactive_anon
37297024 -50.6% 18413448 ± 14% proc-vmstat.numa_hit
37037152 -51.0% 18153607 ± 14% proc-vmstat.numa_local
291363 ± 7% -50.3% 144772 ± 2% proc-vmstat.pgactivate
1.147e+09 -52.3% 5.468e+08 ± 15% proc-vmstat.pgalloc_normal
1.147e+09 -52.3% 5.468e+08 ± 15% proc-vmstat.pgfree
0.02 ± 27% -71.7% 0.01 ± 6% perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.02 ± 21% -72.3% 0.00 ± 14% perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.01 ± 24% -54.2% 0.00 ± 13% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ± 26% -53.5% 0.01 ± 37% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.02 ± 47% -63.7% 0.01 ± 23% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.00 ±223% +2300.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
0.01 ± 11% -31.2% 0.01 ± 11% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.03 ± 44% -70.5% 0.01 ± 13% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.03 ± 27% -67.1% 0.01 ± 53% perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.04 ± 89% -75.9% 0.01 ± 10% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.10 ± 57% -59.0% 0.04 ±104% perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.04 ± 62% -74.1% 0.01 ± 35% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.03 ± 52% -55.0% 0.02 ± 33% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.04 ± 65% +9892.1% 4.00 ± 93% perf-sched.sch_delay.max.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
0.00 ±223% +2300.0% 0.00 perf-sched.total_sch_delay.average.ms
2.53 ± 8% +97.1% 5.00 ± 2% perf-sched.total_wait_and_delay.average.ms
950222 ± 6% -50.3% 472509 perf-sched.total_wait_and_delay.count.ms
2.53 ± 8% +97.0% 4.99 ± 2% perf-sched.total_wait_time.average.ms
4.28 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
32.30 ± 8% +96.7% 63.54 ± 3% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.00 ± 17% +394.5% 0.01 ± 4% perf-sched.wait_and_delay.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
2112 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
556.83 ± 6% -47.8% 290.43 ± 2% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
938377 ± 6% -50.9% 460959 perf-sched.wait_and_delay.count.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
0.07 ± 16% +5743.6% 4.10 ± 90% perf-sched.wait_and_delay.max.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
129.86 ± 73% -63.7% 47.15 ±135% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.75 ± 2% -9.7% 0.68 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
32.30 ± 8% +96.7% 63.54 ± 3% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.00 ± 17% +209.9% 0.01 ± 6% perf-sched.wait_time.avg.ms.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
8.343e+08 -32.4% 5.641e+08 ± 7% perf-stat.i.branch-instructions
1.84 ± 24% +58.5 60.31 ± 27% perf-stat.i.cache-miss-rate%
4879389 ± 5% +1618.4% 83848142 ± 13% perf-stat.i.cache-misses
2.849e+08 ± 15% -44.8% 1.573e+08 ± 25% perf-stat.i.cache-references
238858 -51.8% 115114 ± 15% perf-stat.i.context-switches
3.79 ± 3% +67.9% 6.37 ± 5% perf-stat.i.cpi
1.499e+10 ± 3% +10.2% 1.652e+10 ± 2% perf-stat.i.cpu-cycles
3190 ± 8% -86.5% 430.75 ± 96% perf-stat.i.cycles-between-cache-misses
1.106e+09 -31.9% 7.533e+08 ± 7% perf-stat.i.dTLB-loads
6.313e+08 -33.1% 4.222e+08 ± 8% perf-stat.i.dTLB-stores
67.75 ± 6% -13.5 54.27 ± 7% perf-stat.i.iTLB-load-miss-rate%
6844433 -34.7% 4466144 ± 7% perf-stat.i.iTLB-load-misses
4.07e+09 -32.3% 2.757e+09 ± 7% perf-stat.i.instructions
0.27 ± 3% -38.6% 0.17 ± 8% perf-stat.i.ipc
0.08 ± 3% +10.2% 0.09 ± 2% perf-stat.i.metric.GHz
14.90 -31.8% 10.16 ± 7% perf-stat.i.metric.M/sec
88.72 ± 3% +9.7 98.42 ± 2% perf-stat.i.node-load-miss-rate%
390842 ± 54% +7043.8% 27921170 ± 13% perf-stat.i.node-load-misses
53149 ± 20% +140.2% 127659 ± 20% perf-stat.i.node-loads
84.06 ± 10% +13.9 97.91 ± 3% perf-stat.i.node-store-miss-rate%
108685 ± 5% +20162.1% 22022000 ± 13% perf-stat.i.node-store-misses
1.78 ± 23% +55.8 57.59 ± 30% perf-stat.overall.cache-miss-rate%
3.68 ± 3% +63.4% 6.02 ± 6% perf-stat.overall.cpi
3086 ± 8% -93.5% 201.38 ± 16% perf-stat.overall.cycles-between-cache-misses
67.82 ± 6% -13.3 54.47 ± 8% perf-stat.overall.iTLB-load-miss-rate%
0.27 ± 3% -38.6% 0.17 ± 7% perf-stat.overall.ipc
86.32 ± 4% +13.2 99.55 perf-stat.overall.node-load-miss-rate%
79.67 ± 11% +20.2 99.91 perf-stat.overall.node-store-miss-rate%
34248 +43.8% 49254 ± 8% perf-stat.overall.path-length
8.317e+08 -32.4% 5.625e+08 ± 7% perf-stat.ps.branch-instructions
4863939 ± 5% +1617.8% 83552434 ± 13% perf-stat.ps.cache-misses
2.839e+08 ± 15% -44.8% 1.568e+08 ± 25% perf-stat.ps.cache-references
238042 -51.8% 114736 ± 15% perf-stat.ps.context-switches
1.494e+10 ± 3% +10.2% 1.646e+10 ± 2% perf-stat.ps.cpu-cycles
1.102e+09 -31.9% 7.51e+08 ± 7% perf-stat.ps.dTLB-loads
6.293e+08 -33.1% 4.209e+08 ± 8% perf-stat.ps.dTLB-stores
6821496 -34.7% 4451472 ± 7% perf-stat.ps.iTLB-load-misses
4.057e+09 -32.3% 2.749e+09 ± 7% perf-stat.ps.instructions
389775 ± 54% +7038.0% 27822357 ± 13% perf-stat.ps.node-load-misses
53006 ± 20% +140.1% 127264 ± 20% perf-stat.ps.node-loads
108429 ± 5% +20137.9% 21943845 ± 13% perf-stat.ps.node-store-misses
1.226e+12 -32.3% 8.298e+11 ± 7% perf-stat.total.instructions
838643 ± 8% -67.5% 272512 ± 11% interrupts.CAL:Function_call_interrupts
67.33 ± 32% -60.3% 26.71 ± 36% interrupts.CPU0.RES:Rescheduling_interrupts
108.33 ± 36% +93.3% 209.43 ± 15% interrupts.CPU1.NMI:Non-maskable_interrupts
108.33 ± 36% +93.3% 209.43 ± 15% interrupts.CPU1.PMI:Performance_monitoring_interrupts
114.33 ± 46% +84.8% 211.29 ± 15% interrupts.CPU10.NMI:Non-maskable_interrupts
114.33 ± 46% +84.8% 211.29 ± 15% interrupts.CPU10.PMI:Performance_monitoring_interrupts
102.00 ± 18% +200.4% 306.43 ± 83% interrupts.CPU100.NMI:Non-maskable_interrupts
102.00 ± 18% +200.4% 306.43 ± 83% interrupts.CPU100.PMI:Performance_monitoring_interrupts
117.50 ± 22% +648.2% 879.14 ±122% interrupts.CPU101.NMI:Non-maskable_interrupts
117.50 ± 22% +648.2% 879.14 ±122% interrupts.CPU101.PMI:Performance_monitoring_interrupts
117.00 ± 24% +67.2% 195.57 ± 20% interrupts.CPU103.NMI:Non-maskable_interrupts
117.00 ± 24% +67.2% 195.57 ± 20% interrupts.CPU103.PMI:Performance_monitoring_interrupts
98.83 ± 32% +799.3% 888.86 ±184% interrupts.CPU104.NMI:Non-maskable_interrupts
98.83 ± 32% +799.3% 888.86 ±184% interrupts.CPU104.PMI:Performance_monitoring_interrupts
111.50 ± 23% +87.7% 209.29 ± 16% interrupts.CPU105.NMI:Non-maskable_interrupts
111.50 ± 23% +87.7% 209.29 ± 16% interrupts.CPU105.PMI:Performance_monitoring_interrupts
104.83 ± 34% +565.5% 697.71 ±133% interrupts.CPU112.NMI:Non-maskable_interrupts
104.83 ± 34% +565.5% 697.71 ±133% interrupts.CPU112.PMI:Performance_monitoring_interrupts
104.50 ± 36% +64.6% 172.00 ± 24% interrupts.CPU129.NMI:Non-maskable_interrupts
104.50 ± 36% +64.6% 172.00 ± 24% interrupts.CPU129.PMI:Performance_monitoring_interrupts
113.17 ± 25% +142.8% 274.71 ± 91% interrupts.CPU142.NMI:Non-maskable_interrupts
113.17 ± 25% +142.8% 274.71 ± 91% interrupts.CPU142.PMI:Performance_monitoring_interrupts
222.33 ±214% -98.2% 4.00 ± 61% interrupts.CPU145.RES:Rescheduling_interrupts
119.33 ± 32% +1189.9% 1539 ±187% interrupts.CPU16.NMI:Non-maskable_interrupts
119.33 ± 32% +1189.9% 1539 ±187% interrupts.CPU16.PMI:Performance_monitoring_interrupts
624.33 ±171% -82.9% 106.71 ± 24% interrupts.CPU163.NMI:Non-maskable_interrupts
624.33 ±171% -82.9% 106.71 ± 24% interrupts.CPU163.PMI:Performance_monitoring_interrupts
128.50 ± 18% +40.0% 179.86 ± 24% interrupts.CPU2.NMI:Non-maskable_interrupts
128.50 ± 18% +40.0% 179.86 ± 24% interrupts.CPU2.PMI:Performance_monitoring_interrupts
388.33 ±210% -97.8% 8.71 ± 62% interrupts.CPU3.RES:Rescheduling_interrupts
94.50 ± 37% +102.0% 190.86 ± 24% interrupts.CPU4.NMI:Non-maskable_interrupts
94.50 ± 37% +102.0% 190.86 ± 24% interrupts.CPU4.PMI:Performance_monitoring_interrupts
105.00 ± 32% +1495.6% 1675 ±173% interrupts.CPU5.NMI:Non-maskable_interrupts
105.00 ± 32% +1495.6% 1675 ±173% interrupts.CPU5.PMI:Performance_monitoring_interrupts
119.83 ± 35% +60.6% 192.43 ± 21% interrupts.CPU6.NMI:Non-maskable_interrupts
119.83 ± 35% +60.6% 192.43 ± 21% interrupts.CPU6.PMI:Performance_monitoring_interrupts
95.00 ± 41% +968.4% 1015 ±200% interrupts.CPU8.NMI:Non-maskable_interrupts
95.00 ± 41% +968.4% 1015 ±200% interrupts.CPU8.PMI:Performance_monitoring_interrupts
97.50 ± 48% +96.9% 192.00 ± 22% interrupts.CPU9.NMI:Non-maskable_interrupts
97.50 ± 48% +96.9% 192.00 ± 22% interrupts.CPU9.PMI:Performance_monitoring_interrupts
123.17 ± 29% +71.1% 210.71 ± 12% interrupts.CPU96.NMI:Non-maskable_interrupts
123.17 ± 29% +71.1% 210.71 ± 12% interrupts.CPU96.PMI:Performance_monitoring_interrupts
102.17 ± 17% +111.0% 215.57 ± 16% interrupts.CPU98.NMI:Non-maskable_interrupts
102.17 ± 17% +111.0% 215.57 ± 16% interrupts.CPU98.PMI:Performance_monitoring_interrupts
105.67 ± 24% +285.6% 407.43 ±118% interrupts.CPU99.NMI:Non-maskable_interrupts
105.67 ± 24% +285.6% 407.43 ±118% interrupts.CPU99.PMI:Performance_monitoring_interrupts
576.33 ± 7% -50.7% 284.14 ± 2% interrupts.IWI:IRQ_work_interrupts
30.77 -5.4 25.32 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
30.57 -5.3 25.23 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.27 ± 3% -2.8 8.43 ± 21% perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.24 ± 3% -2.8 8.42 ± 21% perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.14 -2.4 16.71 ± 7% perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.10 -2.4 16.70 ± 7% perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.14 ± 11% -2.3 1.81 ± 36% perf-profile.calltrace.cycles-pp.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
4.06 ± 11% -2.3 1.79 ± 36% perf-profile.calltrace.cycles-pp.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg.__sys_sendto
3.99 ± 11% -2.2 1.76 ± 36% perf-profile.calltrace.cycles-pp.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg.sock_sendmsg
3.85 ± 12% -2.1 1.70 ± 36% perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb.udp_sendmsg
3.46 ± 13% -1.9 1.55 ± 36% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb.udp_send_skb
3.42 ± 13% -1.9 1.54 ± 36% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_send_skb
3.38 ± 13% -1.9 1.52 ± 36% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
3.23 ± 13% -1.8 1.48 ± 36% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip_finish_output2
3.14 ± 13% -1.7 1.44 ± 36% perf-profile.calltrace.cycles-pp.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq.__local_bh_enable_ip
3.11 ± 13% -1.7 1.44 ± 36% perf-profile.calltrace.cycles-pp.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start.do_softirq
3.01 ± 14% -1.6 1.39 ± 36% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action.__softirqentry_text_start
2.93 ± 14% -1.6 1.37 ± 36% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
2.87 ± 14% -1.5 1.35 ± 36% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll
2.86 ± 14% -1.5 1.34 ± 36% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
2.85 ± 14% -1.5 1.34 ± 36% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
2.82 ± 14% -1.5 1.33 ± 36% perf-profile.calltrace.cycles-pp.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
2.00 ± 18% -1.4 0.58 ± 87% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
2.68 ± 15% -1.4 1.29 ± 36% perf-profile.calltrace.cycles-pp.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
2.65 ± 15% -1.4 1.27 ± 36% perf-profile.calltrace.cycles-pp.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
2.01 ± 18% -1.4 0.65 ± 69% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb
2.26 ± 17% -1.3 0.93 ± 36% perf-profile.calltrace.cycles-pp.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv
2.06 ± 18% -1.3 0.76 ± 69% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb
2.21 ± 17% -1.3 0.92 ± 36% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb
2.40 ± 16% -1.3 1.13 ± 38% perf-profile.calltrace.cycles-pp.__udp_enqueue_schedule_skb.udp_queue_rcv_one_skb.udp_unicast_rcv_skb.__udp4_lib_rcv.ip_protocol_deliver_rcu
1.31 ± 5% -1.0 0.33 ± 87% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg
1.29 ± 5% -1.0 0.33 ± 87% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp
2.14 ± 4% -1.0 1.18 ± 19% perf-profile.calltrace.cycles-pp.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
1.35 ± 5% -0.9 0.48 ± 64% perf-profile.calltrace.cycles-pp.schedule_timeout.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg
1.71 ± 4% -0.8 0.89 ± 19% perf-profile.calltrace.cycles-pp.__skb_wait_for_more_packets.__skb_recv_udp.udp_recvmsg.inet_recvmsg.__sys_recvfrom
0.84 ± 7% -0.4 0.40 ± 88% perf-profile.calltrace.cycles-pp.__consume_stateless_skb.udp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
0.51 ± 44% +0.7 1.19 ± 36% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
16.97 ± 10% +3.7 20.65 ± 13% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
67.79 +5.8 73.59 ± 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
67.79 +5.8 73.60 ± 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
67.73 +5.8 73.55 ± 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
68.07 +5.9 74.00 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
7.05 ± 46% +6.8 13.84 ± 14% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
31.00 -5.5 25.53 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
30.80 -5.4 25.44 ± 7% perf-profile.children.cycles-pp.do_syscall_64
4.59 ± 4% -4.5 0.07 ± 21% perf-profile.children.cycles-pp.poll_idle
11.28 ± 3% -2.8 8.43 ± 21% perf-profile.children.cycles-pp.__x64_sys_recvfrom
11.24 ± 3% -2.8 8.42 ± 21% perf-profile.children.cycles-pp.__sys_recvfrom
19.14 -2.4 16.71 ± 7% perf-profile.children.cycles-pp.__x64_sys_sendto
19.11 -2.4 16.70 ± 7% perf-profile.children.cycles-pp.__sys_sendto
4.14 ± 11% -2.3 1.81 ± 36% perf-profile.children.cycles-pp.udp_send_skb
4.06 ± 11% -2.3 1.79 ± 36% perf-profile.children.cycles-pp.ip_send_skb
4.00 ± 11% -2.2 1.76 ± 36% perf-profile.children.cycles-pp.ip_output
3.85 ± 12% -2.2 1.70 ± 36% perf-profile.children.cycles-pp.ip_finish_output2
3.48 ± 13% -1.9 1.55 ± 36% perf-profile.children.cycles-pp.__local_bh_enable_ip
3.42 ± 13% -1.9 1.54 ± 36% perf-profile.children.cycles-pp.do_softirq
3.24 ± 13% -1.8 1.48 ± 36% perf-profile.children.cycles-pp.net_rx_action
2.46 ± 2% -1.7 0.73 ± 22% perf-profile.children.cycles-pp.__schedule
4.89 ± 9% -1.7 3.18 ± 19% perf-profile.children.cycles-pp.__softirqentry_text_start
3.14 ± 13% -1.7 1.45 ± 36% perf-profile.children.cycles-pp.__napi_poll
3.11 ± 13% -1.7 1.44 ± 36% perf-profile.children.cycles-pp.process_backlog
3.01 ± 14% -1.6 1.39 ± 36% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
2.93 ± 14% -1.6 1.37 ± 36% perf-profile.children.cycles-pp.ip_rcv
2.87 ± 14% -1.5 1.35 ± 36% perf-profile.children.cycles-pp.ip_local_deliver
2.86 ± 14% -1.5 1.34 ± 36% perf-profile.children.cycles-pp.ip_local_deliver_finish
2.85 ± 14% -1.5 1.34 ± 36% perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
2.82 ± 14% -1.5 1.33 ± 36% perf-profile.children.cycles-pp.__udp4_lib_rcv
2.68 ± 15% -1.4 1.29 ± 36% perf-profile.children.cycles-pp.udp_unicast_rcv_skb
2.65 ± 15% -1.4 1.27 ± 36% perf-profile.children.cycles-pp.udp_queue_rcv_one_skb
2.26 ± 17% -1.3 0.93 ± 36% perf-profile.children.cycles-pp.sock_def_readable
2.22 ± 17% -1.3 0.93 ± 36% perf-profile.children.cycles-pp.__wake_up_common_lock
2.40 ± 15% -1.3 1.13 ± 38% perf-profile.children.cycles-pp.__udp_enqueue_schedule_skb
2.02 ± 17% -1.2 0.78 ± 38% perf-profile.children.cycles-pp.autoremove_wake_function
2.02 ± 18% -1.2 0.78 ± 38% perf-profile.children.cycles-pp.try_to_wake_up
2.07 ± 18% -1.2 0.91 ± 37% perf-profile.children.cycles-pp.__wake_up_common
2.14 ± 4% -1.0 1.18 ± 19% perf-profile.children.cycles-pp.__skb_recv_udp
1.12 ± 3% -0.9 0.20 ± 36% perf-profile.children.cycles-pp.schedule_idle
1.37 ± 5% -0.8 0.54 ± 20% perf-profile.children.cycles-pp.schedule
1.71 ± 4% -0.8 0.89 ± 19% perf-profile.children.cycles-pp.__skb_wait_for_more_packets
1.36 ± 4% -0.7 0.61 ± 19% perf-profile.children.cycles-pp.schedule_timeout
0.83 ± 18% -0.6 0.19 ± 34% perf-profile.children.cycles-pp.ttwu_do_activate
0.82 ± 19% -0.6 0.18 ± 36% perf-profile.children.cycles-pp.enqueue_task_fair
0.60 ± 3% -0.5 0.07 ± 9% perf-profile.children.cycles-pp.pick_next_task_fair
0.66 ± 19% -0.5 0.17 ± 39% perf-profile.children.cycles-pp.enqueue_entity
0.80 ± 17% -0.5 0.32 ± 20% perf-profile.children.cycles-pp.update_rq_clock
0.72 ± 6% -0.4 0.32 ± 21% perf-profile.children.cycles-pp.dequeue_task_fair
0.39 ± 5% -0.3 0.09 ± 20% perf-profile.children.cycles-pp.update_load_avg
0.82 ± 17% -0.3 0.52 ± 18% perf-profile.children.cycles-pp.sched_clock_cpu
0.59 ± 5% -0.3 0.29 ± 22% perf-profile.children.cycles-pp.dequeue_entity
0.72 ± 15% -0.3 0.45 ± 16% perf-profile.children.cycles-pp.sched_clock
0.69 ± 15% -0.3 0.42 ± 17% perf-profile.children.cycles-pp.native_sched_clock
0.84 ± 7% -0.3 0.57 ± 27% perf-profile.children.cycles-pp.__consume_stateless_skb
0.29 ± 6% -0.3 0.03 ± 88% perf-profile.children.cycles-pp.kmem_cache_free
0.61 ± 7% -0.2 0.40 ± 29% perf-profile.children.cycles-pp.__free_pages_ok
0.30 ± 24% -0.2 0.10 ± 37% perf-profile.children.cycles-pp.ip_route_output_flow
0.73 ± 7% -0.2 0.53 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
0.29 ± 23% -0.2 0.10 ± 37% perf-profile.children.cycles-pp.ip_route_output_key_hash
0.30 ± 16% -0.2 0.12 ± 36% perf-profile.children.cycles-pp.__dev_queue_xmit
0.27 ± 23% -0.2 0.09 ± 37% perf-profile.children.cycles-pp.ip_route_output_key_hash_rcu
0.21 ± 21% -0.1 0.07 ± 51% perf-profile.children.cycles-pp.fib_table_lookup
0.64 ± 7% -0.1 0.50 ± 10% perf-profile.children.cycles-pp.read_tsc
0.21 ± 13% -0.1 0.07 ± 48% perf-profile.children.cycles-pp.dev_hard_start_xmit
0.19 ± 11% -0.1 0.06 ± 49% perf-profile.children.cycles-pp.__switch_to
0.19 ± 11% -0.1 0.07 ± 49% perf-profile.children.cycles-pp.loopback_xmit
0.40 ± 18% -0.1 0.28 ± 17% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.17 ± 10% -0.1 0.05 ± 42% perf-profile.children.cycles-pp.__switch_to_asm
0.19 ± 6% -0.1 0.08 ± 29% perf-profile.children.cycles-pp.move_addr_to_user
0.16 ± 7% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.13 ± 24% -0.1 0.06 ± 72% perf-profile.children.cycles-pp.__free_one_page
0.13 ± 20% -0.1 0.06 ± 46% perf-profile.children.cycles-pp.sockfd_lookup_light
0.12 ± 19% -0.1 0.05 ± 69% perf-profile.children.cycles-pp.__fget_light
0.18 ± 9% -0.1 0.11 ± 18% perf-profile.children.cycles-pp.call_cpuidle
0.18 ± 9% -0.1 0.13 ± 12% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.09 ± 11% -0.0 0.04 ± 67% perf-profile.children.cycles-pp.udp_rmem_release
0.09 ± 10% -0.0 0.05 ± 46% perf-profile.children.cycles-pp.trigger_load_balance
0.05 ± 47% +0.1 0.16 ± 21% perf-profile.children.cycles-pp.cpuacct_charge
0.00 +0.1 0.11 ± 26% perf-profile.children.cycles-pp.llist_reverse_order
0.00 +0.2 0.16 ± 13% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.00 +0.3 0.27 ± 59% perf-profile.children.cycles-pp.__smp_call_single_queue
0.00 +0.3 0.27 ± 59% perf-profile.children.cycles-pp.llist_add_batch
0.00 +0.4 0.36 ± 20% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.4 0.38 ± 42% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.5 0.55 ± 14% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
2.35 ± 25% +1.1 3.48 ± 19% perf-profile.children.cycles-pp.ktime_get
14.90 ± 4% +2.3 17.23 ± 10% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
67.79 +5.8 73.60 ± 2% perf-profile.children.cycles-pp.start_secondary
68.07 +5.9 74.00 ± 2% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
68.07 +5.9 74.00 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
68.06 +5.9 73.99 ± 2% perf-profile.children.cycles-pp.do_idle
7.12 ± 45% +6.8 13.97 ± 14% perf-profile.children.cycles-pp.menu_select
4.54 ± 4% -4.5 0.05 ± 65% perf-profile.self.cycles-pp.poll_idle
0.46 ± 20% -0.4 0.05 ± 66% perf-profile.self.cycles-pp.update_rq_clock
0.47 ± 9% -0.4 0.09 ± 18% perf-profile.self.cycles-pp.__schedule
0.65 ± 15% -0.3 0.39 ± 15% perf-profile.self.cycles-pp.native_sched_clock
0.55 ± 18% -0.2 0.32 ± 20% perf-profile.self.cycles-pp.do_idle
0.32 ± 25% -0.2 0.08 ± 30% perf-profile.self.cycles-pp.enqueue_entity
0.71 ± 7% -0.2 0.52 ± 9% perf-profile.self.cycles-pp._raw_spin_lock
0.35 ± 7% -0.2 0.18 ± 32% perf-profile.self.cycles-pp.__free_pages_ok
0.62 ± 7% -0.1 0.49 ± 10% perf-profile.self.cycles-pp.read_tsc
0.17 ± 9% -0.1 0.05 ± 70% perf-profile.self.cycles-pp.__switch_to
0.16 ± 22% -0.1 0.04 ± 89% perf-profile.self.cycles-pp.fib_table_lookup
0.16 ± 9% -0.1 0.05 ± 43% perf-profile.self.cycles-pp.__switch_to_asm
0.15 ± 8% -0.1 0.06 ± 18% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.12 ± 19% -0.1 0.05 ± 69% perf-profile.self.cycles-pp.__fget_light
0.16 ± 10% -0.1 0.11 ± 15% perf-profile.self.cycles-pp.call_cpuidle
0.12 ± 9% -0.1 0.06 ± 23% perf-profile.self.cycles-pp.__softirqentry_text_start
0.06 ± 52% +0.0 0.11 ± 30% perf-profile.self.cycles-pp.udp_queue_rcv_one_skb
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.02 ±142% +0.1 0.12 ± 30% perf-profile.self.cycles-pp.__wake_up_common
0.00 +0.1 0.11 ± 28% perf-profile.self.cycles-pp.sched_ttwu_pending
0.05 ± 47% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.cpuacct_charge
0.00 +0.1 0.11 ± 26% perf-profile.self.cycles-pp.llist_reverse_order
0.00 +0.3 0.27 ± 59% perf-profile.self.cycles-pp.llist_add_batch
1.78 ± 35% +1.3 3.05 ± 23% perf-profile.self.cycles-pp.ktime_get
5.18 ± 60% +6.1 11.25 ± 19% perf-profile.self.cycles-pp.cpuidle_enter_state
4.81 ± 62% +6.7 11.54 ± 19% perf-profile.self.cycles-pp.menu_select

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

Thanks,
Oliver Sang

