Date: 2021-05-27 02:28:40
From: Rik van Riel

Subject: Re: [LKP] [sched/fair] c722f35b51: tbench.throughput-MB/sec -29.1% regression

On Thu, 2021-05-27 at 09:27 +0800, Xing, Zhengjun wrote:
> Hi Rik,
>
> Do you have time to look at this? Thanks.

Hello,

I will try to take a look at this on Friday.

However, even if I manage to reproduce it on one of
the systems I have access to, I'm still not sure how
exactly we would root cause the issue.

Is it due to select_idle_sibling() doing a little bit
more work?

Is it because we invoke test_idle_cores() a little
earlier, widening the race window with CPUs going idle,
and causing select_idle_cpu() to do a lot more work?

Is it a locality thing where random placement on any
core in the LLC is somehow better than placement on
the same core as "prev" when there is no idle core?

Is it tbench running faster when the woken-up task is
placed on the runqueue behind the current task on the
"target" CPU, even though that CPU isn't currently
idle, because tbench happens to go to sleep fast?
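
(For context, my recollection of the shape of the change:
when there is no idle core left in the LLC,
select_idle_sibling() now consults test_idle_cores() up
front, then tries the SMT siblings of "prev" before falling
back to the full select_idle_cpu() scan. Paraphrased from
memory, not the exact hunk:

	if (sched_smt_active()) {
		has_idle_core = test_idle_cores(target, false);

		/* No idle core anywhere: try prev's SMT siblings first. */
		if (!has_idle_core && cpus_share_cache(prev, target)) {
			i = select_idle_smt(p, sd, prev);
			if ((unsigned int)i < nr_cpumask_bits)
				return i;
		}
	}

	i = select_idle_cpu(p, sd, has_idle_core, target);

so both the earlier test_idle_cores() check and the new
select_idle_smt() placement are in play.)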

In other words, I'm not quite sure whether this is
something specific to tbench (and similar benchmarks),
or a kernel thing, or what instrumentation we would
want in select_idle_sibling() / select_idle_cpu() to
root cause issues like this more easily in the
future...
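
One strawman for that instrumentation: time how long each
wakeup spends in the LLC scan. A hypothetical debug wrapper,
not in any tree, just to illustrate the idea:

	/*
	 * Hypothetical, debug only: time the LLC scan and dump it
	 * through trace_printk() for histogramming afterwards.
	 */
	static int select_idle_cpu_timed(struct task_struct *p,
					 struct sched_domain *sd,
					 bool has_idle_core, int target)
	{
		u64 t0 = sched_clock();
		int cpu = select_idle_cpu(p, sd, has_idle_core, target);

		trace_printk("sis_scan: %llu ns ret=%d idle_core=%d\n",
			     sched_clock() - t0, cpu, has_idle_core);
		return cpu;
	}

Comparing the resulting distribution across the two commits
would at least separate "the scan got more expensive" from
"the placement got worse".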

> On 5/19/2021 4:22 PM, kernel test robot wrote:
> > Greetings,
> >
> > FYI, we noticed a -29.1% regression of tbench.throughput-MB/sec due to commit:
> >
> >
> > commit: c722f35b513f807629603bbf24640b1a48be21b5 ("sched/fair: Bring back select_idle_smt(), but differently")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> >
> > in testcase: tbench
> > on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 128G memory
> > with following parameters:
> >
> > nr_threads: 100%
> > cluster: cs-localhost
> > cpufreq_governor: performance
> > ucode: 0x5003006
> >
> >
> >
> >
> > If you fix the issue, kindly add the following tag
> > Reported-by: kernel test robot <[email protected]>
> >
> >
> > Details are as below:
> > --------------------------------------------------------------------------------------------------->
> >
> >
> > To reproduce:
> >
> > git clone https://github.com/intel/lkp-tests.git
> > cd lkp-tests
> > bin/lkp install job.yaml                  # job file is attached in this email
> > bin/lkp split-job --compatible job.yaml   # generate the yaml file for lkp run
> > bin/lkp run generated-yaml-file
> >
> > =========================================================================================
> > cluster/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/ucode:
> >   cs-localhost/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp8/tbench/0x5003006
> >
> > commit:
> > 6db12ee045 ("psi: allow unprivileged users with CAP_SYS_RESOURCE to write psi files")
> > c722f35b51 ("sched/fair: Bring back select_idle_smt(), but differently")
> >
> > 6db12ee0456d0e36  c722f35b513f807629603bbf246
> > ----------------  ---------------------------
> >        %stddev      %change        %stddev
> >            \            |              \
> >      15027           -29.1%      10651 ± 5%   tbench.throughput-MB/sec
> >  1.893e+09 ± 2%      -87.5%   2.36e+08 ± 73%  tbench.time.involuntary_context_switches
> >      51693 ± 21%     +65.7%      85645 ± 7%   tbench.time.minor_page_faults
> >       5200           -45.9%       2815 ± 9%   tbench.time.percent_of_cpu_this_job_got
> >      32467           -48.9%      16595 ± 10%  tbench.time.system_time
> >       4990           -26.1%       3686 ± 6%   tbench.time.user_time
> >  1.387e+08 ± 25%    +760.7%  1.194e+09 ± 8%   tbench.time.voluntary_context_switches
> >       3954 ± 8%     +307.9%      16131 ± 8%   uptime.idle
> > 93556034 ± 4%  -24.5%  70597092 ± 5%  numa-numastat.node0.local_node
> > 93594173 ± 4%  -24.5%  70643519 ± 5%  numa-numastat.node0.numa_hit
> > 1.011e+08 ± 2%  -29.7%  71069877 ± 6%  numa-numastat.node1.local_node
> > 1.012e+08 ± 2%  -29.7%  71110181 ± 6%  numa-numastat.node1.numa_hit
> > 47900004 ± 3%  -21.4%  37635451 ± 6%  numa-vmstat.node0.numa_hit
> > 47817433 ± 3%  -21.5%  37543255 ± 6%  numa-vmstat.node0.numa_local
> > 51712415 ± 2%  -26.3%  38110581 ± 8%  numa-vmstat.node1.numa_hit
> > 51561304 ± 2%  -26.4%  37969236 ± 8%  numa-vmstat.node1.numa_local
> > 2.17 ± 31% +861.5% 20.83 ± 9% vmstat.cpu.id
> > 88.67 -19.5% 71.33 ± 2% vmstat.cpu.sy
> > 132.50 -16.2% 111.00 vmstat.procs.r
> > 197679 +6.4% 210333 vmstat.system.in
> > 1.445e+09 ± 24% +838.9% 1.357e+10 ± 10% cpuidle.C1.time
> > 1.585e+08 ± 24% +803.8% 1.433e+09 ± 8% cpuidle.C1.usage
> > 238575 ± 17% +101.0% 479497 ± 9% cpuidle.C1E.usage
> > 16003682 ± 25% +636.0% 1.178e+08 ± 7% cpuidle.POLL.time
> > 1428997 ± 25% +572.1% 9604232 ± 8% cpuidle.POLL.usage
> > 22.00 +14.7 36.65 ± 4% mpstat.cpu.all.idle%
> > 0.81 +0.1 0.90 mpstat.cpu.all.irq%
> > 15.57 +4.6 20.20 ± 2% mpstat.cpu.all.soft%
> > 54.87 -18.2 36.68 ± 5% mpstat.cpu.all.sys%
> > 6.75 -1.2 5.57 ± 3% mpstat.cpu.all.usr%
> > 1.948e+08  -27.3%  1.417e+08 ± 4%  proc-vmstat.numa_hit
> > 1.947e+08  -27.3%  1.416e+08 ± 4%  proc-vmstat.numa_local
> > 1.516e+09  -28.0%  1.092e+09 ± 5%  proc-vmstat.pgalloc_normal
> > 2867654 ± 2%  +5.3%  3018753  proc-vmstat.pgfault
> > 1.512e+09  -28.1%  1.087e+09 ± 5%  proc-vmstat.pgfree
> > 2663 -15.7% 2244 turbostat.Avg_MHz
> > 97.59 -15.8 81.80 ± 2% turbostat.Busy%
> > 1.585e+08 ± 24% +803.8% 1.433e+09 ± 8% turbostat.C1
> > 2.09 ± 24% +17.5 19.58 ± 10% turbostat.C1%
> > 237770 ± 17% +101.5% 479071 ± 9% turbostat.C1E
> > 2.38 ± 17% +663.9% 18.18 ± 10% turbostat.CPU%c1
> > 74.39 -6.2% 69.77 turbostat.RAMWatt
> > 0.01 ± 59%  +886.2%  0.10 ± 48%  perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
> > 0.00 ± 62%  +2215.4%  0.05 ± 40%  perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__release_sock.release_sock.tcp_sendmsg
> > 0.01 ± 24%  +378.9%  0.05 ± 22%  perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.sk_stream_alloc_skb
> > 0.05 ± 24%  +47.3%  0.07 ± 12%  perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
> > 2347 ± 20%  +28.2%  3010  perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
> > 1.41 ± 25%  -34.8%  0.92 ± 19%  perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
> > 790.43 ± 10%  +30.2%  1029  perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
> > 215752 ± 29%  -94.4%  12122 ± 46%  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
> > 2004 ± 12%  -37.7%  1248 ± 6%  perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
> > 0.06 ± 23%  +762.8%  0.53 ± 42%  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
> > 0.02 ± 8%  +421.8%  0.10 ± 32%  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__release_sock.release_sock.tcp_sendmsg
> > 0.03 ± 12%  +191.3%  0.09 ± 18%  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.sk_stream_alloc_skb
> > 685.22 ± 7%  +17.4%  804.65 ± 12%  perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
> > 785.50 ± 10%  +30.3%  1023  perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
> > 1.18 ± 35%  -33.6%  0.78 ± 18%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
> > 1.58 ± 15%  -34.9%  1.03 ± 14%  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
> > 51178 ± 93%  +1451.5%  794049 ± 9%  sched_debug.cfs_rq:/.MIN_vruntime.avg
> > 2407970 ± 59%  +466.9%  13650279 ± 4%  sched_debug.cfs_rq:/.MIN_vruntime.max
> > 330425 ± 71%  +821.3%  3044331 ± 6%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
> > 6186 ± 5%  -91.9%  499.13 ±165%  sched_debug.cfs_rq:/.load.min
> > 4.01 ± 7%  -39.0%  2.45 ± 15%  sched_debug.cfs_rq:/.load_avg.min
> > 51178 ± 93%  +1451.5%  794049 ± 9%  sched_debug.cfs_rq:/.max_vruntime.avg
> > 2407973 ± 59%  +466.9%  13650279 ± 4%  sched_debug.cfs_rq:/.max_vruntime.max
> > 330425 ± 71%  +821.3%  3044331 ± 6%  sched_debug.cfs_rq:/.max_vruntime.stddev
> > 23445823  -41.9%  13622977 ± 11%  sched_debug.cfs_rq:/.min_vruntime.avg
> > 25062917 ± 2%  -32.7%  16866064 ± 8%  sched_debug.cfs_rq:/.min_vruntime.max
> > 22902926 ± 2%  -42.8%  13102629 ± 11%  sched_debug.cfs_rq:/.min_vruntime.min
> > 0.92  -19.0%  0.75 ± 3%  sched_debug.cfs_rq:/.nr_running.avg
> > 0.74 ± 12%  -89.7%  0.08 ±115%  sched_debug.cfs_rq:/.nr_running.min
> > 0.07 ± 32%  +432.3%  0.38 ± 10%  sched_debug.cfs_rq:/.nr_running.stddev
> > 1096 ± 2%  -18.1%  898.07  sched_debug.cfs_rq:/.runnable_avg.avg
> > 203.00 ± 4%  +8.1%  219.36 ± 2%  sched_debug.cfs_rq:/.runnable_avg.stddev
> > 996603 ± 48%  +225.1%  3240056 ± 25%  sched_debug.cfs_rq:/.spread0.max
> > 855.14  -35.7%  549.90 ± 6%  sched_debug.cfs_rq:/.util_avg.avg
> > 1230 ± 6%  -21.1%  971.21 ± 6%  sched_debug.cfs_rq:/.util_avg.max
> > 142.21 ± 3%  -38.9%  86.89 ± 17%  sched_debug.cfs_rq:/.util_avg.stddev
> > 630.23 ± 5%  -86.6%  84.51 ± 65%  sched_debug.cfs_rq:/.util_est_enqueued.avg
> > 1148 ± 5%  -57.8%  484.29 ± 17%  sched_debug.cfs_rq:/.util_est_enqueued.max
> > 162.67 ± 21%  -92.9%  11.56 ±138%  sched_debug.cfs_rq:/.util_est_enqueued.min
> > 189.55 ± 5%  -60.4%  75.05 ± 31%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
> > 13.27 ± 7%  -56.2%  5.81 ± 16%  sched_debug.cpu.clock.stddev
> > 2627  -22.2%  2042 ± 3%  sched_debug.cpu.curr->pid.avg
> > 1954 ± 10%  -92.0%  157.14 ±137%  sched_debug.cpu.curr->pid.min
> > 963.90 ± 6%  +58.2%  1525 ± 4%  sched_debug.cpu.curr->pid.stddev
> > 0.00 ± 7%  -47.7%  0.00 ± 15%  sched_debug.cpu.next_balance.stddev
> > 1.30  -14.2%  1.12 ± 2%  sched_debug.cpu.nr_running.avg
> > 2.50 ± 6%  +79.0%  4.47 ± 6%  sched_debug.cpu.nr_running.max
> > 0.76 ± 10%  -89.8%  0.08 ±115%  sched_debug.cpu.nr_running.min
> > 0.51 ± 3%  +73.6%  0.89 ± 6%  sched_debug.cpu.nr_running.stddev
> > 8.53 ± 3%  +154.8%  21.73 ± 6%  perf-stat.i.MPKI
> > 2.178e+10  -13.8%  1.878e+10 ± 3%  perf-stat.i.branch-instructions
> > 1.69  +0.2  1.90  perf-stat.i.branch-miss-rate%
> > 3.675e+08  -3.5%  3.545e+08  perf-stat.i.branch-misses
> > 18.30 ± 7%  -13.5  4.80 ± 36%  perf-stat.i.cache-miss-rate%
> > 1.546e+08 ± 9%  -48.7%  79255656 ± 18%  perf-stat.i.cache-misses
> > 9.224e+08 ± 2%  +113.6%  1.971e+09 ± 4%  perf-stat.i.cache-references
> > 2.553e+11  -14.5%  2.183e+11  perf-stat.i.cpu-cycles
> > 240694 ± 23%  +737.9%  2016725 ± 7%  perf-stat.i.cpu-migrations
> > 1700 ± 10%  +78.2%  3029 ± 11%  perf-stat.i.cycles-between-cache-misses
> > 0.02 ± 19%  +0.2  0.18 ± 7%  perf-stat.i.dTLB-load-miss-rate%
> > 6572698 ± 19%  +582.9%  44886023 ± 7%  perf-stat.i.dTLB-load-misses
> > 3.024e+10  -14.7%  2.58e+10 ± 3%  perf-stat.i.dTLB-loads
> > 0.01 ± 16%  +0.1  0.06 ± 8%  perf-stat.i.dTLB-store-miss-rate%
> > 1539288 ± 16%  +464.4%  8687418 ± 8%  perf-stat.i.dTLB-store-misses
> > 1.709e+10  -14.7%  1.457e+10 ± 3%  perf-stat.i.dTLB-stores
> > 73.36  -12.9  60.46 ± 2%  perf-stat.i.iTLB-load-miss-rate%
> > 1.847e+08 ± 2%  -10.9%  1.645e+08 ± 2%  perf-stat.i.iTLB-load-misses
> > 67816199 ± 2%  +59.3%  1.08e+08 ± 3%  perf-stat.i.iTLB-loads
> > 1.073e+11  -13.7%  9.264e+10 ± 3%  perf-stat.i.instructions
> > 2.66  -14.5%  2.27  perf-stat.i.metric.GHz
> > 1188 ± 4%  -48.1%  616.51 ± 13%  perf-stat.i.metric.K/sec
> > 729.51  -12.6%  637.79 ± 2%  perf-stat.i.metric.M/sec
> > 3886 ± 2%  +5.4%  4097  perf-stat.i.minor-faults
> > 31516592 ± 8%  -29.4%  22240687 ± 13%  perf-stat.i.node-load-misses
> > 5444441 ± 5%  -16.3%  4559306 ± 4%  perf-stat.i.node-loads
> > 96.59  -4.9  91.73  perf-stat.i.node-store-miss-rate%
> > 432854 ± 9%  +96.5%  850464 ± 17%  perf-stat.i.node-stores
> > 3887 ± 2%  +5.4%  4098  perf-stat.i.page-faults
> > 8.60 ± 3%  +147.7%  21.30 ± 6%  perf-stat.overall.MPKI
> > 1.69  +0.2  1.89  perf-stat.overall.branch-miss-rate%
> > 16.76 ± 9%  -12.7  4.07 ± 23%  perf-stat.overall.cache-miss-rate%
> > 1666 ± 9%  +69.9%  2830 ± 14%  perf-stat.overall.cycles-between-cache-misses
> > 0.02 ± 18%  +0.2  0.17 ± 8%  perf-stat.overall.dTLB-load-miss-rate%
> > 0.01 ± 16%  +0.1  0.06 ± 9%  perf-stat.overall.dTLB-store-miss-rate%
> > 73.13  -12.8  60.37 ± 2%  perf-stat.overall.iTLB-load-miss-rate%
> > 95.70  -4.9  90.84  perf-stat.overall.node-store-miss-rate%
> > 2.175e+10  -13.7%  1.876e+10 ± 3%  perf-stat.ps.branch-instructions
> > 3.67e+08  -3.5%  3.54e+08  perf-stat.ps.branch-misses
> > 1.543e+08 ± 9%  -48.7%  79211813 ± 18%  perf-stat.ps.cache-misses
> > 9.214e+08 ± 2%  +113.5%  1.967e+09 ± 4%  perf-stat.ps.cache-references
> > 2.549e+11  -14.5%  2.18e+11  perf-stat.ps.cpu-cycles
> > 240897 ± 23%  +735.5%  2012779 ± 7%  perf-stat.ps.cpu-migrations
> > 6575045 ± 19%  +581.3%  44797852 ± 7%  perf-stat.ps.dTLB-load-misses
> > 3.02e+10  -14.7%  2.577e+10 ± 3%  perf-stat.ps.dTLB-loads
> > 1539214 ± 16%  +463.3%  8670426 ± 8%  perf-stat.ps.dTLB-store-misses
> > 1.707e+10  -14.7%  1.456e+10 ± 3%  perf-stat.ps.dTLB-stores
> > 1.844e+08 ± 2%  -10.9%  1.643e+08 ± 2%  perf-stat.ps.iTLB-load-misses
> > 67743849 ± 2%  +59.2%  1.079e+08 ± 3%  perf-stat.ps.iTLB-loads
> > 1.072e+11  -13.7%  9.253e+10 ± 3%  perf-stat.ps.instructions
> > 3886 ± 2%  +5.5%  4098  perf-stat.ps.minor-faults
> > 31470314 ± 8%  -29.4%  22222434 ± 13%  perf-stat.ps.node-load-misses
> > 5442735 ± 5%  -16.2%  4560730 ± 3%  perf-stat.ps.node-loads
> > 432399 ± 9%  +96.4%  849120 ± 17%  perf-stat.ps.node-stores
> > 3886 ± 2%  +5.5%  4098  perf-stat.ps.page-faults
> > 7.73e+13  -13.7%  6.672e+13 ± 3%  perf-stat.total.instructions
> > 0.00  +4.1e+103%  40.83 ±117%  interrupts.102:PCI-MSI.31981635-edge.i40e-eth0-TxRx-66
> > 163522 ± 5%  +659.4%  1241719 ± 40%  interrupts.CAL:Function_call_interrupts
> > 2717 ± 28%  +275.9%  10215 ± 52%  interrupts.CPU0.CAL:Function_call_interrupts
> > 30618 ± 41%  +290.0%  119405 ± 9%  interrupts.CPU0.RES:Rescheduling_interrupts
> > 1686 ± 25%  +483.4%  9836 ± 44%  interrupts.CPU1.CAL:Function_call_interrupts
> > 26633 ± 50%  +342.3%  117798 ± 10%  interrupts.CPU1.RES:Rescheduling_interrupts
> > 1633 ± 25%  +563.1%  10833 ± 50%  interrupts.CPU10.CAL:Function_call_interrupts
> > 27398 ± 51%  +319.3%  114895 ± 10%  interrupts.CPU10.RES:Rescheduling_interrupts
> > 1857 ± 21%  +446.8%  10154 ± 50%  interrupts.CPU11.CAL:Function_call_interrupts
> > 26206 ± 53%  +312.9%  108207 ± 8%  interrupts.CPU11.RES:Rescheduling_interrupts
> > 1423 ± 15%  +614.7%  10176 ± 52%  interrupts.CPU12.CAL:Function_call_interrupts
> > 26880 ± 58%  +322.7%  113610 ± 8%  interrupts.CPU12.RES:Rescheduling_interrupts
> > 1457 ± 18%  +585.6%  9988 ± 52%  interrupts.CPU13.CAL:Function_call_interrupts
> > 26741 ± 57%  +330.3%  115064 ± 10%  interrupts.CPU13.RES:Rescheduling_interrupts
> > 1440 ± 8%  +626.7%  10466 ± 52%  interrupts.CPU14.CAL:Function_call_interrupts
> > 27280 ± 58%  +321.3%  114920 ± 10%  interrupts.CPU14.RES:Rescheduling_interrupts
> > 1547 ± 13%  +550.6%  10065 ± 51%  interrupts.CPU15.CAL:Function_call_interrupts
> > 24652 ± 60%  +366.8%  115077 ± 10%  interrupts.CPU15.RES:Rescheduling_interrupts
> > 1613 ± 14%  +518.7%  9981 ± 52%  interrupts.CPU16.CAL:Function_call_interrupts
> > 26360 ± 60%  +338.3%  115544 ± 9%  interrupts.CPU16.RES:Rescheduling_interrupts
> > 1783 ± 22%  +476.1%  10273 ± 50%  interrupts.CPU17.CAL:Function_call_interrupts
> > 29465 ± 57%  +290.9%  115190 ± 10%  interrupts.CPU17.RES:Rescheduling_interrupts
> > 1692 ± 15%  +539.1%  10816 ± 49%  interrupts.CPU18.CAL:Function_call_interrupts
> > 25242 ± 56%  +355.0%  114843 ± 8%  interrupts.CPU18.RES:Rescheduling_interrupts
> > 1546 ± 14%  +576.6%  10463 ± 48%  interrupts.CPU19.CAL:Function_call_interrupts
> > 27541 ± 53%  +318.0%  115135 ± 10%  interrupts.CPU19.RES:Rescheduling_interrupts
> > 1539 ± 20%  +597.8%  10744 ± 50%  interrupts.CPU2.CAL:Function_call_interrupts
> > 26770 ± 52%  +333.8%  116123 ± 10%  interrupts.CPU2.RES:Rescheduling_interrupts
> > 1904 ± 25%  +432.4%  10138 ± 52%  interrupts.CPU20.CAL:Function_call_interrupts
> > 25460 ± 61%  +353.1%  115361 ± 8%  interrupts.CPU20.RES:Rescheduling_interrupts
> > 1762 ± 23%  +469.0%  10028 ± 47%  interrupts.CPU21.CAL:Function_call_interrupts
> > 26522 ± 54%  +331.6%  114472 ± 8%  interrupts.CPU21.RES:Rescheduling_interrupts
> > 1684 ± 19%  +506.5%  10214 ± 49%  interrupts.CPU22.CAL:Function_call_interrupts
> > 26964 ± 52%  +320.4%  113356 ± 9%  interrupts.CPU22.RES:Rescheduling_interrupts
> > 1664 ± 16%  +552.7%  10863 ± 47%  interrupts.CPU23.CAL:Function_call_interrupts
> > 25881 ± 53%  +331.9%  111768 ± 9%  interrupts.CPU23.RES:Rescheduling_interrupts
> > 2767 ± 70%  +456.6%  15402 ± 58%  interrupts.CPU24.CAL:Function_call_interrupts
> > 34637 ± 23%  +245.6%  119696 ± 7%  interrupts.CPU24.RES:Rescheduling_interrupts
> > 1644 ± 19%  +790.7%  14649 ± 62%  interrupts.CPU25.CAL:Function_call_interrupts
> > 31698 ± 34%  +271.3%  117692 ± 9%  interrupts.CPU25.RES:Rescheduling_interrupts
> > 1600 ± 20%  +885.1%  15766 ± 63%  interrupts.CPU26.CAL:Function_call_interrupts
> > 32053 ± 35%  +263.6%  116533 ± 8%  interrupts.CPU26.RES:Rescheduling_interrupts
> > 1906 ± 40%  +773.1%  16646 ± 65%  interrupts.CPU27.CAL:Function_call_interrupts
> > 32311 ± 37%  +257.4%  115489 ± 8%  interrupts.CPU27.RES:Rescheduling_interrupts
> > 1847 ± 15%  +729.0%  15312 ± 64%  interrupts.CPU28.CAL:Function_call_interrupts
> > 30973 ± 38%  +271.9%  115189 ± 8%  interrupts.CPU28.RES:Rescheduling_interrupts
> > 1637 ± 24%  +825.0%  15146 ± 64%  interrupts.CPU29.CAL:Function_call_interrupts
> > 32830 ± 34%  +250.8%  115184 ± 6%  interrupts.CPU29.RES:Rescheduling_interrupts
> > 1433 ± 14%  +603.8%  10090 ± 50%  interrupts.CPU3.CAL:Function_call_interrupts
> > 26983 ± 49%  +320.2%  113392 ± 10%  interrupts.CPU3.RES:Rescheduling_interrupts
> > 1597 ± 14%  +863.9%  15400 ± 62%  interrupts.CPU30.CAL:Function_call_interrupts
> > 31376 ± 38%  +260.7%  113167 ± 8%  interrupts.CPU30.RES:Rescheduling_interrupts
> > 1648 ± 16%  +828.7%  15304 ± 62%  interrupts.CPU31.CAL:Function_call_interrupts
> > 32435 ± 33%  +258.4%  116241 ± 8%  interrupts.CPU31.RES:Rescheduling_interrupts
> > 2081 ± 28%  +635.8%  15314 ± 64%  interrupts.CPU32.CAL:Function_call_interrupts
> > 30678 ± 34%  +276.2%  115409 ± 8%  interrupts.CPU32.RES:Rescheduling_interrupts
> > 1508 ± 19%  +951.6%  15860 ± 65%  interrupts.CPU33.CAL:Function_call_interrupts
> > 32920 ± 39%  +251.5%  115730 ± 8%  interrupts.CPU33.RES:Rescheduling_interrupts
> > 1529 ± 22%  +936.7%  15857 ± 65%  interrupts.CPU34.CAL:Function_call_interrupts
> > 31214 ± 27%  +266.4%  114381 ± 8%  interrupts.CPU34.RES:Rescheduling_interrupts
> > 1543 ± 19%  +933.2%  15950 ± 60%  interrupts.CPU35.CAL:Function_call_interrupts
> > 30887 ± 33%  +269.9%  114244 ± 7%  interrupts.CPU35.RES:Rescheduling_interrupts
> > 1566 ± 17%  +890.1%  15505 ± 64%  interrupts.CPU36.CAL:Function_call_interrupts
> > 31545 ± 41%  +254.5%  111828 ± 9%  interrupts.CPU36.RES:Rescheduling_interrupts
> > 1518 ± 17%  +889.4%  15021 ± 61%  interrupts.CPU37.CAL:Function_call_interrupts
> > 31749 ± 38%  +266.0%  116188 ± 9%  interrupts.CPU37.RES:Rescheduling_interrupts
> > 1507 ± 18%  +946.1%  15770 ± 62%  interrupts.CPU38.CAL:Function_call_interrupts
> > 32670 ± 36%  +257.7%  116863 ± 7%  interrupts.CPU38.RES:Rescheduling_interrupts
> > 1662 ± 22%  +818.7%  15277 ± 63%  interrupts.CPU39.CAL:Function_call_interrupts
> > 32667 ± 37%  +255.3%  116081 ± 9%  interrupts.CPU39.RES:Rescheduling_interrupts
> > 1908 ± 49%  +447.9%  10454 ± 49%  interrupts.CPU4.CAL:Function_call_interrupts
> > 26121 ± 56%  +341.3%  115279 ± 7%  interrupts.CPU4.RES:Rescheduling_interrupts
> > 1599 ± 30%  +883.5%  15734 ± 66%  interrupts.CPU40.CAL:Function_call_interrupts
> > 31759 ± 37%  +259.8%  114285 ± 8%  interrupts.CPU40.RES:Rescheduling_interrupts
> > 1535 ± 19%  +889.5%  15194 ± 62%  interrupts.CPU41.CAL:Function_call_interrupts
> > 31350 ± 37%  +263.1%  113826 ± 9%  interrupts.CPU41.RES:Rescheduling_interrupts
> > 1565 ± 19%  +959.8%  16588 ± 63%  interrupts.CPU42.CAL:Function_call_interrupts
> > 30781 ± 33%  +269.8%  113842 ± 7%  interrupts.CPU42.RES:Rescheduling_interrupts
> > 1581 ± 11%  +929.0%  16272 ± 63%  interrupts.CPU43.CAL:Function_call_interrupts
> > 31679 ± 33%  +268.0%  116568 ± 7%  interrupts.CPU43.RES:Rescheduling_interrupts
> > 1520 ± 22%  +914.5%  15422 ± 65%  interrupts.CPU44.CAL:Function_call_interrupts
> > 31967 ± 41%  +264.6%  116553 ± 9%  interrupts.CPU44.RES:Rescheduling_interrupts
> > 1503 ± 16%  +931.3%  15503 ± 63%  interrupts.CPU45.CAL:Function_call_interrupts
> > 30829 ± 32%  +272.3%  114770 ± 8%  interrupts.CPU45.RES:Rescheduling_interrupts
> > 1492 ± 15%  +948.3%  15643 ± 64%  interrupts.CPU46.CAL:Function_call_interrupts
> > 32136 ± 37%  +256.1%  114426 ± 8%  interrupts.CPU46.RES:Rescheduling_interrupts
> > 6530 ± 4%  +186.1%  18685 ± 49%  interrupts.CPU47.CAL:Function_call_interrupts
> > 32843 ± 36%  +239.2%  111397 ± 8%  interrupts.CPU47.RES:Rescheduling_interrupts
> > 1501 ± 21%  +552.3%  9795 ± 49%  interrupts.CPU48.CAL:Function_call_interrupts
> > 27770 ± 49%  +306.1%  112779 ± 10%  interrupts.CPU48.RES:Rescheduling_interrupts
> > 1716 ± 12%  +479.8%  9951 ± 47%  interrupts.CPU49.CAL:Function_call_interrupts
> > 27441 ± 50%  +306.8%  111620 ± 12%  interrupts.CPU49.RES:Rescheduling_interrupts
> > 1536 ± 20%  +561.4%  10164 ± 45%  interrupts.CPU5.CAL:Function_call_interrupts
> > 26043 ± 58%  +336.2%  113601 ± 8%  interrupts.CPU5.RES:Rescheduling_interrupts
> > 1466 ± 15%  +652.1%  11028 ± 52%  interrupts.CPU50.CAL:Function_call_interrupts
> > 25157 ± 58%  +346.9%  112432 ± 11%  interrupts.CPU50.RES:Rescheduling_interrupts
> > 1480 ± 14%  +607.5%  10472 ± 49%  interrupts.CPU51.CAL:Function_call_interrupts
> > 25280 ± 53%  +326.7%  107877 ± 10%  interrupts.CPU51.RES:Rescheduling_interrupts
> > 1718 ± 28%  +513.8%  10546 ± 49%  interrupts.CPU52.CAL:Function_call_interrupts
> > 25402 ± 61%  +343.7%  112698 ± 9%  interrupts.CPU52.RES:Rescheduling_interrupts
> > 1471 ± 18%  +584.4%  10067 ± 51%  interrupts.CPU53.CAL:Function_call_interrupts
> > 26502 ± 51%  +317.7%  110702 ± 9%  interrupts.CPU53.RES:Rescheduling_interrupts
> > 1520 ± 10%  +577.5%  10301 ± 49%  interrupts.CPU54.CAL:Function_call_interrupts
> > 26868 ± 53%  +311.4%  110540 ± 11%  interrupts.CPU54.RES:Rescheduling_interrupts
> > 1378 ± 15%  +629.6%  10059 ± 48%  interrupts.CPU55.CAL:Function_call_interrupts
> > 26538 ± 55%  +328.1%  113615 ± 9%  interrupts.CPU55.RES:Rescheduling_interrupts
> > 1432 ± 12%  +613.7%  10226 ± 50%  interrupts.CPU56.CAL:Function_call_interrupts
> > 27378 ± 54%  +318.3%  114536 ± 9%  interrupts.CPU56.RES:Rescheduling_interrupts
> > 1578 ± 23%  +551.4%  10283 ± 51%  interrupts.CPU57.CAL:Function_call_interrupts
> > 25737 ± 62%  +337.5%  112612 ± 10%  interrupts.CPU57.RES:Rescheduling_interrupts
> > 1483 ± 22%  +630.0%  10829 ± 52%  interrupts.CPU58.CAL:Function_call_interrupts
> > 27343 ± 53%  +315.8%  113705 ± 9%  interrupts.CPU58.RES:Rescheduling_interrupts
> > 1460 ± 18%  +602.5%  10261 ± 51%  interrupts.CPU59.CAL:Function_call_interrupts
> > 25911 ± 58%  +298.8%  103334 ± 7%  interrupts.CPU59.RES:Rescheduling_interrupts
> > 1569 ± 8%  +530.0%  9889 ± 50%  interrupts.CPU6.CAL:Function_call_interrupts
> > 27162 ± 52%  +310.3%  111462 ± 10%  interrupts.CPU6.RES:Rescheduling_interrupts
> > 1587 ± 31%  +530.4%  10005 ± 51%  interrupts.CPU60.CAL:Function_call_interrupts
> > 26060 ± 56%  +330.3%  112129 ± 9%  interrupts.CPU60.RES:Rescheduling_interrupts
> > 1414 ± 16%  +608.1%  10012 ± 49%  interrupts.CPU61.CAL:Function_call_interrupts
> > 25916 ± 59%  +345.2%  115375 ± 10%  interrupts.CPU61.RES:Rescheduling_interrupts
> > 1501 ± 18%  +601.8%  10534 ± 53%  interrupts.CPU62.CAL:Function_call_interrupts
> > 27209 ± 60%  +317.1%  113499 ± 9%  interrupts.CPU62.RES:Rescheduling_interrupts
> > 1411 ± 18%  +612.3%  10052 ± 51%  interrupts.CPU63.CAL:Function_call_interrupts
> > 25584 ± 59%  +345.4%  113944 ± 9%  interrupts.CPU63.RES:Rescheduling_interrupts
> > 1890 ± 17%  +437.8%  10169 ± 50%  interrupts.CPU64.CAL:Function_call_interrupts
> > 26884 ± 60%  +322.0%  113444 ± 10%  interrupts.CPU64.RES:Rescheduling_interrupts
> > 1729 ± 25%  +475.6%  9955 ± 48%  interrupts.CPU65.CAL:Function_call_interrupts
> > 26952 ± 54%  +321.5%  113608 ± 10%  interrupts.CPU65.RES:Rescheduling_interrupts
> > 1897 ± 16%  +481.0%  11020 ± 46%  interrupts.CPU66.CAL:Function_call_interrupts
> > 25947 ± 58%  +327.7%  110971 ± 9%  interrupts.CPU66.RES:Rescheduling_interrupts
> > 1761 ± 18%  +467.0%  9987 ± 50%  interrupts.CPU67.CAL:Function_call_interrupts
> > 27578 ± 52%  +307.5%  112380 ± 10%  interrupts.CPU67.RES:Rescheduling_interrupts
> > 1784 ± 18%  +451.1%  9835 ± 49%  interrupts.CPU68.CAL:Function_call_interrupts
> > 25578 ± 60%  +340.7%  112722 ± 10%  interrupts.CPU68.RES:Rescheduling_interrupts
> > 1837 ± 17%  +436.5%  9857 ± 49%  interrupts.CPU69.CAL:Function_call_interrupts
> > 26266 ± 59%  +329.5%  112809 ± 9%  interrupts.CPU69.RES:Rescheduling_interrupts
> > 1465 ± 15%  +583.8%  10020 ± 49%  interrupts.CPU7.CAL:Function_call_interrupts
> > 25720 ± 57%  +351.1%  116026 ± 8%  interrupts.CPU7.RES:Rescheduling_interrupts
> > 1810 ± 21%  +425.7%  9515 ± 44%  interrupts.CPU70.CAL:Function_call_interrupts
> > 27634 ± 51%  +304.4%  111767 ± 10%  interrupts.CPU70.RES:Rescheduling_interrupts
> > 1891 ± 12%  +415.1%  9740 ± 48%  interrupts.CPU71.CAL:Function_call_interrupts
> > 26205 ± 55%  +323.3%  110934 ± 9%  interrupts.CPU71.RES:Rescheduling_interrupts
> > 2946 ± 93%  +429.1%  15592 ± 63%  interrupts.CPU72.CAL:Function_call_interrupts
> > 33476 ± 29%  +241.2%  114214 ± 8%  interrupts.CPU72.RES:Rescheduling_interrupts
> > 1797 ± 28%  +730.7%  14931 ± 64%  interrupts.CPU73.CAL:Function_call_interrupts
> > 31188 ± 31%  +269.5%  115229 ± 9%  interrupts.CPU73.RES:Rescheduling_interrupts
> > 1649 ± 23%  +918.0%  16794 ± 64%  interrupts.CPU74.CAL:Function_call_interrupts
> > 30456 ± 37%  +274.1%  113927 ± 9%  interrupts.CPU74.RES:Rescheduling_interrupts
> > 1875 ± 31%  +785.5%  16610 ± 62%  interrupts.CPU75.CAL:Function_call_interrupts
> > 31503 ± 38%  +259.7%  113310 ± 8%  interrupts.CPU75.RES:Rescheduling_interrupts
> > 1833 ± 17%  +766.5%  15885 ± 64%  interrupts.CPU76.CAL:Function_call_interrupts
> > 29718 ± 38%  +278.7%  112554 ± 9%  interrupts.CPU76.RES:Rescheduling_interrupts
> > 1650 ± 15%  +857.9%  15811 ± 66%  interrupts.CPU77.CAL:Function_call_interrupts
> > 31625 ± 29%  +252.2%  111391 ± 7%  interrupts.CPU77.RES:Rescheduling_interrupts
> > 1740 ± 16%  +798.0%  15632 ± 62%  interrupts.CPU78.CAL:Function_call_interrupts
> > 32549 ± 34%  +239.6%  110549 ± 9%  interrupts.CPU78.RES:Rescheduling_interrupts
> > 1675 ± 19%  +830.7%  15595 ± 64%  interrupts.CPU79.CAL:Function_call_interrupts
> > 31016 ± 36%  +266.4%  113656 ± 8%  interrupts.CPU79.RES:Rescheduling_interrupts
> > 1437 ± 11%  +635.4%  10570 ± 49%  interrupts.CPU8.CAL:Function_call_interrupts
> > 27602 ± 60%  +322.8%  116699 ± 10%  interrupts.CPU8.RES:Rescheduling_interrupts
> > 1683 ± 27%  +802.2%  15187 ± 65%  interrupts.CPU80.CAL:Function_call_interrupts
> > 31149 ± 43%  +268.5%  114795 ± 8%  interrupts.CPU80.RES:Rescheduling_interrupts
> > 1583 ± 24%  +911.9%  16020 ± 66%  interrupts.CPU81.CAL:Function_call_interrupts
> > 31699 ± 38%  +260.9%  114418 ± 8%  interrupts.CPU81.RES:Rescheduling_interrupts
> > 1752 ± 36%  +827.9%  16262 ± 66%  interrupts.CPU82.CAL:Function_call_interrupts
> > 29360 ± 36%  +288.9%  114167 ± 8%  interrupts.CPU82.RES:Rescheduling_interrupts
> > 1558 ± 13%  +932.4%  16090 ± 64%  interrupts.CPU83.CAL:Function_call_interrupts
> > 30230 ± 36%  +270.2%  111904 ± 8%  interrupts.CPU83.RES:Rescheduling_interrupts
> > 1455 ± 17%  +958.3%  15407 ± 64%  interrupts.CPU84.CAL:Function_call_interrupts
> > 30911 ± 34%  +254.7%  109645 ± 10%  interrupts.CPU84.RES:Rescheduling_interrupts
> > 1603 ± 10%  +827.8%  14881 ± 62%  interrupts.CPU85.CAL:Function_call_interrupts
> > 31085 ± 37%  +265.4%  113578 ± 8%  interrupts.CPU85.RES:Rescheduling_interrupts
> > 1575 ± 17%  +904.7%  15827 ± 64%  interrupts.CPU86.CAL:Function_call_interrupts
> > 33065 ± 38%  +248.7%  115301 ± 8%  interrupts.CPU86.RES:Rescheduling_interrupts
> > 1498 ± 18%  +920.9%  15295 ± 64%  interrupts.CPU87.CAL:Function_call_interrupts
> > 30477 ± 42%  +274.6%  114181 ± 9%  interrupts.CPU87.RES:Rescheduling_interrupts
> > 1418 ± 17%  +960.6%  15044 ± 64%  interrupts.CPU88.CAL:Function_call_interrupts
> > 31735 ± 32%  +255.3%  112741 ± 9%  interrupts.CPU88.RES:Rescheduling_interrupts
> > 1459 ± 21%  +984.3%  15826 ± 65%  interrupts.CPU89.CAL:Function_call_interrupts
> > 31037 ± 41%  +260.3%  111826 ± 9%  interrupts.CPU89.RES:Rescheduling_interrupts
> > 1505 ± 18%  +562.6%  9975 ± 47%  interrupts.CPU9.CAL:Function_call_interrupts
> > 27015 ± 64%  +327.1%  115392 ± 9%  interrupts.CPU9.RES:Rescheduling_interrupts
> > 1592 ± 24%  +935.1%  16487 ± 66%  interrupts.CPU90.CAL:Function_call_interrupts
> > 31256 ± 37%  +257.9%  111877 ± 8%  interrupts.CPU90.RES:Rescheduling_interrupts
> > 1576 ± 19%  +882.7%  15489 ± 66%  interrupts.CPU91.CAL:Function_call_interrupts
> > 31276 ± 37%  +266.3%  114573 ± 8%  interrupts.CPU91.RES:Rescheduling_interrupts
> > 1706 ± 18%  +786.3%  15121 ± 66%  interrupts.CPU92.CAL:Function_call_interrupts
> > 31400 ± 43%  +260.4%  113170 ± 8%  interrupts.CPU92.RES:Rescheduling_interrupts
> > 1347 ± 18%  +1012.1%  14981 ± 66%  interrupts.CPU93.CAL:Function_call_interrupts
> > 29869 ± 35%  +277.4%  112727 ± 8%  interrupts.CPU93.RES:Rescheduling_interrupts
> > 1372 ± 12%  +937.5%  14243 ± 67%  interrupts.CPU94.CAL:Function_call_interrupts
> > 30286 ± 40%  +271.0%  112366 ± 7%  interrupts.CPU94.RES:Rescheduling_interrupts
> > 1739 ± 27%  +735.8%  14538 ± 64%  interrupts.CPU95.CAL:Function_call_interrupts
> > 30315 ± 39%  +266.8%  111198 ± 8%  interrupts.CPU95.RES:Rescheduling_interrupts
> > 2791109 ± 14%  +291.3%  10921566 ± 8%  interrupts.RES:Rescheduling_interrupts
> > 1.73 ± 27%  -1.5  0.26 ±100%  perf-profile.calltrace.cycles-pp.kill_pid_info.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 2.22 ± 22%  -1.2  0.97  perf-profile.calltrace.cycles-pp.kill
> > 2.08 ± 23%  -1.2  0.84  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
> > 2.03 ± 24%  -1.2  0.80  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
> > 1.99 ± 24%  -1.2  0.76  perf-profile.calltrace.cycles-pp.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
> > 1.85 ± 26%  -1.2  0.62  perf-profile.calltrace.cycles-pp.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
> > 45.78  -0.9  44.84  perf-profile.calltrace.cycles-pp.__send
> > 45.45  -0.9  44.56  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__send
> > 44.92  -0.6  44.37  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__send
> > 44.85  -0.5  44.31  perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe.__send
> > 44.78  -0.5  44.25  perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe.__send
> > 44.54  -0.5  44.03  perf-profile.calltrace.cycles-pp.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 1.47 ± 4%  -0.2  1.29 ± 2%  perf-profile.calltrace.cycles-pp.__dev_queue_xmit.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
> > 2.48  -0.1  2.33  perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
> > 1.66  -0.1  1.57  perf-profile.calltrace.cycles-pp.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
> > 0.74 ± 3%  -0.1  0.64 ± 2%  perf-profile.calltrace.cycles-pp.__kfree_skb.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
> > 0.62 ± 5%  -0.1  0.54 ± 2%  perf-profile.calltrace.cycles-pp.skb_release_all.__kfree_skb.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
> > 0.61 ± 4%  -0.1  0.53 ± 2%  perf-profile.calltrace.cycles-pp.skb_release_head_state.skb_release_all.__kfree_skb.tcp_recvmsg_locked.tcp_recvmsg
> > 0.88 ± 3%  -0.1  0.79  perf-profile.calltrace.cycles-pp.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2.ip_output.__ip_queue_xmit
> > 0.80 ± 4%  -0.1  0.72  perf-profile.calltrace.cycles-pp.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.ip_finish_output2.ip_output
> > 1.27 ± 3%  -0.1  1.20  perf-profile.calltrace.cycles-pp._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
> > 1.14 ± 3%  -0.1  1.08  perf-profile.calltrace.cycles-pp.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
> > 0.96 ± 2%  -0.1  0.90  perf-profile.calltrace.cycles-pp.sk_stream_alloc_skb.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
> > 1.59  -0.1  1.53  perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
> > 1.57  -0.1  1.51  perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
> > 1.06 ± 3%  -0.1  1.01  perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg
> > 0.90 ± 2%  -0.1  0.85  perf-profile.calltrace.cycles-pp.__alloc_skb.sk_stream_alloc_skb.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
> > 0.95  -0.0  0.91  perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg_locked.tcp_recvmsg
> > 1.76 ± 3%  +0.1  1.82  perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle
> > 1.81 ± 3%  +0.1  1.88  perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_switch.__schedule.schedule_idle.do_idle
> > 0.61 ± 2%  +0.1  0.68  perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
> > 1.94 ± 3%  +0.1  2.01  perf-profile.calltrace.cycles-pp.perf_trace_sched_switch.__schedule.schedule_idle.do_idle.cpu_startup_entry
> > 0.58 ± 2%  +0.1  0.67 ± 2%  perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule_idle.do_idle.cpu_startup_entry
> > 0.89 ± 6%  +0.2  1.07 ± 5%  perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
> > 0.92 ± 3%  +0.2  1.10 ± 5%  perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
> > 2.55  +0.2  2.78  perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
> > 6.25 ± 3%  +0.2  6.48  perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
> > 2.10 ± 3%  +0.2  2.34  perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
> > 1.96 ± 3%  +0.2  2.20  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.__wake_up_common.__wake_up_common_lock
> > 3.94 ± 2%  +0.2  4.19  perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common
> > 4.83 ± 2%  +0.3  5.11  perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock
> > 4.86 ± 2%  +0.3  5.15  perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
> > 5.42 ± 2%  +0.3  5.72  perf-profile.calltrace.cycles-pp.set_task_cpu.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
> > 3.86 ± 2%  +0.5  4.31  perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
> > 3.90 ± 2%  +0.5  4.36  perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
> > 0.18 ±141%  +0.5  0.66 ± 5%  perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule_idle.do_idle.cpu_startup_entry
> > 0.00  +0.5  0.53 ± 2%  perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule_idle.do_idle
> > 7.74 ± 2%  +0.6  8.32  perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.schedule_timeout
> > 33.75  +0.6  34.34  perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
> > 0.00  +0.6  0.59 ± 6%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule_idle.do_idle
> > 23.16  +0.6  23.75  perf-profile.calltrace.cycles-pp.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
> > 23.06  +0.6  23.66  perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
> > 8.59 ± 2%  +0.6  9.21  perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.wait_woken
> > 22.39  +0.6  23.03  perf-profile.calltrace.cycles-pp.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
> > 33.98  +0.7  34.64  perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
> > 33.89  +0.7  34.56  perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit
> > 16.58 ± 2%  +0.7  17.32  perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.wait_woken.sk_wait_data
> > 31.96  +0.7  32.70  perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll.net_rx_action
> > 16.76 ± 2%  +0.8  17.51  perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
> > 16.91 ± 2%  +0.8  17.66  perf-profile.calltrace.cycles-pp.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
> > 17.24 ± 2%  +0.8  18.00  perf-profile.calltrace.cycles-pp.wait_woken.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg
> > 18.29  +0.8  19.07  perf-profile.calltrace.cycles-pp.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
> > 31.40  +0.9  32.29  perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.__napi_poll
> > 31.33  +0.9  32.22  perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
> > 31.29  +0.9  32.18  perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
> > 31.15  +0.9  32.05  perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
> > 29.63  +1.0  30.61  perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
> > 29.48  +1.0  30.49  perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
> > 0.00  +1.1  1.05  perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up
> > 24.86 ± 2%  +1.2  26.07  perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv
> > 24.61 ± 2%  +1.2  25.82  perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established
> > 25.65 ± 2%  +1.2  26.87  perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
> > 25.34 ± 2%  +1.2  26.58  perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
> > 14.20 ± 2%  +1.6  15.78  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> > 14.62 ± 2%  +1.6  16.26  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
> > 14.64 ± 2%  +1.6  16.28  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
> > 0.00  +2.0  2.03  perf-profile.calltrace.cycles-pp.select_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common
> > 19.87 ± 2%  +2.3  22.16  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
> > 19.89 ± 2%  +2.3  22.19  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
> > 19.89 ± 2%  +2.3  22.19  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
> > 20.12 ± 2%  +2.3  22.45  perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
> > 0.00  +2.4  2.36  perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock
> > 74.35  -2.1  72.23  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
> > 73.04  -1.8  71.27  perf-profile.children.cycles-pp.do_syscall_64
> > 2.25 ± 22%  -1.3  0.99  perf-profile.children.cycles-pp.kill
> > 1.99 ± 24%  -1.2  0.76  perf-profile.children.cycles-pp.__x64_sys_kill
> > 1.85 ± 26%  -1.2  0.62  perf-profile.children.cycles-pp.kill_something_info
> > 1.73 ± 27%  -1.2  0.50  perf-profile.children.cycles-pp.kill_pid_info
> > 1.71 ± 28%  -1.2  0.48  perf-profile.children.cycles-pp.group_send_sig_info
> > 1.59 ± 30%  -1.2  0.38  perf-profile.children.cycles-pp.security_task_kill
> > 1.58 ± 30%  -1.2  0.37  perf-profile.children.cycles-pp.apparmor_task_kill
> > 45.86  -1.0  44.90  perf-profile.children.cycles-pp.__send
> > 1.28 ± 21%  -0.9  0.39 ± 2%  perf-profile.children.cycles-pp.aa_sk_perm
> > 0.85 ± 30%  -0.6  0.24 ± 3%  perf-profile.children.cycles-pp.aa_get_task_label
> > 0.91 ± 19%  -0.6  0.32 ± 3%  perf-profile.children.cycles-pp.security_socket_recvmsg
> > 0.93 ± 19%  -0.6  0.34 ± 3%  perf-profile.children.cycles-pp.sock_recvmsg
> > 44.86  -0.5  44.31  perf-profile.children.cycles-pp.__x64_sys_sendto
> > 44.79  -0.5  44.25  perf-profile.children.cycles-pp.__sys_sendto
> > 44.54  -0.5  44.03  perf-profile.children.cycles-pp.sock_sendmsg
> > 1.09 ± 8%  -0.3  0.75  perf-profile.children.cycles-pp.syscall_exit_to_user_mode
> > 0.91 ± 9%  -0.3  0.59  perf-profile.children.cycles-pp.exit_to_user_mode_prepare
> > 0.44 ± 22%  -0.3  0.14 ± 2%  perf-profile.children.cycles-pp.security_socket_sendmsg
> > 1.53 ± 4%  -0.2  1.35 ± 2%  perf-profile.children.cycles-pp.__dev_queue_xmit
> > 2.59  -0.1  2.44  perf-profile.children.cycles-pp.tcp_ack
> > 1.46 ± 2%  -0.1  1.34  perf-profile.children.cycles-pp.__kfree_skb
> > 1.74 ± 2%  -0.1  1.64  perf-profile.children.cycles-pp.tcp_clean_rtx_queue
> > 0.72 ± 4%  -0.1  0.62 ± 2%  perf-profile.children.cycles-pp.skb_release_all
> > 0.70 ± 4%  -0.1  0.61 ± 2%  perf-profile.children.cycles-pp.skb_release_head_state
> > 0.82 ± 4%  -0.1  0.74  perf-profile.children.cycles-pp.loopback_xmit
> > 0.90 ± 4%  -0.1  0.82  perf-profile.children.cycles-pp.dev_hard_start_xmit
> > 1.28 ± 3%  -0.1  1.20  perf-profile.children.cycles-pp._copy_from_iter_full
> > 0.69 ± 4%  -0.1  0.62 ± 2%  perf-profile.children.cycles-pp._raw_spin_lock_bh
> > 0.54 ± 6%  -0.1  0.46 ± 2%  perf-profile.children.cycles-pp.switch_mm_irqs_off
> > 0.53 ± 4%  -0.1  0.45 ± 2%  perf-profile.children.cycles-pp.dst_release
> > 1.58  -0.1  1.51  perf-profile.children.cycles-pp.__skb_datagram_iter
> > 0.96 ± 2%  -0.1  0.90  perf-profile.children.cycles-pp.sk_stream_alloc_skb
> > 1.60  -0.1  1.53  perf-profile.children.cycles-pp.skb_copy_datagram_iter
> > 0.40 ± 3%  -0.1  0.34 ± 4%  perf-profile.children.cycles-pp.tcp_send_mss
> > 0.36 ± 4%  -0.1  0.31 ± 3%  perf-profile.children.cycles-pp.tcp_current_mss
> > 0.91 ± 2%  -0.1  0.86  perf-profile.children.cycles-pp.__alloc_skb
> > 0.23 ± 5%  -0.0  0.18 ± 4%  perf-profile.children.cycles-pp.irqtime_account_irq
> > 0.27 ± 5%  -0.0  0.23  perf-profile.children.cycles-pp.load_new_mm_cr3
> > 0.17 ± 2%  -0.0  0.13 ± 4%  perf-profile.children.cycles-pp.tcp_ack_update_rtt
> > 0.53  -0.0  0.49 ± 2%  perf-profile.children.cycles-pp.sched_clock_cpu
> > 0.61 ± 4%  -0.0  0.58 ± 3%  perf-profile.children.cycles-pp.release_sock
> > 0.33 ± 8%  -0.0  0.29 ± 2%  perf-profile.children.cycles-pp.next_token
> > 0.95  -0.0  0.92  perf-profile.children.cycles-pp._copy_to_iter
> > 0.49 ± 2%  -0.0  0.46 ± 2%  perf-profile.children.cycles-pp.switch_fpu_return
> > 0.41 ± 3%  -0.0  0.38 ± 3%  perf-profile.children.cycles-pp.tcp_event_new_data_sent
> > 0.39 ± 3%  -0.0  0.36 ± 2%  perf-profile.children.cycles-pp.netif_rx
> > 0.31 ± 2%  -0.0  0.28  perf-profile.children.cycles-pp.reweight_entity
> > 0.21 ± 4%  -0.0  0.18 ± 4%  perf-profile.children.cycles-pp.kmem_cache_alloc_node
> > 0.33 ± 2%  -0.0  0.30 ± 3%  perf-profile.children.cycles-pp.__entry_text_start
> > 0.20 ± 4%  -0.0  0.17 ± 3%  perf-profile.children.cycles-pp.enqueue_to_backlog
> > 0.38  -0.0  0.35 ± 2%  perf-profile.children.cycles-pp.__check_object_size
> > 0.38 ± 3%  -0.0  0.35  perf-profile.children.cycles-pp.netif_rx_internal
> > 0.32 ± 3%  -0.0  0.29  perf-profile.children.cycles-pp.syscall_return_via_sysret
> > 0.16 ± 4%  -0.0  0.13 ± 5%  perf-profile.children.cycles-pp.ip_local_out
> > 0.28 ± 3%  -0.0  0.25 ± 2%  perf-profile.children.cycles-pp.ttwu_do_wakeup
> > 0.13 ± 5%  -0.0  0.11 ± 6%  perf-profile.children.cycles-pp.__virt_addr_valid
> > 0.13 ± 3%  -0.0  0.11 ± 6%  perf-profile.children.cycles-pp.nf_hook_slow
> > 0.31  -0.0  0.29 ± 2%  perf-profile.children.cycles-pp.simple_copy_to_iter
> > 0.11 ± 6%  -0.0  0.09 ± 5%  perf-profile.children.cycles-pp.import_single_range
> > 0.11 ± 9%  -0.0  0.09 ± 4%  perf-profile.children.cycles-pp.sock_put
> > 0.26 ± 2%  -0.0  0.24 ± 3%  perf-profile.children.cycles-pp.check_preempt_curr
> > 0.09 ± 5%  -0.0  0.07 ± 7%  perf-profile.children.cycles-pp.__might_fault
> > 0.13 ± 5%  -0.0  0.11 ± 4%  perf-profile.children.cycles-pp.copy_fpregs_to_fpstate
> > 0.17 ± 2%  -0.0  0.15 ± 3%  perf-profile.children.cycles-pp.tcp_rearm_rto
> > 0.13 ± 6%  -0.0  0.11 ± 5%  perf-profile.children.cycles-pp.__cond_resched
> > 0.10 ± 7%  -0.0  0.09 ± 4%  perf-profile.children.cycles-pp.tcp_event_data_recv
> > 0.11 ± 6%  -0.0  0.09 ± 5%  perf-profile.children.cycles-pp.check_kill_permission
> > 0.23 ± 3%  -0.0  0.22  perf-profile.children.cycles-pp.__ksize
> > 0.10 ± 4%  -0.0  0.08 ± 5%  perf-profile.children.cycles-pp.tcp_update_pacing_rate
> > 0.07 ± 7%  -0.0  0.05 ± 8%  perf-profile.children.cycles-pp.server
> > 0.08 ± 4%  +0.0  0.09 ± 4%  perf-profile.children.cycles-pp.attach_entity_load_avg
> > 0.07 ± 5%  +0.0  0.09 ± 5%  perf-profile.children.cycles-pp.is_module_text_address
> > 0.10 ± 8%  +0.0  0.12 ± 5%  perf-profile.children.cycles-pp.set_next_task_idle
> > 0.09 ± 5%  +0.0  0.11 ± 4%  perf-profile.children.cycles-pp.__update_idle_core
> > 0.29  +0.0  0.32 ± 2%  perf-profile.children.cycles-pp.tcp_cleanup_rbuf
> > 0.30  +0.0  0.33  perf-profile.children.cycles-pp.kmem_cache_free
> > 0.13 ± 5%  +0.0  0.15 ± 3%  perf-profile.children.cycles-pp.perf_instruction_pointer
> > 0.08 ± 7%  +0.0  0.10 ± 6%  perf-profile.children.cycles-pp.cpus_share_cache
> > 0.10 ± 4%  +0.0  0.13 ± 5%  perf-profile.children.cycles-pp.pick_next_task_idle
> > 0.20 ± 3%  +0.0  0.23 ± 2%  perf-profile.children.cycles-pp.tick_nohz_idle_exit
> > 0.55 ± 2%  +0.0  0.58  perf-profile.children.cycles-pp.update_rq_clock
> > 0.32 ± 4%  +0.0  0.34 ± 2%  perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
> > 0.23 ± 2%  +0.0  0.26 ± 2%  perf-profile.children.cycles-pp._find_next_bit
> > 0.19 ± 5%  +0.0  0.22 ± 3%  perf-profile.children.cycles-pp.tick_nohz_next_event
> > 0.49 ± 2%  +0.0  0.52 ± 2%  perf-profile.children.cycles-pp.finish_task_switch
> > 0.33 ± 3%  +0.0  0.37  perf-profile.children.cycles-pp.migrate_task_rq_fair
> > 0.22 ± 5%  +0.0  0.26 ± 4%  perf-profile.children.cycles-pp.start_kernel
> > 0.19 ± 2%  +0.0  0.23 ± 2%  perf-profile.children.cycles-pp.cpumask_next
> > 0.29 ± 4%  +0.0  0.34 ± 2%  perf-profile.children.cycles-pp.__slab_free
> > 0.60  +0.0  0.65 ± 2%  perf-profile.children.cycles-pp.set_next_entity
> > 0.90  +0.0  0.94  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > 0.62 ± 2%  +0.1  0.69  perf-profile.children.cycles-pp.menu_select
> > 2.01  +0.1  2.10  perf-profile.children.cycles-pp.select_idle_cpu
> > 1.16  +0.1  1.27  perf-profile.children.cycles-pp.available_idle_cpu
> > 2.62  +0.2  2.86  perf-profile.children.cycles-pp.select_task_rq_fair
> > 4.07 ± 2%  +0.2  4.32  perf-profile.children.cycles-pp.enqueue_entity
> > 5.00 ± 2%  +0.3  5.28  perf-profile.children.cycles-pp.ttwu_do_activate
> > 4.97 ± 2%  +0.3  5.26  perf-profile.children.cycles-pp.enqueue_task_fair
> > 5.60 ± 3%  +0.3  5.90  perf-profile.children.cycles-pp.set_task_cpu
> > 2.54 ± 4%  +0.4  2.91 ± 4%  perf-profile.children.cycles-pp.update_load_avg
> > 3.37 ± 2%  +0.4  3.76  perf-profile.children.cycles-pp._raw_spin_lock
> > 2.61 ± 3%  +0.4  3.02 ± 2%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> > 1.46 ± 8%  +0.4  1.88 ± 8%  perf-profile.children.cycles-pp.update_cfs_group
> > 3.95 ± 2%  +0.5  4.42  perf-profile.children.cycles-pp.schedule_idle
> > 17.05  +0.5  17.54  perf-profile.children.cycles-pp.schedule
> > 7.76 ± 2%  +0.6  8.34  perf-profile.children.cycles-pp.dequeue_entity
> > 23.16  +0.6  23.76  perf-profile.children.cycles-pp.inet_recvmsg
> > 23.07  +0.6  23.67  perf-profile.children.cycles-pp.tcp_recvmsg
> > 8.61 ± 2%  +0.6  9.23  perf-profile.children.cycles-pp.dequeue_task_fair
> > 22.41  +0.6  23.05  perf-profile.children.cycles-pp.tcp_recvmsg_locked
> > 34.18  +0.7  34.86  perf-profile.children.cycles-pp.__local_bh_enable_ip
> > 33.99  +0.7  34.68  perf-profile.children.cycles-pp.do_softirq
> > 34.00  +0.7  34.69  perf-profile.children.cycles-pp.__softirqentry_text_start
> > 33.40  +0.7  34.14  perf-profile.children.cycles-pp.net_rx_action
> > 33.18  +0.8  33.94  perf-profile.children.cycles-pp.__napi_poll
> > 16.91 ± 2%  +0.8  17.66  perf-profile.children.cycles-pp.schedule_timeout
> > 17.24 ± 2%  +0.8  18.00  perf-profile.children.cycles-pp.wait_woken
> > 33.09  +0.8  33.86  perf-profile.children.cycles-pp.process_backlog
> > 18.30  +0.8  19.07  perf-profile.children.cycles-pp.sk_wait_data
> > 32.81  +0.8  33.61  perf-profile.children.cycles-pp.__netif_receive_skb_one_core
> > 32.47  +0.8  33.30  perf-profile.children.cycles-pp.ip_rcv
> > 31.91  +0.9  32.81  perf-profile.children.cycles-pp.ip_local_deliver_finish
> > 31.97  +0.9  32.87  perf-profile.children.cycles-pp.ip_local_deliver
> > 31.86  +0.9  32.77  perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
> > 31.72  +0.9  32.63  perf-profile.children.cycles-pp.tcp_v4_rcv
> > 20.81  +0.9  21.74  perf-profile.children.cycles-pp.__schedule
> > 30.18  +1.0  31.16  perf-profile.children.cycles-pp.tcp_v4_do_rcv
> > 30.03  +1.0  31.04  perf-profile.children.cycles-pp.tcp_rcv_established
> > 25.54 ± 2%  +1.2  26.76  perf-profile.children.cycles-pp.__wake_up_common
> > 25.29 ± 2%  +1.2  26.52  perf-profile.children.cycles-pp.try_to_wake_up
> > 26.34 ± 2%  +1.2  27.58  perf-profile.children.cycles-pp.sock_def_readable
> > 26.04 ± 2%  +1.2  27.29  perf-profile.children.cycles-pp.__wake_up_common_lock
> > 14.36 ± 2%  +1.6  15.96  perf-profile.children.cycles-pp.intel_idle
> > 14.80 ± 2%  +1.7  16.46  perf-profile.children.cycles-pp.cpuidle_enter_state
> > 14.80 ± 2%  +1.7  16.47  perf-profile.children.cycles-pp.cpuidle_enter
> > 19.89 ± 2%  +2.3  22.19  perf-profile.children.cycles-pp.start_secondary
> > 20.10 ± 2%  +2.3  22.43  perf-profile.children.cycles-pp.do_idle
> > 20.12 ± 2%  +2.3  22.45  perf-profile.children.cycles-pp.secondary_startup_64_no_verify
> > 20.12 ± 2%  +2.3  22.45  perf-profile.children.cycles-pp.cpu_startup_entry
> > 0.00  +2.4  2.43  perf-profile.children.cycles-pp.select_idle_sibling
> > 1.18 ± 22%  -0.9  0.30 ± 2%  perf-profile.self.cycles-pp.aa_sk_perm
> > 0.84 ± 30%  -0.6  0.23 ± 4%  perf-profile.self.cycles-pp.aa_get_task_label
> > 0.66 ± 31%  -0.6  0.10 ± 4%  perf-profile.self.cycles-pp.apparmor_task_kill
> > 0.61 ± 5%  -0.1  0.55 ± 3%  perf-profile.self.cycles-pp._raw_spin_lock_bh
> > 0.52 ± 4%  -0.1  0.45 ± 2%  perf-profile.self.cycles-pp.dst_release
> > 0.33 ± 2%  -0.1  0.26 ± 2%  perf-profile.self.cycles-pp.select_task_rq_fair
> > 0.43 ± 6%  -0.1  0.38 ± 2%  perf-profile.self.cycles-pp.__tcp_transmit_skb
> > 0.70 ± 3%  -0.1  0.65 ± 2%  perf-profile.self.cycles-pp.tcp_sendmsg_locked
> > 0.27 ± 5%  -0.0  0.23  perf-profile.self.cycles-pp.load_new_mm_cr3
> > 0.49 ± 2%  -0.0  0.45 ± 2%  perf-profile.self.cycles-pp.switch_fpu_return
> > 0.16 ± 4%  -0.0  0.12 ± 3%  perf-profile.self.cycles-pp.tcp_ack_update_rtt
> > 0.33 ± 3%  -0.0  0.30 ± 3%  perf-profile.self.cycles-pp.tcp_clean_rtx_queue
> > 0.18 ± 6%  -0.0  0.15 ± 2%  perf-profile.self.cycles-pp.__ip_queue_xmit
> > 0.34 ± 2%  -0.0  0.30 ± 3%  perf-profile.self.cycles-pp.tcp_write_xmit
> > 0.25 ± 7%  -0.0  0.22 ± 6%  perf-profile.self.cycles-pp.switch_mm_irqs_off
> > 0.13 ± 7%  -0.0  0.10 ± 12%  perf-profile.self.cycles-pp.tcp_current_mss
> > 0.25 ± 5%  -0.0  0.22 ± 3%  perf-profile.self.cycles-pp.ip_finish_output2
> > 0.10 ± 7%  -0.0  0.07 ± 8%  perf-profile.self.cycles-pp.exit_to_user_mode_prepare
> > 0.13 ± 5%  -0.0  0.10 ± 3%  perf-profile.self.cycles-pp.do_user_addr_fault
> > 0.13 ± 5%  -0.0  0.11 ± 6%  perf-profile.self.cycles-pp.__virt_addr_valid
> > 0.21 ± 6%  -0.0  0.18 ± 3%  perf-profile.self.cycles-pp.tcp_wfree
> > 0.32 ± 3%  -0.0  0.29 ± 2%  perf-profile.self.cycles-pp.syscall_return_via_sysret
> > 0.23 ± 4%  -0.0  0.21 ± 2%  perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
> > 0.11 ± 8%  -0.0  0.09  perf-profile.self.cycles-pp.import_single_range
> > 0.15 ± 3%  -0.0  0.14 ± 3%  perf-profile.self.cycles-pp.tcp_rearm_rto
> > 0.09 ± 8%  -0.0  0.07 ± 5%  perf-profile.self.cycles-pp.sched_clock_cpu
> > 0.22  -0.0  0.20 ± 3%  perf-profile.self.cycles-pp.__softirqentry_text_start
> > 0.11 ± 5%  -0.0  0.09 ± 5%  perf-profile.self.cycles-pp.__netif_receive_skb_one_core
> > 0.07 ± 8%  -0.0  0.05 ± 8%  perf-profile.self.cycles-pp.tcp_v4_do_rcv
> > 0.10 ± 4%  -0.0  0.08 ± 7%  perf-profile.self.cycles-pp.perf_callchain
> > 0.13 ± 5%  -0.0  0.11 ± 3%  perf-profile.self.cycles-pp.copy_fpregs_to_fpstate
> > 0.13 ± 5%  -0.0  0.11 ± 4%  perf-profile.self.cycles-pp.kmem_cache_alloc_node
> > 0.09  -0.0  0.08 ± 6%  perf-profile.self.cycles-pp.__napi_poll
> > 0.08 ± 6%  -0.0  0.06 ± 6%  perf-profile.self.cycles-pp.__x64_sys_sendto
> > 0.12 ± 3%  -0.0  0.11 ± 3%  perf-profile.self.cycles-pp.__sys_sendto
> > 0.11 ± 6%  -0.0  0.09 ± 5%  perf-profile.self.cycles-pp.__x64_sys_recvfrom
> > 0.14 ± 3%  -0.0  0.12 ± 3%  perf-profile.self.cycles-pp.__kmalloc_node_track_caller
> > 0.07 ± 7%  -0.0  0.05 ± 8%  perf-profile.self.cycles-pp.ftrace_ops_trampoline
> > 0.09 ± 5%  -0.0  0.08 ± 5%  perf-profile.self.cycles-pp.tcp_update_pacing_rate
> > 0.06 ± 8%  +0.0  0.07  perf-profile.self.cycles-pp.cpuidle_enter_state
> > 0.06  +0.0  0.07 ± 6%  perf-profile.self.cycles-pp.rcu_nocb_flush_deferred_wakeup
> > 0.08 ± 7%  +0.0  0.10 ± 4%  perf-profile.self.cycles-pp.cpus_share_cache
> > 0.05 ± 8%  +0.0  0.07 ± 5%  perf-profile.self.cycles-pp.perf_instruction_pointer
> > 0.28  +0.0  0.30 ± 3%  perf-profile.self.cycles-pp.tcp_cleanup_rbuf
> > 0.18 ± 5%  +0.0  0.20  perf-profile.self.cycles-pp.migrate_task_rq_fair
> > 0.24 ± 3%  +0.0  0.27 ± 2%  perf-profile.self.cycles-pp.menu_select
> > 0.41 ± 3%  +0.0  0.44 ± 2%  perf-profile.self.cycles-pp.update_rq_clock
> > 0.22 ± 4%  +0.0  0.26 ± 2%  perf-profile.self.cycles-pp._find_next_bit
> > 0.16 ± 2%  +0.0  0.20 ± 3%  perf-profile.self.cycles-pp.do_idle
> > 0.29 ± 3%  +0.0  0.34 ± 2%  perf-profile.self.cycles-pp.__slab_free
> > 0.87  +0.0  0.92  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> > 1.24  +0.0  1.29 ± 2%  perf-profile.self.cycles-pp.__schedule
> > 1.15  +0.1  1.26  perf-profile.self.cycles-pp.available_idle_cpu
> > 0.00  +0.1  0.14 ± 3%  perf-profile.self.cycles-pp.select_idle_sibling
> > 0.43 ± 7%  +0.2  0.61 ± 8%  perf-profile.self.cycles-pp.set_task_cpu
> > 1.78 ± 4%  +0.3  2.12 ± 5%  perf-profile.self.cycles-pp.update_load_avg
> > 2.61 ± 3%  +0.4  3.01 ± 2%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
> > 1.46 ± 8%  +0.4  1.88 ± 8%  perf-profile.self.cycles-pp.update_cfs_group
> > 14.36 ± 2%  +1.6  15.96  perf-profile.self.cycles-pp.intel_idle
> > 20074513 ± 7% -24.9% 15075828 ± 7% softirqs.CPU0.NET_RX
> > 38385 ± 66% +476.2% 221177 ± 9% softirqs.CPU0.SCHED
> > 32352 ± 12% -78.4% 6999 ± 45% softirqs.CPU0.TIMER
> > 20359880 ± 5% -26.9% 14879840 ± 7% softirqs.CPU1.NET_RX
> > 36748 ± 67% +492.7% 217810 ± 10% softirqs.CPU1.SCHED
> > 31252 ± 13% -85.5% 4524 ± 59% softirqs.CPU1.TIMER
> > 20405448 ± 5% -25.5% 15210284 ± 7% softirqs.CPU10.NET_RX
> > 36443 ± 69% +499.5% 218467 ± 9% softirqs.CPU10.SCHED
> > 30609 ± 11% -84.5% 4741 ± 63% softirqs.CPU10.TIMER
> > 20394573 ± 4% -29.0% 14484845 ± 10% softirqs.CPU11.NET_RX
> > 36636 ± 70% +464.1% 206683 ± 7% softirqs.CPU11.SCHED
> > 30678 ± 11% -84.8% 4653 ± 67% softirqs.CPU11.TIMER
> > 20405829 ± 4% -26.1% 15089688 ± 7% softirqs.CPU12.NET_RX
> > 37061 ± 70% +494.1% 220185 ± 8% softirqs.CPU12.SCHED
> > 31218 ± 10% -84.2% 4919 ± 66% softirqs.CPU12.TIMER
> > 20444043 ± 5% -26.2% 15095203 ± 7% softirqs.CPU13.NET_RX
> > 36503 ± 70% +497.6% 218132 ± 9% softirqs.CPU13.SCHED
> > 30651 ± 11% -84.6% 4710 ± 62% softirqs.CPU13.TIMER
> > 20419827 ± 5% -25.6% 15186252 ± 7% softirqs.CPU14.NET_RX
> > 36704 ± 70% +494.3% 218149 ± 9% softirqs.CPU14.SCHED
> > 30681 ± 11% -84.3% 4806 ± 65% softirqs.CPU14.TIMER
> > 20510872 ± 5% -25.7% 15232936 ± 7% softirqs.CPU15.NET_RX
> > 36871 ± 69% +493.3% 218761 ± 9% softirqs.CPU15.SCHED
> > 31009 ± 12% -84.5% 4793 ± 63% softirqs.CPU15.TIMER
> > 20418436 ± 5% -25.4% 15224178 ± 7% softirqs.CPU16.NET_RX
> > 36854 ± 69% +495.1% 219327 ± 9% softirqs.CPU16.SCHED
> > 31013 ± 11% -84.7% 4758 ± 64% softirqs.CPU16.TIMER
> > 20372676 ± 4% -25.9% 15095272 ± 7% softirqs.CPU17.NET_RX
> > 36811 ± 70% +494.0% 218669 ± 9% softirqs.CPU17.SCHED
> > 31390 ± 13% -77.9% 6929 ± 67% softirqs.CPU17.TIMER
> > 20416587 ± 5% -26.0% 15118161 ± 7% softirqs.CPU18.NET_RX
> > 36394 ± 70% +502.6% 219293 ± 9% softirqs.CPU18.SCHED
> > 30912 ± 11% -84.5% 4784 ± 63% softirqs.CPU18.TIMER
> > 20432227 ± 4% -25.8% 15168258 ± 7% softirqs.CPU19.NET_RX
> > 36602 ± 70% +497.8% 218796 ± 9% softirqs.CPU19.SCHED
> > 30843 ± 11% -82.9% 5259 ± 56% softirqs.CPU19.TIMER
> > 20304573 ± 6% -26.0% 15034216 ± 7% softirqs.CPU2.NET_RX
> > 36968 ± 69% +495.1% 220012 ± 9% softirqs.CPU2.SCHED
> > 30526 ± 12% -84.4% 4764 ± 62% softirqs.CPU2.TIMER
> > 20444514 ± 5% -25.5% 15240754 ± 7% softirqs.CPU20.NET_RX
> > 36759 ± 70% +495.9% 219056 ± 9% softirqs.CPU20.SCHED
> > 30846 ± 11% -84.3% 4831 ± 61% softirqs.CPU20.TIMER
> > 20253081 ± 6% -25.1% 15174237 ± 7% softirqs.CPU21.NET_RX
> > 36699 ± 70% +495.2% 218452 ± 8% softirqs.CPU21.SCHED
> > 30719 ± 12% -83.5% 5060 ± 65% softirqs.CPU21.TIMER
> > 20460332 ± 4% -25.8% 15186744 ± 7% softirqs.CPU22.NET_RX
> > 36652 ± 70% +497.1% 218854 ± 9% softirqs.CPU22.SCHED
> > 30899 ± 11% -84.4% 4811 ± 63% softirqs.CPU22.TIMER
> > 20389322 ± 5% -26.5% 14979439 ± 8% softirqs.CPU23.NET_RX
> > 36717 ± 70% +492.5% 217548 ± 8% softirqs.CPU23.SCHED
> > 30864 ± 11% -83.1% 5216 ± 60% softirqs.CPU23.TIMER
> > 21350836 ± 3% -30.0% 14945967 ± 6% softirqs.CPU24.NET_RX
> > 47117 ± 46% +359.7% 216593 ± 8% softirqs.CPU24.SCHED
> > 31954 ± 20% -84.0% 5123 ± 66% softirqs.CPU24.TIMER
> > 21494795 ± 3% -28.6% 15347973 ± 6% softirqs.CPU25.NET_RX
> > 46755 ± 49% +362.6% 216302 ± 8% softirqs.CPU25.SCHED
> > 30392 ± 9% -82.7% 5245 ± 47% softirqs.CPU25.TIMER
> > 21529925 ± 3% -28.8% 15338064 ± 6% softirqs.CPU26.NET_RX
> > 46989 ± 48% +360.6% 216437 ± 8% softirqs.CPU26.SCHED
> > 30126 ± 9% -84.3% 4715 ± 60% softirqs.CPU26.TIMER
> > 21609177 ± 3% -28.9% 15374089 ± 6% softirqs.CPU27.NET_RX
> > 47211 ± 48% +358.1% 216275 ± 8% softirqs.CPU27.SCHED
> > 30806 ± 9% -84.4% 4806 ± 59% softirqs.CPU27.TIMER
> > 21434518 ± 3% -28.0% 15426523 ± 6% softirqs.CPU28.NET_RX
> > 46072 ± 49% +369.3% 216211 ± 8% softirqs.CPU28.SCHED
> > 29926 ± 10% -83.7% 4871 ± 60% softirqs.CPU28.TIMER
> > 21436007 ± 3% -28.3% 15360927 ± 6% softirqs.CPU29.NET_RX
> > 46751 ± 48% +363.2% 216561 ± 8% softirqs.CPU29.SCHED
> > 29983 ± 9% -83.7% 4876 ± 58% softirqs.CPU29.TIMER
> > 20556029 ± 5% -27.1% 14991636 ± 9% softirqs.CPU3.NET_RX
> > 36617 ± 69% +485.7% 214485 ± 9% softirqs.CPU3.SCHED
> > 30777 ± 11% -82.7% 5311 ± 54% softirqs.CPU3.TIMER
> > 21514774 ± 3% -29.0% 15275932 ± 6% softirqs.CPU30.NET_RX
> > 46934 ± 48% +361.6% 216667 ± 8% softirqs.CPU30.SCHED
> > 30153 ± 9% -84.1% 4799 ± 58% softirqs.CPU30.TIMER
> > 21317864 ± 4% -28.1% 15318620 ± 6% softirqs.CPU31.NET_RX
> > 46511 ± 49% +365.4% 216450 ± 8% softirqs.CPU31.SCHED
> > 29862 ± 10% -83.9% 4819 ± 60% softirqs.CPU31.TIMER
> > 21191794 ± 5% -27.4% 15375714 ± 6% softirqs.CPU32.NET_RX
> > 45002 ± 47% +380.8% 216377 ± 8% softirqs.CPU32.SCHED
> > 30188 ± 13% -84.0% 4825 ± 60% softirqs.CPU32.TIMER
> > 21594693 ± 3% -28.5% 15432125 ± 6% softirqs.CPU33.NET_RX
> > 46875 ± 48% +361.2% 216187 ± 8% softirqs.CPU33.SCHED
> > 30026 ± 10% -84.0% 4803 ± 60% softirqs.CPU33.TIMER
> > 21459214 ± 3% -28.2% 15402703 ± 6% softirqs.CPU34.NET_RX
> > 45836 ± 47% +371.2% 215962 ± 8% softirqs.CPU34.SCHED
> > 29899 ± 10% -84.1% 4764 ± 58% softirqs.CPU34.TIMER
> > 21334265 ± 4% -28.2% 15311475 ± 6% softirqs.CPU35.NET_RX
> > 46008 ± 47% +369.7% 216108 ± 8% softirqs.CPU35.SCHED
> > 30033 ± 9% -83.8% 4872 ± 61% softirqs.CPU35.TIMER
> > 21434454 ± 3% -29.8% 15051817 ± 6% softirqs.CPU36.NET_RX
> > 47102 ± 48% +353.6% 213647 ± 8% softirqs.CPU36.SCHED
> > 30159 ± 9% -84.3% 4724 ± 59% softirqs.CPU36.TIMER
> > 21446302 ± 3% -28.5% 15324037 ± 6% softirqs.CPU37.NET_RX
> > 46540 ± 48% +365.2% 216506 ± 8% softirqs.CPU37.SCHED
> > 29955 ± 9% -84.3% 4688 ± 60% softirqs.CPU37.TIMER
> > 21408080 ± 4% -28.1% 15398099 ± 6% softirqs.CPU38.NET_RX
> > 46514 ± 47% +364.5% 216055 ± 8% softirqs.CPU38.SCHED
> > 30224 ± 10% -84.2% 4763 ± 59% softirqs.CPU38.TIMER
> > 21488127 ± 3% -28.2% 15418075 ± 6% softirqs.CPU39.NET_RX
> > 46334 ± 49% +366.6% 216186 ± 8% softirqs.CPU39.SCHED
> > 29912 ± 10% -84.1% 4757 ± 59% softirqs.CPU39.TIMER
> > 20494991 ± 5% -25.8% 15203629 ± 7% softirqs.CPU4.NET_RX
> > 36606 ± 70% +500.9% 219950 ± 8% softirqs.CPU4.SCHED
> > 30735 ± 11% -83.9% 4963 ± 64% softirqs.CPU4.TIMER
> > 21463890 ± 3% -28.3% 15380723 ± 6% softirqs.CPU40.NET_RX
> > 46863 ± 48% +361.7% 216368 ± 8% softirqs.CPU40.SCHED
> > 30014 ± 9% -84.1% 4763 ± 59% softirqs.CPU40.TIMER
> > 21482678 ± 3% -28.7% 15323146 ± 6% softirqs.CPU41.NET_RX
> > 46944 ± 48% +361.1% 216451 ± 8% softirqs.CPU41.SCHED
> > 30167 ± 9% -83.9% 4864 ± 59% softirqs.CPU41.TIMER
> > 21294399 ± 4% -28.3% 15278181 ± 6% softirqs.CPU42.NET_RX
> > 45452 ± 47% +376.3% 216501 ± 8% softirqs.CPU42.SCHED
> > 29866 ± 10% -82.6% 5185 ± 54% softirqs.CPU42.TIMER
> > 21365880 ± 3% -28.1% 15358507 ± 6% softirqs.CPU43.NET_RX
> > 46603 ± 49% +364.4% 216428 ± 8% softirqs.CPU43.SCHED
> > 30073 ± 9% -84.0% 4809 ± 59% softirqs.CPU43.TIMER
> > 21520046 ± 3% -28.2% 15449623 ± 6% softirqs.CPU44.NET_RX
> > 46912 ± 48% +361.0% 216249 ± 8% softirqs.CPU44.SCHED
> > 29983 ± 10% -83.8% 4865 ± 58% softirqs.CPU44.TIMER
> > 21519719 ± 3% -28.4% 15408388 ± 6% softirqs.CPU45.NET_RX
> > 46866 ± 48% +361.2% 216163 ± 8% softirqs.CPU45.SCHED
> > 29935 ± 9% -83.0% 5082 ± 55% softirqs.CPU45.TIMER
> > 21419897 ± 3% -28.3% 15362156 ± 6% softirqs.CPU46.NET_RX
> > 46939 ± 48% +361.6% 216649 ± 8% softirqs.CPU46.SCHED
> > 29931 ± 10% -83.4% 4981 ± 56% softirqs.CPU46.TIMER
> > 21420535 ± 3% -28.7% 15271884 ± 6% softirqs.CPU47.NET_RX
> > 45846 ± 47% +370.1% 215519 ± 8% softirqs.CPU47.SCHED
> > 29858 ± 10% -84.1% 4740 ± 60% softirqs.CPU47.TIMER
> > 20312914 ± 5% -25.7% 15093130 ± 7% softirqs.CPU48.NET_RX
> > 36150 ± 69% +505.3% 218836 ± 9% softirqs.CPU48.SCHED
> > 30533 ± 12% -84.7% 4657 ± 65% softirqs.CPU48.TIMER
> > 20470745 ± 4% -28.8% 14574011 ± 8% softirqs.CPU49.NET_RX
> > 36483 ± 70% +488.7% 214762 ± 11% softirqs.CPU49.SCHED
> > 30893 ± 11% -85.0% 4630 ± 54% softirqs.CPU49.TIMER
> > 20156700 ± 7% -24.7% 15172321 ± 7% softirqs.CPU5.NET_RX
> > 35518 ± 68% +517.8% 219449 ± 9% softirqs.CPU5.SCHED
> > 30265 ± 13% -83.0% 5157 ± 53% softirqs.CPU5.TIMER
> > 20423756 ± 5% -26.0% 15104088 ± 7% softirqs.CPU50.NET_RX
> > 36375 ± 70% +499.0% 217904 ± 9% softirqs.CPU50.SCHED
> > 30692 ± 11% -83.8% 4974 ± 56% softirqs.CPU50.TIMER
> > 20528259 ± 5% -27.9% 14802498 ± 11% softirqs.CPU51.NET_RX
> > 36471 ± 71% +481.1% 211940 ± 9% softirqs.CPU51.SCHED
> > 30830 ± 11% -84.8% 4699 ± 64% softirqs.CPU51.TIMER
> > 20406992 ± 4% -25.5% 15212005 ± 7% softirqs.CPU52.NET_RX
> > 36584 ± 70% +498.1% 218812 ± 9% softirqs.CPU52.SCHED
> > 30640 ± 11% -83.2% 5158 ± 60% softirqs.CPU52.TIMER
> > 20215189 ± 6% -25.1% 15148729 ± 7% softirqs.CPU53.NET_RX
> > 35623 ± 68% +515.2% 219165 ± 9% softirqs.CPU53.SCHED
> > 30368 ± 13% -82.7% 5247 ± 54% softirqs.CPU53.TIMER
> > 20363531 ± 5% -25.9% 15088299 ± 7% softirqs.CPU54.NET_RX
> > 36829 ± 70% +495.7% 219407 ± 9% softirqs.CPU54.SCHED
> > 30604 ± 12% -84.6% 4718 ± 63% softirqs.CPU54.TIMER
> > 20446122 ± 5% -26.0% 15123333 ± 7% softirqs.CPU55.NET_RX
> > 36648 ± 70% +496.4% 218572 ± 9% softirqs.CPU55.SCHED
> > 30692 ± 11% -84.3% 4807 ± 61% softirqs.CPU55.TIMER
> > 20460809 ± 5% -26.2% 15103440 ± 6% softirqs.CPU56.NET_RX
> > 36546 ± 71% +499.2% 218984 ± 9% softirqs.CPU56.SCHED
> > 30796 ± 11% -85.0% 4612 ± 60% softirqs.CPU56.TIMER
> > 20579289 ± 5% -25.7% 15290133 ± 7% softirqs.CPU57.NET_RX
> > 36635 ± 70% +497.9% 219057 ± 9% softirqs.CPU57.SCHED
> > 30824 ± 11% -84.1% 4887 ± 61% softirqs.CPU57.TIMER
> > 20340209 ± 5% -25.1% 15226056 ± 7% softirqs.CPU58.NET_RX
> > 36661 ± 70% +497.4% 219034 ± 9% softirqs.CPU58.SCHED
> > 30643 ± 12% -83.8% 4958 ± 67% softirqs.CPU58.TIMER
> > 20407456 ± 5% -31.2% 14032788 ± 14% softirqs.CPU59.NET_RX
> > 36581 ± 70% +445.7% 199631 ± 7% softirqs.CPU59.SCHED
> > 30705 ± 11% -84.8% 4662 ± 66% softirqs.CPU59.TIMER
> > 20391632 ± 5% -26.0% 15081271 ± 7% softirqs.CPU6.NET_RX
> > 36990 ± 70% +492.1% 219031 ± 9% softirqs.CPU6.SCHED
> > 31051 ± 10% -84.9% 4685 ± 66% softirqs.CPU6.TIMER
> > 20313212 ± 5% -25.8% 15082404 ± 7% softirqs.CPU60.NET_RX
> > 36711 ± 70% +497.9% 219487 ± 9% softirqs.CPU60.SCHED
> > 30611 ± 12% -77.9% 6758 ± 62% softirqs.CPU60.TIMER
> > 20370636 ± 5% -25.5% 15182632 ± 7% softirqs.CPU61.NET_RX
> > 36456 ± 69% +499.3% 218484 ± 9% softirqs.CPU61.SCHED
> > 30655 ± 11% -83.6% 5031 ± 56% softirqs.CPU61.TIMER
> > 20447745 ± 5% -25.8% 15175825 ± 7% softirqs.CPU62.NET_RX
> > 36878 ± 70% +491.8% 218266 ± 9% softirqs.CPU62.SCHED
> > 30914 ± 12% -83.8% 5002 ± 63% softirqs.CPU62.TIMER
> > 20585734 ± 4% -26.3% 15163240 ± 7% softirqs.CPU63.NET_RX
> > 36766 ± 69% +493.6% 218247 ± 9% softirqs.CPU63.SCHED
> > 30783 ± 11% -84.3% 4848 ± 63% softirqs.CPU63.TIMER
> > 20529909 ± 5% -26.0% 15201926 ± 7% softirqs.CPU64.NET_RX
> > 36996 ± 69% +492.3% 219138 ± 9% softirqs.CPU64.SCHED
> > 31589 ± 12% -84.3% 4944 ± 59% softirqs.CPU64.TIMER
> > 20469049 ± 5% -26.2% 15105347 ± 7% softirqs.CPU65.NET_RX
> > 36816 ± 70% +495.3% 219168 ± 9% softirqs.CPU65.SCHED
> > 31163 ± 11% -84.5% 4843 ± 62% softirqs.CPU65.TIMER
> > 20451275 ± 5% -26.1% 15117422 ± 7% softirqs.CPU66.NET_RX
> > 36621 ± 70% +498.4% 219158 ± 9% softirqs.CPU66.SCHED
> > 31143 ± 11% -84.5% 4812 ± 63% softirqs.CPU66.TIMER
> > 20535600 ± 4% -26.1% 15183175 ± 7% softirqs.CPU67.NET_RX
> > 36716 ± 70% +496.0% 218827 ± 9% softirqs.CPU67.SCHED
> > 31123 ± 11% -84.5% 4811 ± 62% softirqs.CPU67.TIMER
> > 20606503 ± 4% -25.9% 15263709 ± 7% softirqs.CPU68.NET_RX
> > 36843 ± 71% +493.6% 218696 ± 9% softirqs.CPU68.SCHED
> > 31375 ± 10% -84.5% 4850 ± 61% softirqs.CPU68.TIMER
> > 20565997 ± 4% -26.1% 15207162 ± 7% softirqs.CPU69.NET_RX
> > 36903 ± 70% +493.4% 218980 ± 9% softirqs.CPU69.SCHED
> > 31377 ± 10% -84.6% 4840 ± 63% softirqs.CPU69.TIMER
> > 20398774 ± 5% -26.3% 15029635 ± 7% softirqs.CPU7.NET_RX
> > 36601 ± 69% +495.4% 217915 ± 9% softirqs.CPU7.SCHED
> > 30791 ± 11% -84.9% 4640 ± 66% softirqs.CPU7.TIMER
> > 20433298 ± 4% -26.2% 15079571 ± 7% softirqs.CPU70.NET_RX
> > 36471 ± 70% +499.8% 218755 ± 9% softirqs.CPU70.SCHED
> > 31125 ± 11% -84.6% 4791 ± 61% softirqs.CPU70.TIMER
> > 20514519 ± 4% -26.6% 15064099 ± 7% softirqs.CPU71.NET_RX
> > 36787 ± 70% +493.4% 218283 ± 9% softirqs.CPU71.SCHED
> > 31230 ± 11% -84.3% 4914 ± 64% softirqs.CPU71.TIMER
> > 21343935 ± 3% -29.8% 14976558 ± 6% softirqs.CPU72.NET_RX
> > 47439 ± 47% +356.6% 216628 ± 8% softirqs.CPU72.SCHED
> > 33558 ± 25% -86.3% 4611 ± 62% softirqs.CPU72.TIMER
> > 21417052 ± 3% -28.3% 15349903 ± 6% softirqs.CPU73.NET_RX
> > 46794 ± 48% +362.8% 216549 ± 8% softirqs.CPU73.SCHED
> > 30149 ± 10% -83.8% 4893 ± 57% softirqs.CPU73.TIMER
> > 21398667 ± 3% -28.4% 15321598 ± 6% softirqs.CPU74.NET_RX
> > 117019 ± 29% -36.5% 74286 ± 24% softirqs.CPU74.RCU
> > 46710 ± 47% +363.0% 216256 ± 8% softirqs.CPU74.SCHED
> > 30250 ± 10% -84.3% 4756 ± 61% softirqs.CPU74.TIMER
> > 21158934 ± 6% -27.3% 15387291 ± 6% softirqs.CPU75.NET_RX
> > 116793 ± 29% -36.3% 74379 ± 24% softirqs.CPU75.RCU
> > 46862 ± 48% +361.5% 216265 ± 8% softirqs.CPU75.SCHED
> > 29925 ± 11% -83.8% 4847 ± 57% softirqs.CPU75.TIMER
> > 21327908 ± 4% -27.8% 15394469 ± 6% softirqs.CPU76.NET_RX
> > 45803 ± 47% +372.3% 216338 ± 8% softirqs.CPU76.SCHED
> > 30109 ± 10% -83.7% 4920 ± 57% softirqs.CPU76.TIMER
> > 21477994 ± 4% -28.6% 15330910 ± 6% softirqs.CPU77.NET_RX
> > 46487 ± 47% +365.6% 216466 ± 8% softirqs.CPU77.SCHED
> > 30216 ± 10% -84.4% 4726 ± 59% softirqs.CPU77.TIMER
> > 21460057 ± 4% -29.1% 15225183 ± 6% softirqs.CPU78.NET_RX
> > 46884 ± 48% +362.1% 216676 ± 8% softirqs.CPU78.SCHED
> > 30421 ± 10% -84.4% 4730 ± 60% softirqs.CPU78.TIMER
> > 21541321 ± 3% -28.9% 15319139 ± 6% softirqs.CPU79.NET_RX
> > 46875 ± 48% +362.6% 216857 ± 8% softirqs.CPU79.SCHED
> > 30230 ± 9% -82.5% 5291 ± 76% softirqs.CPU79.TIMER
> > 20493021 ± 4% -26.0% 15160469 ± 7% softirqs.CPU8.NET_RX
> > 36579 ± 70% +498.1% 218781 ± 9% softirqs.CPU8.SCHED
> > 30682 ± 11% -84.4% 4790 ± 59% softirqs.CPU8.TIMER
> > 21172364 ± 7% -27.3% 15402317 ± 6% softirqs.CPU80.NET_RX
> > 46254 ± 47% +368.0% 216468 ± 8% softirqs.CPU80.SCHED
> > 29465 ± 12% -83.2% 4953 ± 56% softirqs.CPU80.TIMER
> > 21567300 ± 3% -28.5% 15430307 ± 6% softirqs.CPU81.NET_RX
> > 46695 ± 48% +362.7% 216072 ± 8% softirqs.CPU81.SCHED
> > 30093 ± 9% -84.0% 4801 ± 60% softirqs.CPU81.TIMER
> > 21491215 ± 3% -28.3% 15417714 ± 6% softirqs.CPU82.NET_RX
> > 46133 ± 47% +368.7% 216210 ± 8% softirqs.CPU82.SCHED
> > 29921 ± 10% -76.8% 6934 ± 67% softirqs.CPU82.TIMER
> > 21460124 ± 3% -28.5% 15348172 ± 6% softirqs.CPU83.NET_RX
> > 46491 ± 48% +365.6% 216476 ± 8% softirqs.CPU83.SCHED
> > 29970 ± 10% -83.7% 4888 ± 56% softirqs.CPU83.TIMER
> > 21445385 ± 4% -30.8% 14839411 ± 9% softirqs.CPU84.NET_RX
> > 46941 ± 48% +350.7% 211545 ± 9% softirqs.CPU84.SCHED
> > 30037 ± 10% -84.3% 4718 ± 60% softirqs.CPU84.TIMER
> > 21551635 ± 3% -28.9% 15333781 ± 6% softirqs.CPU85.NET_RX
> > 46814 ± 48% +362.1% 216323 ± 8% softirqs.CPU85.SCHED
> > 29996 ± 10% -83.9% 4817 ± 59% softirqs.CPU85.TIMER
> > 21556404 ± 3% -28.6% 15382397 ± 6% softirqs.CPU86.NET_RX
> > 46934 ± 48% +360.7% 216231 ± 8% softirqs.CPU86.SCHED
> > 30053 ± 9% -83.7% 4900 ± 55% softirqs.CPU86.TIMER
> > 21549266 ± 3% -28.5% 15417462 ± 6% softirqs.CPU87.NET_RX
> > 46429 ± 49% +366.0% 216351 ± 8% softirqs.CPU87.SCHED
> > 29937 ± 10% -83.9% 4806 ± 58% softirqs.CPU87.TIMER
> > 21537823 ± 3% -28.7% 15361608 ± 6% softirqs.CPU88.NET_RX
> > 46887 ± 48% +360.1% 215706 ± 8% softirqs.CPU88.SCHED
> > 29981 ± 9% -84.1% 4781 ± 59% softirqs.CPU88.TIMER
> > 21559950 ± 3% -29.0% 15300783 ± 6% softirqs.CPU89.NET_RX
> > 46878 ± 48% +361.6% 216403 ± 8% softirqs.CPU89.SCHED
> > 30017 ± 9% -84.1% 4783 ± 59% softirqs.CPU89.TIMER
> > 20571529 ± 5% -25.7% 15286193 ± 7% softirqs.CPU9.NET_RX
> > 36585 ± 70% +498.6% 219007 ± 9% softirqs.CPU9.SCHED
> > 30724 ± 11% -83.7% 5009 ± 58% softirqs.CPU9.TIMER
> > 21116816 ± 5% -27.5% 15303522 ± 6% softirqs.CPU90.NET_RX
> > 46158 ± 48% +368.6% 216275 ± 8% softirqs.CPU90.SCHED
> > 29583 ± 11% -83.6% 4842 ± 58% softirqs.CPU90.TIMER
> > 21247319 ± 4% -27.6% 15382623 ± 6% softirqs.CPU91.NET_RX
> > 46803 ± 48% +362.0% 216238 ± 8% softirqs.CPU91.SCHED
> > 29690 ± 10% -83.6% 4856 ± 58% softirqs.CPU91.TIMER
> > 21660224 ± 3% -28.6% 15460344 ± 6% softirqs.CPU92.NET_RX
> > 46935 ± 48% +359.9% 215852 ± 8% softirqs.CPU92.SCHED
> > 30143 ± 9% -84.0% 4814 ± 59% softirqs.CPU92.TIMER
> > 21653985 ± 3% -28.9% 15405532 ± 6% softirqs.CPU93.NET_RX
> > 46972 ± 48% +359.5% 215821 ± 8% softirqs.CPU93.SCHED
> > 30034 ± 10% -83.5% 4964 ± 59% softirqs.CPU93.TIMER
> > 21602751 ± 3% -28.8% 15385232 ± 6% softirqs.CPU94.NET_RX
> > 46892 ± 48% +362.2% 216728 ± 8% softirqs.CPU94.SCHED
> > 29959 ± 10% -83.6% 4921 ± 58% softirqs.CPU94.TIMER
> > 21277877 ± 4% -28.4% 15224899 ± 6% softirqs.CPU95.NET_RX
> > 45607 ± 48% +373.5% 215952 ± 8% softirqs.CPU95.SCHED
> > 29711 ± 10% -83.0% 5065 ± 55% softirqs.CPU95.TIMER
> > 2.009e+09 -27.4% 1.46e+09 ± 5% softirqs.NET_RX
> > 3996567 ± 14% +421.3% 20835185 ± 8% softirqs.SCHED
> > 2929735 ± 3% -83.8% 474988 ± 53% softirqs.TIMER
> >
> >
> >
> >
> > [ASCII plot: tbench.throughput-MB/sec per run. Bisect-good samples hold
> > steady around 15000 MB/sec; bisect-bad ("O") samples scatter between
> > roughly 10000 and 11500 MB/sec.]
> >
> >
> >
> >
> >
> > [ASCII plot: tbench.time.user_time per run. Bisect-good samples sit
> > around 4800-5000 seconds; bisect-bad ("O") samples drop to roughly
> > 3400-4000 seconds.]
> >
> >
> >
> >
> >
> > [ASCII plot: tbench.time.system_time per run. Bisect-good samples sit
> > around 30000-32000 seconds; bisect-bad ("O") samples drop to roughly
> > 16000-20000 seconds.]
> >
> >
> >
> >
> >
> > [ASCII plot: tbench.time.percent_of_cpu_this_job_got per run.
> > Bisect-good samples sit around 5000-5200%; bisect-bad ("O") samples
> > drop to roughly 2500-3100%.]
> >
> >
> >
> >
> >
> > [ASCII plot: tbench.time.voluntary_context_switches per run.
> > Bisect-good samples sit around 1e+08-2e+08; bisect-bad ("O") samples
> > jump to roughly 1e+09-1.4e+09.]
> >
> >
> >
> >
> >
> > [ASCII plot: tbench.time.involuntary_context_switches per run.
> > Bisect-good samples sit around 1.6e+09-1.9e+09; bisect-bad ("O")
> > samples drop to roughly 2e+08-6e+08.]
> >
> >
> >
> >
> > [*] bisect-good sample
> > [O] bisect-bad sample
> >
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are
> > provided for informational purposes only. Any difference in system
> > hardware or software design or configuration may affect actual
> > performance.
> >
> >
> > ---
> > 0DAY/LKP+ Test Infrastructure                  Open Source Technology Center
> > https://lists.01.org/hyperkitty/list/[email protected]      Intel Corporation
> >
> > Thanks,
> > Oliver Sang
> >
> >
> >
> > _______________________________________________
> > LKP mailing list -- [email protected]
> > To unsubscribe send an email to [email protected]
>
--
All Rights Reversed.


Attachments:
signature.asc (499.00 B)
This is a digitally signed message part

2021-09-03 07:51:34

by Xing, Zhengjun

[permalink] [raw]
Subject: Re: [LKP] Re: [sched/fair] c722f35b51: tbench.throughput-MB/sec -29.1% regression

Hi Rik,

    Do you have time to look at this? I re-tested it on v5.13 and v5.14,
and the regression is still there. Thanks.

On 5/27/2021 10:00 AM, Rik van Riel wrote:
> Hello,
>
> I will try to take a look at this on Friday.
>
> However, even if I manage to reproduce it on one of
> the systems I have access to, I'm still not sure how
> exactly we would root cause the issue.
>
> Is it due to select_idle_sibling() doing a little bit
> more work?
>
> Is it because we invoke test_idle_cores() a little
> earlier, widening the race window with CPUs going idle,
> causing select_idle_cpu to do a lot more work?
>
> Is it a locality thing where random placement on any
> core in the LLC is somehow better than placement on
> the same core as "prev" when there is no idle core?
>
> Is it tbench running faster when the woken up task is
> placed on the runqueue behind the current task on the
> "target" cpu, even though that CPU isn't currently
> idle, because tbench happens to go to sleep fast?
>
> In other words, I'm not quite sure whether this is
> a tbench (and other similar benchmark) specific thing,
> or a kernel thing, or what instrumentation we would
> want in select_idle_sibling / select_idle_cpu for us
> to root cause issues like this more easily in the
> future...

--
Zhengjun Xing