Hello,
kernel test robot noticed a 47.4% improvement of hackbench.throughput due to commit:
commit: a53ce18cacb477dd0513c607f187d16f0fa96f71 ("sched/fair: Sanitize vruntime of entity being migrated")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480+ (Sapphire Rapids) with 256G memory
with the following parameters:
nr_threads: 100%
iterations: 4
mode: process
ipc: socket
cpufreq_governor: performance
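For context, mode=process with ipc=socket means hackbench forks groups of sender/receiver processes that exchange small messages over AF_UNIX socketpairs, nr_threads=100% sizes the task count relative to the online CPUs, and iterations=4 repeats the run. The snippet below is only a minimal illustrative sketch of one such sender/receiver pair (not hackbench's actual source); the 100-byte message size mirrors hackbench's default, and the loop count is arbitrary.

/*
 * Minimal sketch (not hackbench itself): one sender/receiver pair of
 * processes exchanging small messages over an AF_UNIX socketpair, i.e.
 * the kind of work that mode=process + ipc=socket exercises.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define MSG_SIZE 100      /* hackbench's default message size is 100 bytes */
#define NR_LOOPS 10000    /* arbitrary for this sketch */

int main(void)
{
	char buf[MSG_SIZE] = { 0 };
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
		perror("socketpair");
		return 1;
	}

	if (fork() == 0) {			/* child: sender */
		close(sv[0]);
		for (int i = 0; i < NR_LOOPS; i++)
			if (write(sv[1], buf, sizeof(buf)) != sizeof(buf))
				break;
		close(sv[1]);
		_exit(0);
	}

	close(sv[1]);				/* parent: receiver */
	long total = 0;
	ssize_t n;
	while ((n = read(sv[0], buf, sizeof(buf))) > 0)
		total += n;
	close(sv[0]);
	wait(NULL);
	printf("received %ld bytes\n", total);
	return 0;
}

The real benchmark multiplies this pattern across many groups of tasks, which is why the perf-profile sections below are dominated by unix_stream_sendmsg/recvmsg, skb allocation and free, and the scheduler wakeup path.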
In addition, the commit also shows a performance change on the following test:
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput 8.3% improvement |
| test machine | 104 threads 2 sockets (Skylake) with 192G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=socket |
| | iterations=4 |
| | mode=process |
| | nr_threads=100% |
+------------------+------------------------------------------------------------------------------------------------+
This means that the performance regression reported at the link below
has been resolved:
https://lore.kernel.org/oe-lkp/[email protected]/
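For background (a conceptual sketch only, not the code from commit a53ce18cacb4): when a task that has slept for a very long time is woken on or migrated to another runqueue, its saved vruntime can lag far behind the destination cfs_rq's min_vruntime, or even have wrapped around, and trusting that stale value distorts placement and preemption decisions. The snippet below only illustrates the general "clamp a stale vruntime against min_vruntime" idea; the structure names, the long_sleeper() helper and the 60-second threshold are illustrative assumptions, not the kernel's actual implementation.

/*
 * Conceptual sketch only -- NOT the code from commit a53ce18cacb4.
 * Idea: when placing an entity whose saved vruntime is stale (very long
 * sleep, possibly wrapped), fall back to the destination cfs_rq's
 * min_vruntime instead of trusting the saved value.
 */
#include <stdio.h>

struct se_sketch { unsigned long long vruntime; };
struct cfs_rq_sketch { unsigned long long min_vruntime; };

/* Hypothetical helper: "slept so long the saved vruntime is meaningless". */
static int long_sleeper(unsigned long long sleep_ns)
{
	return sleep_ns > 60ULL * 1000000000ULL;	/* ~60s, illustrative */
}

static void place_entity_sketch(struct cfs_rq_sketch *cfs_rq,
				struct se_sketch *se,
				unsigned long long sleep_ns)
{
	unsigned long long vruntime = cfs_rq->min_vruntime;

	if (long_sleeper(sleep_ns)) {
		se->vruntime = vruntime;	/* start over from min_vruntime */
		return;
	}

	/* Signed comparison tolerates counter wraparound. */
	if ((long long)(se->vruntime - vruntime) < 0)
		se->vruntime = vruntime;
}

int main(void)
{
	struct cfs_rq_sketch rq = { .min_vruntime = 1000000ULL };
	struct se_sketch se = { .vruntime = 5ULL };	/* stale, far behind */

	place_entity_sketch(&rq, &se, 120ULL * 1000000000ULL);
	printf("placed vruntime: %llu\n", se.vruntime);
	return 0;
}

The actual patch applies this kind of sanitizing to entities being migrated between runqueues; see the commit itself for the exact mechanics.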
Details are as below:
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-11/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-spr-2sp1/hackbench
commit:
v6.3-rc3
a53ce18cac ("sched/fair: Sanitize vruntime of entity being migrated")
v6.3-rc3 a53ce18cacb477dd0513c607f18
---------------- ---------------------------
%stddev %change %stddev
\ | \
424.18 ? 8% -28.9% 301.49 ? 4% uptime.boot
1.856e+09 ? 9% +34.5% 2.496e+09 ? 4% cpuidle..time
30711384 ? 38% +191.5% 89520869 ? 13% cpuidle..usage
2.23 ? 9% +2.2 4.45 ? 5% mpstat.cpu.all.idle%
4.63 ? 14% -1.4 3.28 ? 8% mpstat.cpu.all.irq%
0.03 ? 5% +0.0 0.05 ? 3% mpstat.cpu.all.soft%
1674225 ? 13% +81.9% 3045512 ? 26% numa-numastat.node0.local_node
1761309 ? 11% +80.9% 3187010 ? 25% numa-numastat.node0.numa_hit
1799559 ? 6% -23.5% 1377395 ? 13% numa-numastat.node1.numa_hit
3608584 ? 6% +88.5% 6802087 ? 32% vmstat.memory.cache
3439 ? 3% -31.6% 2354 ? 6% vmstat.procs.r
5879204 ? 11% -48.9% 3002210 ? 7% vmstat.system.cs
1029853 ? 14% -44.5% 571234 ? 7% vmstat.system.in
1873999 ? 69% +230.1% 6187004 ? 35% numa-meminfo.node0.FilePages
403862 ? 62% +858.3% 3870409 ? 59% numa-meminfo.node0.Inactive
403862 ? 62% +858.3% 3870302 ? 59% numa-meminfo.node0.Inactive(anon)
208254 ?104% +476.4% 1200340 ? 41% numa-meminfo.node0.Mapped
3743471 ? 37% +123.1% 8351788 ? 26% numa-meminfo.node0.MemUsed
139050 ?194% +2453.0% 3550021 ? 63% numa-meminfo.node0.Shmem
320664 ? 4% -23.7% 244546 ? 8% numa-meminfo.node1.Active
320608 ? 4% -23.7% 244527 ? 8% numa-meminfo.node1.Active(anon)
1561370 ? 77% -73.9% 406842 ? 10% numa-meminfo.node1.FilePages
585464 ? 12% -43.1% 332958 ? 21% numa-meminfo.node1.Shmem
453627 ? 15% +47.4% 668513 ? 10% hackbench.throughput
290970 ? 9% +49.8% 435960 ? 5% hackbench.throughput_avg
453627 ? 15% +47.4% 668513 ? 10% hackbench.throughput_best
227755 ? 15% +60.8% 366142 ? 3% hackbench.throughput_worst
368.43 ? 10% -33.4% 245.40 ? 5% hackbench.time.elapsed_time
368.43 ? 10% -33.4% 245.40 ? 5% hackbench.time.elapsed_time.max
6.778e+08 ? 23% -80.6% 1.313e+08 ? 17% hackbench.time.involuntary_context_switches
1134672 ? 2% -8.1% 1042275 hackbench.time.minor_page_faults
73764 ? 9% -34.2% 48540 ? 5% hackbench.time.system_time
3182 ? 14% -32.8% 2139 ? 5% hackbench.time.user_time
1.491e+09 ? 19% -64.8% 5.252e+08 ? 11% hackbench.time.voluntary_context_switches
335830 ? 2% -21.0% 265441 ? 7% meminfo.Active
335774 ? 2% -21.0% 265390 ? 7% meminfo.Active(anon)
484840 ? 8% +26.2% 611689 ? 7% meminfo.AnonPages
3440431 ? 7% +92.0% 6604853 ? 33% meminfo.Cached
3977740 ? 9% +87.3% 7449378 ? 29% meminfo.Committed_AS
878902 ? 33% +382.5% 4240713 ? 52% meminfo.Inactive
878746 ? 33% +382.6% 4240551 ? 52% meminfo.Inactive(anon)
432270 ? 64% +244.7% 1489964 ? 35% meminfo.Mapped
7065839 ? 5% +54.0% 10881593 ? 21% meminfo.Memused
729581 ? 34% +433.7% 3893991 ? 56% meminfo.Shmem
8813899 +32.3% 11664510 ? 16% meminfo.max_used_kB
10133249 ? 47% +173.5% 27716728 ? 22% turbostat.C1
0.08 ? 48% +0.3 0.42 ? 5% turbostat.C1%
19370237 ? 38% +207.9% 59632248 ? 9% turbostat.C1E
1.29 ? 11% +1.6 2.87 ? 4% turbostat.C1E%
0.84 ? 9% +0.3 1.13 ? 10% turbostat.C6%
2.19 ? 10% +96.9% 4.30 ? 5% turbostat.CPU%c1
0.30 ? 4% +26.6% 0.37 turbostat.IPC
3.866e+08 ? 24% -63.2% 1.421e+08 ? 13% turbostat.IRQ
83.42 ? 7% +19.6 102.99 ? 5% turbostat.PKG_%
445148 ? 44% +227.6% 1458182 ? 19% turbostat.POLL
0.00 ?223% +0.0 0.02 turbostat.POLL%
468176 ? 69% +231.8% 1553376 ? 35% numa-vmstat.node0.nr_file_pages
100706 ? 62% +867.6% 974455 ? 59% numa-vmstat.node0.nr_inactive_anon
51728 ?104% +486.5% 303394 ? 40% numa-vmstat.node0.nr_mapped
54.50 ? 72% -100.0% 0.00 numa-vmstat.node0.nr_mlock
34438 ?194% +2496.3% 894131 ? 63% numa-vmstat.node0.nr_shmem
100705 ? 62% +867.6% 974453 ? 59% numa-vmstat.node0.nr_zone_inactive_anon
1761001 ? 11% +81.0% 3186765 ? 25% numa-vmstat.node0.numa_hit
1673917 ? 13% +81.9% 3045267 ? 26% numa-vmstat.node0.numa_local
81949 ? 4% -24.2% 62153 ? 9% numa-vmstat.node1.nr_active_anon
392315 ? 77% -73.8% 102913 ? 10% numa-vmstat.node1.nr_file_pages
148338 ? 12% -43.1% 84442 ? 22% numa-vmstat.node1.nr_shmem
81949 ? 4% -24.2% 62153 ? 9% numa-vmstat.node1.nr_zone_active_anon
1799224 ? 6% -23.5% 1376976 ? 13% numa-vmstat.node1.numa_hit
82970 ? 3% -19.5% 66816 ? 7% proc-vmstat.nr_active_anon
121543 ? 8% +25.5% 152565 ? 7% proc-vmstat.nr_anon_pages
6381647 -1.5% 6286796 proc-vmstat.nr_dirty_background_threshold
12778899 -1.5% 12588965 proc-vmstat.nr_dirty_threshold
859674 ? 6% +92.1% 1651708 ? 33% proc-vmstat.nr_file_pages
64194818 -1.5% 63244918 proc-vmstat.nr_free_pages
220555 ? 31% +380.5% 1059808 ? 53% proc-vmstat.nr_inactive_anon
108806 ? 61% +242.0% 372136 ? 35% proc-vmstat.nr_mapped
77.67 ? 64% -100.0% 0.00 proc-vmstat.nr_mlock
51225 ? 3% +6.5% 54531 ? 3% proc-vmstat.nr_page_table_pages
181960 ? 32% +435.3% 973991 ? 56% proc-vmstat.nr_shmem
82970 ? 3% -19.5% 66816 ? 7% proc-vmstat.nr_zone_active_anon
220555 ? 31% +380.5% 1059808 ? 53% proc-vmstat.nr_zone_inactive_anon
124918 ? 29% +298.2% 497443 ? 42% proc-vmstat.numa_hint_faults_local
140811 ? 7% -30.3% 98111 ? 21% proc-vmstat.numa_pages_migrated
5979653 ? 12% -25.7% 4445814 ? 14% proc-vmstat.pgfree
140811 ? 7% -30.3% 98111 ? 21% proc-vmstat.pgmigrate_success
246196 ? 6% -16.3% 206181 ? 4% proc-vmstat.pgreuse
2815744 ? 10% -32.4% 1902336 ? 5% proc-vmstat.unevictable_pgs_scanned
0.59 ? 30% -54.4% 0.27 ? 58% sched_debug.cfs_rq:/.h_nr_running.min
39916437 ? 15% -50.7% 19661197 ? 13% sched_debug.cfs_rq:/.min_vruntime.avg
61775081 ? 22% -54.5% 28130592 ? 15% sched_debug.cfs_rq:/.min_vruntime.max
30282948 ? 11% -50.1% 15100710 ? 13% sched_debug.cfs_rq:/.min_vruntime.min
7448178 ? 33% -54.9% 3357079 ? 27% sched_debug.cfs_rq:/.min_vruntime.stddev
0.56 ? 26% -59.6% 0.22 ? 52% sched_debug.cfs_rq:/.nr_running.min
159.58 ? 10% +49.2% 238.14 ? 9% sched_debug.cfs_rq:/.removed.load_avg.max
81.37 ? 10% +52.2% 123.82 ? 9% sched_debug.cfs_rq:/.removed.runnable_avg.max
81.34 ? 10% +52.2% 123.79 ? 9% sched_debug.cfs_rq:/.removed.util_avg.max
12026 ? 10% -34.0% 7936 ? 26% sched_debug.cfs_rq:/.runnable_avg.avg
4004 ? 4% -30.3% 2791 ? 22% sched_debug.cfs_rq:/.runnable_avg.stddev
-16690851 -57.9% -7025073 sched_debug.cfs_rq:/.spread0.avg
-26445854 -55.8% -11696209 sched_debug.cfs_rq:/.spread0.min
7504656 ? 33% -54.8% 3394445 ? 26% sched_debug.cfs_rq:/.spread0.stddev
842.85 ? 6% -29.5% 593.87 ? 9% sched_debug.cfs_rq:/.util_avg.avg
278.11 ? 27% -60.8% 109.14 ? 40% sched_debug.cfs_rq:/.util_avg.min
186.96 ? 10% +51.5% 283.32 ? 8% sched_debug.cfs_rq:/.util_avg.stddev
492.57 ? 10% -40.4% 293.75 ? 18% sched_debug.cfs_rq:/.util_est_enqueued.avg
282.49 ? 9% -21.4% 222.01 ? 14% sched_debug.cfs_rq:/.util_est_enqueued.stddev
1244882 ? 7% -27.0% 909203 ? 11% sched_debug.cpu.avg_idle.max
237794 ? 8% -31.0% 164110 ? 19% sched_debug.cpu.avg_idle.stddev
231775 ? 10% -30.3% 161544 ? 8% sched_debug.cpu.clock.avg
232661 ? 10% -30.3% 162117 ? 8% sched_debug.cpu.clock.max
230602 ? 10% -30.2% 160904 ? 9% sched_debug.cpu.clock.min
224411 ? 9% -29.5% 158297 ? 8% sched_debug.cpu.clock_task.avg
228363 ? 10% -29.8% 160387 ? 8% sched_debug.cpu.clock_task.max
202929 ? 11% -32.3% 137479 ? 10% sched_debug.cpu.clock_task.min
4395639 ? 27% -66.1% 1490705 ? 15% sched_debug.cpu.nr_switches.avg
6927510 ? 26% -65.7% 2373854 ? 15% sched_debug.cpu.nr_switches.max
2411132 ? 20% -65.0% 843610 ? 4% sched_debug.cpu.nr_switches.min
1196317 ? 46% -59.2% 487745 ? 39% sched_debug.cpu.nr_switches.stddev
230571 ? 10% -30.2% 160893 ? 9% sched_debug.cpu_clk
225627 ? 10% -30.9% 155952 ? 9% sched_debug.ktime
231609 ? 10% -30.1% 161933 ? 9% sched_debug.sched_clk
0.58 ?113% +2.9 3.50 ? 65% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg
0.61 ?112% +3.0 3.57 ? 65% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_write_iter
0.48 ? 9% +0.1 0.62 ? 10% perf-profile.children.cycles-pp.task_tick_fair
0.17 ? 14% +0.1 0.31 ? 18% perf-profile.children.cycles-pp.__libc_start_main
0.17 ? 14% +0.1 0.31 ? 18% perf-profile.children.cycles-pp.main
0.17 ? 14% +0.1 0.31 ? 18% perf-profile.children.cycles-pp.run_builtin
0.16 ? 14% +0.1 0.31 ? 19% perf-profile.children.cycles-pp.cmd_record
0.58 ? 9% +0.1 0.72 ? 10% perf-profile.children.cycles-pp.tick_sched_timer
0.57 ? 9% +0.1 0.71 ? 10% perf-profile.children.cycles-pp.tick_sched_handle
0.69 ? 9% +0.1 0.84 ? 10% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.68 ? 9% +0.1 0.82 ? 10% perf-profile.children.cycles-pp.hrtimer_interrupt
0.53 ? 10% +0.1 0.68 ? 10% perf-profile.children.cycles-pp.scheduler_tick
0.62 ? 9% +0.2 0.78 ? 10% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.56 ? 10% +0.2 0.71 ? 10% perf-profile.children.cycles-pp.update_process_times
0.74 ? 10% +0.2 0.89 ? 10% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.78 ? 9% +0.2 0.94 ? 10% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.2 0.16 ? 47% perf-profile.children.cycles-pp.perf_output_copy
0.00 +0.2 0.17 ? 37% perf-profile.children.cycles-pp.cmd_sched
0.06 ? 76% +0.2 0.24 ? 35% perf-profile.children.cycles-pp.select_idle_cpu
0.00 +0.2 0.21 ? 49% perf-profile.children.cycles-pp.__output_copy
0.16 ? 14% +0.2 0.41 ? 40% perf-profile.children.cycles-pp.record__mmap_read_evlist
0.15 ? 15% +0.2 0.40 ? 40% perf-profile.children.cycles-pp.perf_mmap__push
0.11 ? 20% +0.3 0.36 ? 44% perf-profile.children.cycles-pp.record__pushfn
0.11 ? 20% +0.3 0.36 ? 44% perf-profile.children.cycles-pp.writen
0.16 ? 14% +0.3 0.42 ? 40% perf-profile.children.cycles-pp.__cmd_record
0.07 ? 21% +0.3 0.33 ? 49% perf-profile.children.cycles-pp.__generic_file_write_iter
0.00 +0.3 0.26 ? 50% perf-profile.children.cycles-pp.perf_output_sample
0.07 ? 21% +0.3 0.33 ? 49% perf-profile.children.cycles-pp.generic_perform_write
0.08 ? 18% +0.3 0.34 ? 48% perf-profile.children.cycles-pp.generic_file_write_iter
0.15 ? 77% +0.3 0.41 ? 41% perf-profile.children.cycles-pp.select_idle_sibling
0.42 ? 58% +0.5 0.92 ? 38% perf-profile.children.cycles-pp.prepare_to_wait
0.84 ? 30% +0.8 1.65 ? 54% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
41.82 ? 3% +2.3 44.10 perf-profile.children.cycles-pp.ksys_write
41.40 ? 3% +2.3 43.74 ? 2% perf-profile.children.cycles-pp.vfs_write
0.02 ?141% +0.1 0.10 ? 15% perf-profile.self.cycles-pp.select_idle_cpu
7.36 ? 4% +20.0% 8.84 ? 2% perf-stat.i.MPKI
3.852e+10 ? 4% +31.1% 5.051e+10 ? 3% perf-stat.i.branch-instructions
0.69 ? 3% -0.2 0.48 perf-stat.i.branch-miss-rate%
23.53 ? 9% -11.5 12.05 ? 5% perf-stat.i.cache-miss-rate%
1.273e+09 ? 6% +74.9% 2.227e+09 ? 4% perf-stat.i.cache-references
7571717 ? 13% -58.7% 3125278 ? 7% perf-stat.i.context-switches
2.73 ? 6% -38.2% 1.69 perf-stat.i.cpi
237085 -4.5% 226321 perf-stat.i.cpu-clock
1403312 ? 8% -51.7% 677427 ? 10% perf-stat.i.cpu-migrations
2285 ? 11% -22.0% 1781 ? 5% perf-stat.i.cycles-between-cache-misses
0.15 ? 13% -0.1 0.07 ? 14% perf-stat.i.dTLB-load-miss-rate%
73162040 ? 14% -38.4% 45064668 ? 15% perf-stat.i.dTLB-load-misses
5.431e+10 ? 4% +37.3% 7.457e+10 ? 3% perf-stat.i.dTLB-loads
3.134e+10 ? 4% +40.4% 4.4e+10 ? 3% perf-stat.i.dTLB-stores
1.971e+11 ? 4% +34.6% 2.654e+11 ? 3% perf-stat.i.instructions
0.46 ? 5% +36.2% 0.63 perf-stat.i.ipc
550.07 ? 5% -16.6% 458.59 ? 3% perf-stat.i.metric.K/sec
534.60 ? 3% +42.4% 761.34 ? 3% perf-stat.i.metric.M/sec
11814 ? 7% +18.6% 14017 ? 7% perf-stat.i.minor-faults
70.52 ? 8% -11.0 59.48 ? 3% perf-stat.i.node-load-miss-rate%
78468492 ? 13% -37.6% 48927418 ? 5% perf-stat.i.node-load-misses
11951 ? 7% +18.5% 14158 ? 7% perf-stat.i.page-faults
237085 -4.5% 226321 perf-stat.i.task-clock
5.88 ? 5% +39.5% 8.21 ? 2% perf-stat.overall.MPKI
0.46 ? 4% -0.0 0.43 perf-stat.overall.branch-miss-rate%
17.23 ? 11% -6.0 11.21 ? 7% perf-stat.overall.cache-miss-rate%
2.02 ? 3% -20.8% 1.60 perf-stat.overall.cpi
2017 ? 9% -13.4% 1747 ? 4% perf-stat.overall.cycles-between-cache-misses
0.12 ? 11% -0.1 0.06 ? 13% perf-stat.overall.dTLB-load-miss-rate%
0.50 ? 3% +26.1% 0.62 perf-stat.overall.ipc
4.183e+10 ? 4% +17.7% 4.923e+10 ? 3% perf-stat.ps.branch-instructions
1.938e+08 ? 2% +8.6% 2.105e+08 ? 2% perf-stat.ps.branch-misses
2.161e+08 ? 8% +9.5% 2.367e+08 ? 2% perf-stat.ps.cache-misses
1.26e+09 ? 5% +68.2% 2.12e+09 ? 5% perf-stat.ps.cache-references
5887952 ? 10% -48.8% 3014282 ? 7% perf-stat.ps.context-switches
949854 ? 9% -33.3% 633638 ? 10% perf-stat.ps.cpu-migrations
70607155 ? 11% -36.8% 44647998 ? 16% perf-stat.ps.dTLB-load-misses
5.895e+10 ? 4% +22.8% 7.239e+10 ? 3% perf-stat.ps.dTLB-loads
3.38e+10 ? 4% +26.0% 4.258e+10 ? 3% perf-stat.ps.dTLB-stores
2.143e+11 ? 4% +20.5% 2.581e+11 ? 3% perf-stat.ps.instructions
49.89 ? 12% +55.0% 77.32 ? 10% perf-stat.ps.major-faults
7212 ? 6% +57.8% 11382 ? 10% perf-stat.ps.minor-faults
7262 ? 6% +57.8% 11459 ? 10% perf-stat.ps.page-faults
7.903e+13 ? 8% -19.6% 6.354e+13 perf-stat.total.instructions
***************************************************************************************************
lkp-skl-fpga01: 104 threads 2 sockets (Skylake) with 192G memory
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-11/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-skl-fpga01/hackbench
commit:
v6.3-rc3
a53ce18cac ("sched/fair: Sanitize vruntime of entity being migrated")
v6.3-rc3 a53ce18cacb477dd0513c607f18
---------------- ---------------------------
%stddev %change %stddev
\ | \
2372517 ? 6% -15.3% 2009977 ? 3% cpuidle..usage
386.65 -15.1% 328.12 uptime.boot
202008 ? 65% +235.3% 677252 ? 50% numa-vmstat.node1.nr_file_pages
80400 ? 87% +121.6% 178136 ? 24% numa-vmstat.node1.nr_mapped
1.10 ? 7% +0.1 1.25 mpstat.cpu.all.idle%
0.03 +0.0 0.03 mpstat.cpu.all.soft%
19.97 +2.9 22.83 mpstat.cpu.all.usr%
814458 ? 64% +233.6% 2717000 ? 50% numa-meminfo.node1.FilePages
323256 ? 86% +122.7% 719869 ? 24% numa-meminfo.node1.Mapped
1850342 ? 41% +111.8% 3919066 ? 36% numa-meminfo.node1.MemUsed
2273 -24.1% 1725 vmstat.procs.r
5659154 ? 2% -71.9% 1591096 vmstat.system.cs
498688 -36.1% 318814 vmstat.system.in
1373950 ? 9% -27.0% 1003101 ? 4% turbostat.C1
336758 ? 7% -29.9% 236152 ? 9% turbostat.C1E
1.02 ? 6% +0.1 1.15 turbostat.C6%
1.691e+08 -47.0% 89681944 turbostat.IRQ
301212 ? 4% +47.0% 442678 ? 6% turbostat.POLL
32.72 +6.4% 34.82 turbostat.RAMWatt
81482 +2.6% 83563 proc-vmstat.nr_kernel_stack
111505 +7.6% 119994 proc-vmstat.nr_slab_unreclaimable
119548 ? 14% +72.4% 206115 ? 18% proc-vmstat.numa_hint_faults
56965 ? 30% +156.2% 145972 ? 25% proc-vmstat.numa_hint_faults_local
413798 ? 9% +31.6% 544570 ? 11% proc-vmstat.numa_pte_updates
3403745 ? 5% -23.1% 2617705 ? 3% proc-vmstat.pgalloc_normal
1498057 +7.1% 1604481 ? 2% proc-vmstat.pgfault
3087578 ? 7% -22.1% 2405993 ? 11% proc-vmstat.pgfree
119101 -5.3% 112827 ? 2% proc-vmstat.pgreuse
2587904 -16.5% 2159616 proc-vmstat.unevictable_pgs_scanned
165103 +8.3% 178791 hackbench.throughput
145699 +21.0% 176324 hackbench.throughput_avg
165103 +8.3% 178791 hackbench.throughput_best
109401 ? 3% +56.4% 171113 hackbench.throughput_worst
336.59 -17.3% 278.45 hackbench.time.elapsed_time
336.59 -17.3% 278.45 hackbench.time.elapsed_time.max
7.113e+08 ? 5% -85.6% 1.023e+08 hackbench.time.involuntary_context_switches
523124 -1.9% 513262 hackbench.time.minor_page_faults
27290 -20.3% 21749 hackbench.time.system_time
7022 -6.1% 6594 hackbench.time.user_time
1.206e+09 ? 2% -71.6% 3.428e+08 hackbench.time.voluntary_context_switches
17.78 ? 6% -22.3% 13.81 ? 2% sched_debug.cfs_rq:/.h_nr_running.avg
1.97 ? 18% -61.1% 0.77 ? 9% sched_debug.cfs_rq:/.h_nr_running.min
7.82 ? 9% +13.5% 8.87 sched_debug.cfs_rq:/.h_nr_running.stddev
22482 ? 14% +176.4% 62133 ?121% sched_debug.cfs_rq:/.load.max
3989 ? 10% +109.2% 8346 ? 85% sched_debug.cfs_rq:/.load.stddev
16432813 -28.2% 11806347 sched_debug.cfs_rq:/.min_vruntime.avg
27697257 ? 6% -49.2% 14056988 ? 4% sched_debug.cfs_rq:/.min_vruntime.max
14630804 -22.0% 11418377 sched_debug.cfs_rq:/.min_vruntime.min
1611161 ? 7% -80.4% 315248 ? 17% sched_debug.cfs_rq:/.min_vruntime.stddev
0.05 ? 6% +28.7% 0.07 ? 10% sched_debug.cfs_rq:/.nr_running.stddev
141.19 ? 44% +44.4% 203.87 sched_debug.cfs_rq:/.removed.load_avg.max
72.22 ? 44% +44.7% 104.53 sched_debug.cfs_rq:/.removed.runnable_avg.max
72.22 ? 44% +44.7% 104.53 sched_debug.cfs_rq:/.removed.util_avg.max
18306 ? 6% -23.7% 13975 ? 2% sched_debug.cfs_rq:/.runnable_avg.avg
28663 ? 12% -17.7% 23592 ? 3% sched_debug.cfs_rq:/.runnable_avg.max
4463 ? 10% -18.8% 3623 ? 4% sched_debug.cfs_rq:/.runnable_avg.stddev
-3646660 -70.7% -1069365 sched_debug.cfs_rq:/.spread0.avg
7598731 ? 30% -84.5% 1180377 ? 66% sched_debug.cfs_rq:/.spread0.max
-5400368 -72.9% -1460935 sched_debug.cfs_rq:/.spread0.min
1597450 ? 7% -80.2% 315751 ? 17% sched_debug.cfs_rq:/.spread0.stddev
166.55 ? 6% +39.0% 231.53 ? 2% sched_debug.cfs_rq:/.util_avg.stddev
683.78 ? 4% -29.5% 482.40 ? 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
69.06 ? 20% -80.6% 13.37 ? 70% sched_debug.cfs_rq:/.util_est_enqueued.min
297890 ? 3% +13.2% 337258 sched_debug.cpu.avg_idle.avg
208323 -17.2% 172593 sched_debug.cpu.clock.avg
209312 -17.4% 172901 sched_debug.cpu.clock.max
207275 -16.9% 172268 sched_debug.cpu.clock.min
598.49 ? 23% -68.8% 186.64 ? 18% sched_debug.cpu.clock.stddev
206181 -17.1% 170875 sched_debug.cpu.clock_task.avg
207543 -17.4% 171519 sched_debug.cpu.clock_task.max
189665 -18.2% 155215 sched_debug.cpu.clock_task.min
1847 ? 4% -14.4% 1580 sched_debug.cpu.clock_task.stddev
0.00 ? 23% -68.1% 0.00 ? 18% sched_debug.cpu.next_balance.stddev
17.76 ? 6% -22.1% 13.84 ? 2% sched_debug.cpu.nr_running.avg
1.78 ? 20% -51.2% 0.87 ? 17% sched_debug.cpu.nr_running.min
8951364 ? 2% -80.1% 1785353 sched_debug.cpu.nr_switches.avg
13121147 ? 6% -83.0% 2229280 ? 6% sched_debug.cpu.nr_switches.max
6675136 ? 5% -77.1% 1528788 sched_debug.cpu.nr_switches.min
1223176 ? 12% -88.8% 136498 ? 18% sched_debug.cpu.nr_switches.stddev
207253 -16.9% 172251 sched_debug.cpu_clk
204122 -17.1% 169119 sched_debug.ktime
208018 -16.8% 173016 sched_debug.sched_clk
8.95 -18.4% 7.30 perf-stat.i.MPKI
2.198e+10 -11.5% 1.944e+10 perf-stat.i.branch-instructions
2.08 +0.1 2.15 perf-stat.i.branch-miss-rate%
4.454e+08 -6.3% 4.174e+08 perf-stat.i.branch-misses
14.80 ? 3% +8.3 23.06 perf-stat.i.cache-miss-rate%
1.17e+08 ? 3% +47.6% 1.727e+08 perf-stat.i.cache-misses
1.011e+09 ? 2% -25.2% 7.557e+08 perf-stat.i.cache-references
7375449 ? 4% -78.4% 1596261 perf-stat.i.context-switches
2.54 +6.7% 2.71 perf-stat.i.cpi
105262 -1.0% 104246 perf-stat.i.cpu-clock
454058 ? 9% -72.1% 126812 perf-stat.i.cpu-migrations
3793 ? 9% -56.3% 1658 perf-stat.i.cycles-between-cache-misses
1.11 ? 2% -0.6 0.48 perf-stat.i.dTLB-load-miss-rate%
3.656e+08 ? 2% -61.2% 1.42e+08 perf-stat.i.dTLB-load-misses
3.186e+10 -7.6% 2.944e+10 perf-stat.i.dTLB-loads
0.21 ? 3% -0.1 0.12 ? 4% perf-stat.i.dTLB-store-miss-rate%
40561226 ? 3% -46.2% 21804659 ? 4% perf-stat.i.dTLB-store-misses
1.909e+10 -7.3% 1.77e+10 perf-stat.i.dTLB-stores
44.18 +13.8 57.95 perf-stat.i.iTLB-load-miss-rate%
82455140 ? 2% -28.0% 59395280 ? 2% perf-stat.i.iTLB-load-misses
1.084e+08 ? 3% -59.8% 43544511 perf-stat.i.iTLB-loads
1.143e+11 -8.8% 1.042e+11 perf-stat.i.instructions
1584 ? 2% +11.9% 1773 ? 2% perf-stat.i.instructions-per-iTLB-miss
0.41 -8.9% 0.37 perf-stat.i.ipc
1441 +4.7% 1509 perf-stat.i.metric.K/sec
706.65 -8.5% 646.73 perf-stat.i.metric.M/sec
4941 +6.8% 5276 ? 3% perf-stat.i.minor-faults
41.57 -18.4 23.19 perf-stat.i.node-load-miss-rate%
24494497 ? 2% +66.8% 40863737 perf-stat.i.node-loads
33.06 ? 3% -24.8 8.27 ? 2% perf-stat.i.node-store-miss-rate%
4827109 ? 11% -40.4% 2878491 ? 2% perf-stat.i.node-store-misses
18618707 ? 2% +87.4% 34896795 perf-stat.i.node-stores
4971 +6.7% 5305 ? 3% perf-stat.i.page-faults
105262 -1.0% 104246 perf-stat.i.task-clock
8.08 -10.7% 7.22 perf-stat.overall.MPKI
2.06 +0.1 2.14 perf-stat.overall.branch-miss-rate%
14.44 ? 3% +8.5 22.91 perf-stat.overall.cache-miss-rate%
2.56 +5.5% 2.70 perf-stat.overall.cpi
2194 -25.6% 1632 perf-stat.overall.cycles-between-cache-misses
0.96 ? 2% -0.5 0.48 perf-stat.overall.dTLB-load-miss-rate%
0.19 ? 4% -0.1 0.12 ? 4% perf-stat.overall.dTLB-store-miss-rate%
43.98 +13.8 57.75 perf-stat.overall.iTLB-load-miss-rate%
1485 ? 2% +18.6% 1762 ? 2% perf-stat.overall.instructions-per-iTLB-miss
0.39 -5.2% 0.37 perf-stat.overall.ipc
28.27 ? 4% -5.7 22.53 perf-stat.overall.node-load-miss-rate%
14.63 ? 7% -7.2 7.40 ? 2% perf-stat.overall.node-store-miss-rate%
2.087e+10 -7.0% 1.94e+10 perf-stat.ps.branch-instructions
4.305e+08 -3.4% 4.158e+08 perf-stat.ps.branch-misses
1.276e+08 +34.7% 1.718e+08 perf-stat.ps.cache-misses
8.838e+08 -15.1% 7.501e+08 perf-stat.ps.cache-references
5683501 ? 2% -72.2% 1577596 perf-stat.ps.context-switches
318284 ? 8% -61.2% 123552 perf-stat.ps.cpu-migrations
2.975e+08 ? 2% -52.6% 1.411e+08 perf-stat.ps.dTLB-load-misses
3.063e+10 -4.2% 2.935e+10 perf-stat.ps.dTLB-loads
34785985 ? 4% -37.8% 21643894 ? 4% perf-stat.ps.dTLB-store-misses
1.837e+10 -4.0% 1.764e+10 perf-stat.ps.dTLB-stores
73690177 ? 2% -19.9% 59021610 ? 2% perf-stat.ps.iTLB-load-misses
93898143 ? 3% -54.0% 43180114 perf-stat.ps.iTLB-loads
1.094e+11 -5.0% 1.039e+11 perf-stat.ps.instructions
18.79 ? 9% +19.9% 22.54 ? 3% perf-stat.ps.major-faults
3833 +29.5% 4963 ? 2% perf-stat.ps.minor-faults
29621399 ? 2% +37.6% 40757022 perf-stat.ps.node-loads
3907385 ? 9% -28.9% 2776748 ? 2% perf-stat.ps.node-store-misses
22752383 +52.6% 34730720 perf-stat.ps.node-stores
3852 +29.4% 4985 ? 2% perf-stat.ps.page-faults
3.69e+13 -21.4% 2.899e+13 perf-stat.total.instructions
16.80 ? 3% -14.7 2.11 ? 5% perf-profile.calltrace.cycles-pp.unix_stream_data_wait.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
15.81 ? 5% -14.0 1.84 ? 4% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_write_iter.vfs_write
16.18 ? 4% -13.7 2.46 ? 2% perf-profile.calltrace.cycles-pp.sock_def_readable.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write
15.12 ? 5% -13.5 1.63 ? 6% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_write_iter
14.68 ? 4% -13.1 1.57 ? 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg
14.40 ? 5% -12.9 1.54 ? 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
14.47 ? 3% -12.7 1.81 ? 5% perf-profile.calltrace.cycles-pp.schedule_timeout.unix_stream_data_wait.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
14.25 ? 3% -12.5 1.78 ? 5% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.unix_stream_data_wait.unix_stream_read_generic.unix_stream_recvmsg
14.04 ? 3% -12.3 1.74 ? 5% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.unix_stream_data_wait.unix_stream_read_generic
6.92 ? 9% -6.9 0.00 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
7.37 ? 4% -6.6 0.73 ? 5% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
41.52 -6.2 35.38 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
6.10 ? 9% -6.1 0.00 perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.92 ? 9% -5.9 0.00 perf-profile.calltrace.cycles-pp.schedule.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
5.78 ? 9% -5.8 0.00 perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
6.18 ? 4% -5.6 0.62 ? 5% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
5.84 ? 4% -5.2 0.62 ? 6% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.unix_stream_data_wait
43.62 -5.0 38.58 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
9.66 ? 5% -4.8 4.84 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
27.44 ? 2% -2.8 24.63 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write.do_syscall_64
50.46 -2.7 47.80 perf-profile.calltrace.cycles-pp.__libc_write
29.19 ? 2% -2.4 26.81 perf-profile.calltrace.cycles-pp.sock_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.62 ? 2% -1.7 28.90 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
28.56 -1.6 26.94 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.vfs_read
28.92 -1.6 27.34 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.vfs_read.ksys_read
30.08 -1.4 28.64 perf-profile.calltrace.cycles-pp.sock_recvmsg.sock_read_iter.vfs_read.ksys_read.do_syscall_64
31.28 -1.4 29.87 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
32.85 -1.3 31.55 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
30.65 -1.2 29.46 perf-profile.calltrace.cycles-pp.sock_read_iter.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
33.59 -1.2 32.44 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
0.90 ? 2% -0.3 0.60 perf-profile.calltrace.cycles-pp.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.90 ? 3% -0.0 0.87 perf-profile.calltrace.cycles-pp.aa_sk_perm.security_socket_sendmsg.sock_write_iter.vfs_write.ksys_write
0.72 ? 3% +0.1 0.79 ? 2% perf-profile.calltrace.cycles-pp.aa_sk_perm.security_socket_recvmsg.sock_recvmsg.sock_read_iter.vfs_read
1.07 ? 3% +0.1 1.15 ? 2% perf-profile.calltrace.cycles-pp.security_socket_sendmsg.sock_write_iter.vfs_write.ksys_write.do_syscall_64
0.55 ? 4% +0.1 0.64 ? 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
0.93 ? 3% +0.2 1.09 perf-profile.calltrace.cycles-pp.security_socket_recvmsg.sock_recvmsg.sock_read_iter.vfs_read.ksys_read
0.76 ? 5% +0.2 0.98 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.60 ? 7% +0.2 0.84 perf-profile.calltrace.cycles-pp.apparmor_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
0.67 ? 4% +0.2 0.92 ? 3% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.79 ? 3% +0.4 1.16 perf-profile.calltrace.cycles-pp.memcg_slab_post_alloc_hook.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.34 ? 70% +0.4 0.72 ? 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
0.65 ? 3% +0.4 1.03 perf-profile.calltrace.cycles-pp.memcg_slab_post_alloc_hook.__kmem_cache_alloc_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
0.76 ? 2% +0.4 1.21 perf-profile.calltrace.cycles-pp.__check_object_size.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_write_iter.vfs_write
0.56 ? 5% +0.5 1.02 ? 10% perf-profile.calltrace.cycles-pp.copyin._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_write_iter
0.00 +0.5 0.53 ? 4% perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.__kmem_cache_alloc_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
0.00 +0.6 0.55 ? 3% perf-profile.calltrace.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.00 +0.6 0.56 ? 4% perf-profile.calltrace.cycles-pp.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state.consume_skb
0.00 +0.6 0.56 ? 3% perf-profile.calltrace.cycles-pp.__get_obj_cgroup_from_memcg.get_obj_cgroup_from_current.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
0.00 +0.6 0.61 ? 2% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.63 ? 9% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_queue_tail.unix_stream_sendmsg.sock_write_iter.vfs_write
0.86 ? 4% +0.6 1.49 ? 7% perf-profile.calltrace.cycles-pp._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_write_iter.vfs_write
1.66 ? 5% +0.7 2.33 ? 2% perf-profile.calltrace.cycles-pp.__kmem_cache_free.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
4.27 +0.7 4.95 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
0.08 ?223% +0.7 0.78 ? 6% perf-profile.calltrace.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_write_iter.vfs_write
0.00 +0.7 0.70 ? 9% perf-profile.calltrace.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write
0.00 +0.7 0.75 perf-profile.calltrace.cycles-pp.__build_skb_around.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
1.47 ? 3% +0.8 2.24 perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_read
0.09 ?223% +0.8 0.87 ? 13% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg
0.46 ? 46% +0.8 1.27 ? 4% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write
40.49 +0.8 41.33 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
1.58 ? 3% +0.9 2.48 perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
1.27 ? 3% +0.9 2.18 ? 2% perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_write
1.44 ? 3% +0.9 2.37 perf-profile.calltrace.cycles-pp.__check_object_size.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.00 +1.1 1.08 ? 23% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_partial_node.___slab_alloc.__kmem_cache_alloc_node
0.00 +1.1 1.09 ? 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__unfreeze_partials.skb_release_data.consume_skb
0.92 ? 6% +1.1 2.01 ? 2% perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.consume_skb.unix_stream_read_generic
1.10 ? 5% +1.1 2.23 perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
0.00 +1.1 1.13 ? 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_partial_node.___slab_alloc.__kmem_cache_alloc_node.__kmalloc_node_track_caller
0.00 +1.1 1.14 ? 21% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__unfreeze_partials.skb_release_data.consume_skb.unix_stream_read_generic
1.15 ? 5% +1.1 2.30 perf-profile.calltrace.cycles-pp.skb_release_head_state.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
1.83 ? 3% +1.2 3.00 ? 3% perf-profile.calltrace.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write
0.00 +1.2 1.20 ? 26% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_partial_node.___slab_alloc.kmem_cache_alloc_node
0.00 +1.2 1.24 ? 26% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__unfreeze_partials.unix_stream_read_generic.unix_stream_recvmsg
0.00 +1.3 1.25 ? 25% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_partial_node.___slab_alloc.kmem_cache_alloc_node.__alloc_skb
0.00 +1.3 1.29 ? 25% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__unfreeze_partials.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.00 +1.3 1.30 ? 19% perf-profile.calltrace.cycles-pp.__unfreeze_partials.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
1.38 ? 7% +1.4 2.80 perf-profile.calltrace.cycles-pp.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.00 +1.4 1.42 ? 19% perf-profile.calltrace.cycles-pp.get_partial_node.___slab_alloc.__kmem_cache_alloc_node.__kmalloc_node_track_caller.kmalloc_reserve
0.00 +1.4 1.44 ? 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__libc_read
0.00 +1.4 1.44 ? 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__libc_write
0.00 +1.5 1.46 ? 23% perf-profile.calltrace.cycles-pp.__unfreeze_partials.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.08 ?223% +1.5 1.55 ? 2% perf-profile.calltrace.cycles-pp.__slab_free.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.00 +1.5 1.54 ? 22% perf-profile.calltrace.cycles-pp.get_partial_node.___slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
1.08 ? 7% +1.6 2.67 ? 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
1.16 ? 7% +1.6 2.78 ? 2% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.00 +1.7 1.68 perf-profile.calltrace.cycles-pp.__slab_free.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
1.42 ? 6% +1.8 3.18 ? 2% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
0.00 +1.8 1.85 ? 15% perf-profile.calltrace.cycles-pp.___slab_alloc.__kmem_cache_alloc_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
0.00 +2.0 1.98 ? 18% perf-profile.calltrace.cycles-pp.___slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
3.28 ? 4% +2.3 5.56 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_write
3.36 ? 3% +2.4 5.74 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_read
48.59 +2.5 51.12 perf-profile.calltrace.cycles-pp.__libc_read
3.09 ? 4% +2.7 5.80 perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
3.12 ? 4% +2.7 5.86 ? 2% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
3.15 ? 4% +2.8 5.91 ? 2% perf-profile.calltrace.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
2.29 ? 6% +2.9 5.19 ? 6% perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
2.06 ? 7% +2.9 4.98 ? 5% perf-profile.calltrace.cycles-pp.__kmem_cache_alloc_node.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
2.30 ? 7% +3.1 5.41 ? 4% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
2.68 ? 6% +3.3 6.02 ? 4% perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
2.29 ? 9% +4.2 6.47 ? 2% perf-profile.calltrace.cycles-pp.skb_release_data.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
3.65 ? 7% +5.4 9.06 perf-profile.calltrace.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
5.79 ? 6% +7.2 12.99 ? 4% perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_write_iter
5.91 ? 6% +7.3 13.18 ? 4% perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_write_iter.vfs_write
6.55 ? 6% +8.1 14.68 ? 3% perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_write_iter.vfs_write.ksys_write
20.60 ? 5% -18.1 2.47 ? 4% perf-profile.children.cycles-pp.__schedule
20.44 ? 5% -18.0 2.43 ? 4% perf-profile.children.cycles-pp.schedule
16.85 ? 3% -14.7 2.11 ? 5% perf-profile.children.cycles-pp.unix_stream_data_wait
16.19 ? 4% -13.7 2.47 perf-profile.children.cycles-pp.sock_def_readable
15.84 ? 4% -13.7 2.12 ? 3% perf-profile.children.cycles-pp.__wake_up_common_lock
15.13 ? 5% -13.2 1.90 ? 4% perf-profile.children.cycles-pp.__wake_up_common
14.70 ? 4% -12.9 1.84 ? 4% perf-profile.children.cycles-pp.autoremove_wake_function
14.43 ? 4% -12.6 1.80 ? 4% perf-profile.children.cycles-pp.try_to_wake_up
14.49 ? 3% -12.3 2.21 ? 3% perf-profile.children.cycles-pp.schedule_timeout
8.67 ? 8% -7.9 0.77 ? 4% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
7.40 ? 4% -6.5 0.86 ? 4% perf-profile.children.cycles-pp.ttwu_do_activate
79.94 -6.4 73.55 perf-profile.children.cycles-pp.do_syscall_64
6.35 ? 8% -6.1 0.28 ? 10% perf-profile.children.cycles-pp.exit_to_user_mode_loop
6.20 ? 4% -5.5 0.74 ? 3% perf-profile.children.cycles-pp.enqueue_task_fair
5.86 ? 4% -5.1 0.75 ? 5% perf-profile.children.cycles-pp.dequeue_task_fair
13.96 ? 4% -4.1 9.85 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
84.26 -4.1 80.14 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
4.41 ? 6% -3.9 0.52 ? 3% perf-profile.children.cycles-pp.pick_next_task_fair
4.28 ? 4% -3.8 0.50 ? 5% perf-profile.children.cycles-pp.update_curr
3.95 ? 6% -3.5 0.47 ? 4% perf-profile.children.cycles-pp.switch_mm_irqs_off
3.65 ? 4% -3.2 0.43 ? 2% perf-profile.children.cycles-pp.enqueue_entity
3.64 ? 6% -3.2 0.48 ? 2% perf-profile.children.cycles-pp.update_load_avg
27.48 ? 2% -2.8 24.70 perf-profile.children.cycles-pp.unix_stream_sendmsg
3.11 ? 4% -2.7 0.39 ? 4% perf-profile.children.cycles-pp.dequeue_entity
50.81 -2.6 48.19 perf-profile.children.cycles-pp.__libc_write
4.67 ? 5% -2.5 2.16 perf-profile.children.cycles-pp._raw_spin_lock
29.21 ? 2% -2.4 26.82 perf-profile.children.cycles-pp.sock_write_iter
2.32 ? 11% -2.0 0.29 ? 4% perf-profile.children.cycles-pp.select_task_rq
2.12 ? 5% -1.8 0.27 ? 7% perf-profile.children.cycles-pp.prepare_task_switch
2.10 ? 6% -1.8 0.28 ? 2% perf-profile.children.cycles-pp.switch_fpu_return
1.97 ? 13% -1.7 0.25 ? 4% perf-profile.children.cycles-pp.select_task_rq_fair
30.65 ? 2% -1.7 28.94 perf-profile.children.cycles-pp.vfs_write
28.60 -1.6 26.99 perf-profile.children.cycles-pp.unix_stream_read_generic
28.93 -1.6 27.35 perf-profile.children.cycles-pp.unix_stream_recvmsg
1.65 ? 6% -1.5 0.19 ? 3% perf-profile.children.cycles-pp.set_next_entity
30.10 -1.4 28.67 perf-profile.children.cycles-pp.sock_recvmsg
1.61 ? 4% -1.4 0.19 ? 6% perf-profile.children.cycles-pp.reweight_entity
1.63 ? 7% -1.4 0.22 ? 3% perf-profile.children.cycles-pp.restore_fpregs_from_fpstate
31.30 -1.4 29.90 perf-profile.children.cycles-pp.ksys_write
1.53 ? 6% -1.3 0.20 ? 7% perf-profile.children.cycles-pp.__switch_to_asm
32.87 -1.3 31.59 perf-profile.children.cycles-pp.vfs_read
30.66 -1.2 29.47 perf-profile.children.cycles-pp.sock_read_iter
33.60 -1.1 32.46 perf-profile.children.cycles-pp.ksys_read
1.01 ? 8% -0.9 0.06 ? 7% perf-profile.children.cycles-pp.put_prev_entity
1.04 ? 5% -0.9 0.13 ? 5% perf-profile.children.cycles-pp.__switch_to
0.97 ? 6% -0.9 0.09 ? 7% perf-profile.children.cycles-pp.check_preempt_curr
1.08 ? 5% -0.9 0.22 ? 3% perf-profile.children.cycles-pp.prepare_to_wait
0.92 ? 27% -0.9 0.07 ? 8% perf-profile.children.cycles-pp.select_idle_sibling
0.92 ? 7% -0.8 0.09 ? 9% perf-profile.children.cycles-pp.___perf_sw_event
0.94 ? 3% -0.8 0.12 ? 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.94 ? 5% -0.8 0.13 ? 3% perf-profile.children.cycles-pp.update_rq_clock
0.92 ? 5% -0.8 0.16 ? 4% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.83 ? 6% -0.8 0.08 ? 6% perf-profile.children.cycles-pp.check_preempt_wakeup
0.88 ? 4% -0.7 0.14 ? 5% perf-profile.children.cycles-pp.__update_load_avg_se
0.95 ? 2% -0.7 0.27 ? 5% perf-profile.children.cycles-pp.update_cfs_group
0.70 ? 5% -0.6 0.08 ? 7% perf-profile.children.cycles-pp.update_min_vruntime
0.68 ? 6% -0.6 0.08 ? 6% perf-profile.children.cycles-pp.os_xsave
0.61 ? 6% -0.5 0.08 ? 6% perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
0.52 ? 6% -0.5 0.02 ? 99% perf-profile.children.cycles-pp.pick_next_entity
0.48 ? 3% -0.4 0.07 ? 9% perf-profile.children.cycles-pp.__calc_delta
0.46 ? 3% -0.4 0.08 ? 6% perf-profile.children.cycles-pp.perf_tp_event
0.49 ? 15% -0.4 0.12 ? 4% perf-profile.children.cycles-pp.wake_affine
0.49 ? 5% -0.4 0.12 ? 3% perf-profile.children.cycles-pp.asm_sysvec_reschedule_ipi
0.38 ? 5% -0.3 0.05 ? 8% perf-profile.children.cycles-pp.finish_task_switch
1.08 ? 2% -0.3 0.76 perf-profile.children.cycles-pp.__cond_resched
0.36 ? 5% -0.3 0.05 ? 8% perf-profile.children.cycles-pp.sched_clock_cpu
0.93 ? 2% -0.3 0.62 perf-profile.children.cycles-pp.mutex_lock
0.34 ? 9% -0.3 0.03 ? 70% perf-profile.children.cycles-pp.cpuacct_charge
0.48 ? 4% -0.3 0.19 ? 3% perf-profile.children.cycles-pp.__list_del_entry_valid
0.32 ? 6% -0.3 0.04 ? 44% perf-profile.children.cycles-pp.native_sched_clock
0.44 ? 2% -0.2 0.19 ? 3% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.28 ? 5% -0.2 0.03 ? 70% perf-profile.children.cycles-pp.rb_erase
0.32 ? 15% -0.2 0.08 ? 7% perf-profile.children.cycles-pp.task_h_load
0.18 ? 5% -0.1 0.07 ? 7% perf-profile.children.cycles-pp.__list_add_valid
0.13 ? 9% -0.1 0.04 ? 44% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.23 ? 2% +0.0 0.25 ? 2% perf-profile.children.cycles-pp.mutex_unlock
0.06 +0.0 0.08 ? 4% perf-profile.children.cycles-pp.put_pid
0.16 ? 4% +0.0 0.18 ? 2% perf-profile.children.cycles-pp.__x64_sys_write
0.11 ? 6% +0.0 0.14 ? 5% perf-profile.children.cycles-pp.check_stack_object
0.09 ? 10% +0.0 0.12 ? 7% perf-profile.children.cycles-pp.__mod_memcg_state
0.05 ? 7% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.unix_scm_to_skb
0.11 ? 6% +0.0 0.15 ? 6% perf-profile.children.cycles-pp.should_failslab
0.06 ? 6% +0.0 0.10 ? 4% perf-profile.children.cycles-pp.scm_recv
0.13 ? 5% +0.0 0.17 ? 2% perf-profile.children.cycles-pp.__x64_sys_read
0.11 ? 16% +0.0 0.16 ? 15% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.05 perf-profile.children.cycles-pp.skb_put
0.02 ? 99% +0.1 0.08 ? 4% perf-profile.children.cycles-pp.try_charge_memcg
0.12 +0.1 0.18 ? 2% perf-profile.children.cycles-pp.apparmor_socket_getpeersec_dgram
0.08 ? 8% +0.1 0.14 ? 4% perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
0.12 ? 3% +0.1 0.18 ? 3% perf-profile.children.cycles-pp.apparmor_socket_recvmsg
0.18 ? 5% +0.1 0.24 ? 4% perf-profile.children.cycles-pp.kmalloc_size_roundup
0.13 ? 8% +0.1 0.20 ? 3% perf-profile.children.cycles-pp.memcg_account_kmem
0.27 ? 2% +0.1 0.34 ? 2% perf-profile.children.cycles-pp.security_socket_getpeersec_dgram
0.00 +0.1 0.07 ? 10% perf-profile.children.cycles-pp.rw_verify_area
0.00 +0.1 0.07 ? 18% perf-profile.children.cycles-pp.newidle_balance
0.24 ? 5% +0.1 0.31 ? 2% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.14 ? 7% +0.1 0.21 ? 5% perf-profile.children.cycles-pp.aa_file_perm
0.00 +0.1 0.08 ? 16% perf-profile.children.cycles-pp.load_balance
0.11 ? 8% +0.1 0.18 ? 4% perf-profile.children.cycles-pp.apparmor_socket_sendmsg
0.43 ? 5% +0.1 0.50 ? 3% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
1.07 ? 3% +0.1 1.16 ? 2% perf-profile.children.cycles-pp.security_socket_sendmsg
0.02 ?141% +0.1 0.11 ? 7% perf-profile.children.cycles-pp.refill_stock
0.36 ? 3% +0.1 0.46 ? 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.30 ? 5% +0.1 0.40 ? 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.14 ? 10% +0.1 0.25 ? 3% perf-profile.children.cycles-pp.scheduler_tick
0.35 ? 3% +0.1 0.46 ? 3% perf-profile.children.cycles-pp.hrtimer_interrupt
0.10 ? 12% +0.1 0.20 ? 3% perf-profile.children.cycles-pp.task_tick_fair
0.35 ? 2% +0.1 0.46 ? 2% perf-profile.children.cycles-pp.wait_for_unix_gc
0.20 ? 7% +0.1 0.31 ? 3% perf-profile.children.cycles-pp.update_process_times
0.24 ? 6% +0.1 0.35 ? 3% perf-profile.children.cycles-pp.tick_sched_timer
0.40 ? 4% +0.1 0.51 ? 2% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.21 ? 7% +0.1 0.32 ? 3% perf-profile.children.cycles-pp.tick_sched_handle
0.00 +0.1 0.12 ? 11% perf-profile.children.cycles-pp.ordered_events__queue
0.00 +0.1 0.12 ? 11% perf-profile.children.cycles-pp.queue_event
0.04 ? 73% +0.1 0.16 ? 9% perf-profile.children.cycles-pp.__cmd_record
0.00 +0.1 0.12 ? 8% perf-profile.children.cycles-pp.process_simple
0.44 ? 4% +0.1 0.56 ? 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.1 0.13 ? 8% perf-profile.children.cycles-pp.record__finish_output
0.00 +0.1 0.13 ? 8% perf-profile.children.cycles-pp.perf_session__process_events
0.00 +0.1 0.13 ? 8% perf-profile.children.cycles-pp.reader__read_event
0.40 ? 3% +0.1 0.53 ? 2% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.14 ? 4% perf-profile.children.cycles-pp.put_cpu_partial
0.83 ? 4% +0.1 0.97 perf-profile.children.cycles-pp.__virt_addr_valid
0.17 ? 7% +0.2 0.32 ? 2% perf-profile.children.cycles-pp.refill_obj_stock
0.24 ? 6% +0.2 0.40 ? 3% perf-profile.children.cycles-pp.kmalloc_slab
0.94 ? 2% +0.2 1.10 perf-profile.children.cycles-pp.security_socket_recvmsg
0.05 ? 48% +0.2 0.24 ? 5% perf-profile.children.cycles-pp.fsnotify_perm
0.28 ? 5% +0.2 0.48 ? 2% perf-profile.children.cycles-pp.__might_fault
1.81 ? 4% +0.2 2.05 perf-profile.children.cycles-pp.security_file_permission
0.46 ? 6% +0.3 0.71 ? 9% perf-profile.children.cycles-pp.skb_queue_tail
1.00 ? 4% +0.3 1.27 ? 2% perf-profile.children.cycles-pp.__fget_light
0.62 ? 5% +0.3 0.89 perf-profile.children.cycles-pp.__check_heap_object
0.41 ? 4% +0.3 0.69 perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.41 ? 6% +0.3 0.70 ? 2% perf-profile.children.cycles-pp.obj_cgroup_charge
1.41 ? 5% +0.3 1.71 ? 2% perf-profile.children.cycles-pp.apparmor_file_permission
0.49 ? 4% +0.3 0.79 ? 2% perf-profile.children.cycles-pp.__get_obj_cgroup_from_memcg
0.26 ? 4% +0.3 0.56 ? 4% perf-profile.children.cycles-pp.unix_write_space
0.25 ? 9% +0.3 0.56 ? 3% perf-profile.children.cycles-pp.skb_unlink
0.46 ? 6% +0.3 0.78 ? 6% perf-profile.children.cycles-pp.skb_set_owner_w
1.06 ? 4% +0.3 1.39 ? 2% perf-profile.children.cycles-pp.__fdget_pos
0.70 ? 3% +0.3 1.03 ? 2% perf-profile.children.cycles-pp.__might_resched
0.58 ? 4% +0.5 1.04 ? 10% perf-profile.children.cycles-pp.copyin
0.26 ? 22% +0.5 0.75 perf-profile.children.cycles-pp.__build_skb_around
0.99 ? 3% +0.5 1.49 ? 3% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
0.88 ? 4% +0.6 1.46 ? 2% perf-profile.children.cycles-pp.mod_objcg_state
0.88 ? 4% +0.6 1.52 ? 7% perf-profile.children.cycles-pp._copy_from_iter
1.66 ? 4% +0.7 2.34 perf-profile.children.cycles-pp.__kmem_cache_free
1.45 ? 3% +0.8 2.21 perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.72 ? 6% +0.9 1.62 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
1.58 ? 3% +0.9 2.48 perf-profile.children.cycles-pp.simple_copy_to_iter
0.92 ? 6% +1.1 2.02 perf-profile.children.cycles-pp.sock_wfree
1.13 ? 5% +1.1 2.26 perf-profile.children.cycles-pp.unix_destruct_scm
1.16 ? 5% +1.1 2.31 perf-profile.children.cycles-pp.skb_release_head_state
1.84 ? 3% +1.2 3.02 ? 3% perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
2.41 ? 2% +1.4 3.77 perf-profile.children.cycles-pp.__check_object_size
1.39 ? 7% +1.4 2.82 perf-profile.children.cycles-pp.kmem_cache_free
1.16 ? 7% +1.6 2.79 ? 2% perf-profile.children.cycles-pp.copyout
1.43 ? 6% +1.8 3.20 ? 2% perf-profile.children.cycles-pp._copy_to_iter
1.66 ? 6% +2.0 3.71 ? 4% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.24 ? 4% +2.4 5.69 perf-profile.children.cycles-pp.__entry_text_start
49.06 +2.5 51.55 perf-profile.children.cycles-pp.__libc_read
0.62 ? 23% +2.6 3.24 perf-profile.children.cycles-pp.__slab_free
3.09 ? 4% +2.7 5.80 perf-profile.children.cycles-pp.__skb_datagram_iter
0.04 ?223% +2.7 2.77 ? 21% perf-profile.children.cycles-pp.__unfreeze_partials
3.12 ? 4% +2.7 5.86 ? 2% perf-profile.children.cycles-pp.skb_copy_datagram_iter
3.16 ? 4% +2.8 5.92 perf-profile.children.cycles-pp.unix_stream_read_actor
2.36 ? 5% +2.9 5.29 ? 6% perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.04 ?223% +3.0 3.00 ? 20% perf-profile.children.cycles-pp.get_partial_node
2.14 ? 7% +3.0 5.10 ? 5% perf-profile.children.cycles-pp.__kmem_cache_alloc_node
2.12 ? 10% +3.0 5.12 ? 21% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
2.38 ? 7% +3.2 5.56 ? 4% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
2.70 ? 6% +3.3 6.04 ? 4% perf-profile.children.cycles-pp.kmalloc_reserve
0.18 ?102% +3.7 3.83 ? 17% perf-profile.children.cycles-pp.___slab_alloc
2.24 ? 6% +4.0 6.25 ? 17% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
2.30 ? 9% +4.2 6.49 ? 2% perf-profile.children.cycles-pp.skb_release_data
6.70 ? 3% +4.7 11.39 perf-profile.children.cycles-pp.syscall_return_via_sysret
3.66 ? 7% +5.4 9.08 perf-profile.children.cycles-pp.consume_skb
5.82 ? 7% +7.2 13.04 ? 4% perf-profile.children.cycles-pp.__alloc_skb
5.92 ? 6% +7.3 13.19 ? 4% perf-profile.children.cycles-pp.alloc_skb_with_frags
6.56 ? 6% +8.1 14.70 ? 3% perf-profile.children.cycles-pp.sock_alloc_send_pskb
3.84 ? 6% -3.4 0.46 ? 4% perf-profile.self.cycles-pp.switch_mm_irqs_off
1.99 ? 5% -1.8 0.21 ? 6% perf-profile.self.cycles-pp.update_curr
1.78 ? 8% -1.6 0.22 ? 3% perf-profile.self.cycles-pp.update_load_avg
1.70 ? 5% -1.5 0.22 ? 6% perf-profile.self.cycles-pp.__schedule
1.63 ? 7% -1.4 0.22 ? 3% perf-profile.self.cycles-pp.restore_fpregs_from_fpstate
1.52 ? 6% -1.3 0.20 ? 7% perf-profile.self.cycles-pp.__switch_to_asm
1.39 ? 3% -1.2 0.16 ? 3% perf-profile.self.cycles-pp.enqueue_entity
0.98 ? 5% -0.9 0.12 ? 5% perf-profile.self.cycles-pp.__switch_to
0.92 ? 5% -0.8 0.12 ? 8% perf-profile.self.cycles-pp.pick_next_task_fair
0.90 ? 3% -0.8 0.11 perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.84 ? 7% -0.8 0.08 ? 10% perf-profile.self.cycles-pp.___perf_sw_event
2.58 ? 3% -0.7 1.85 ? 2% perf-profile.self.cycles-pp._raw_spin_lock
0.85 ? 4% -0.7 0.13 ? 5% perf-profile.self.cycles-pp.__update_load_avg_se
2.12 ? 5% -0.7 1.40 ? 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.93 ? 2% -0.7 0.27 ? 5% perf-profile.self.cycles-pp.update_cfs_group
0.72 ? 6% -0.6 0.07 ? 7% perf-profile.self.cycles-pp.enqueue_task_fair
0.67 ? 5% -0.6 0.08 ? 6% perf-profile.self.cycles-pp.update_min_vruntime
0.67 ? 6% -0.6 0.07 ? 6% perf-profile.self.cycles-pp.os_xsave
0.70 ? 7% -0.6 0.12 ? 6% perf-profile.self.cycles-pp.prepare_task_switch
0.61 ? 5% -0.5 0.08 ? 6% perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
0.60 ? 5% -0.5 0.08 ? 6% perf-profile.self.cycles-pp.dequeue_task_fair
0.60 ? 4% -0.5 0.09 ? 9% perf-profile.self.cycles-pp.reweight_entity
0.57 ? 9% -0.5 0.07 perf-profile.self.cycles-pp.update_rq_clock
0.55 ? 3% -0.5 0.07 ? 5% perf-profile.self.cycles-pp.select_task_rq_fair
0.55 ? 4% -0.5 0.08 ? 9% perf-profile.self.cycles-pp.try_to_wake_up
0.49 ? 6% -0.4 0.06 ? 11% perf-profile.self.cycles-pp.unix_stream_data_wait
0.48 ? 4% -0.4 0.07 ? 8% perf-profile.self.cycles-pp.__calc_delta
0.46 ? 8% -0.4 0.05 ? 7% perf-profile.self.cycles-pp.switch_fpu_return
0.43 ? 6% -0.4 0.02 ? 99% perf-profile.self.cycles-pp.check_preempt_wakeup
0.43 ? 7% -0.4 0.06 ? 11% perf-profile.self.cycles-pp.__wake_up_common
0.44 ? 4% -0.4 0.08 perf-profile.self.cycles-pp.prepare_to_wait
1.21 ? 5% -0.4 0.86 ? 2% perf-profile.self.cycles-pp.__libc_read
0.35 ? 6% -0.3 0.03 ? 70% perf-profile.self.cycles-pp.select_task_rq
0.32 ? 10% -0.3 0.03 ? 70% perf-profile.self.cycles-pp.cpuacct_charge
0.32 ? 5% -0.3 0.04 ? 44% perf-profile.self.cycles-pp.native_sched_clock
0.95 ? 6% -0.3 0.69 perf-profile.self.cycles-pp.vfs_read
0.31 ? 3% -0.3 0.05 ? 7% perf-profile.self.cycles-pp.dequeue_entity
0.30 ? 4% -0.3 0.05 ? 7% perf-profile.self.cycles-pp.perf_tp_event
0.27 ? 5% -0.2 0.02 ? 99% perf-profile.self.cycles-pp.rb_erase
0.32 ? 14% -0.2 0.08 ? 4% perf-profile.self.cycles-pp.task_h_load
0.42 ? 4% -0.2 0.19 ? 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.29 ? 12% -0.2 0.06 ? 7% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.47 ? 2% -0.2 0.28 ? 4% perf-profile.self.cycles-pp.mutex_lock
0.22 ? 4% -0.2 0.05 perf-profile.self.cycles-pp.schedule_timeout
0.27 ? 4% -0.1 0.14 ? 2% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.17 ? 6% -0.1 0.07 ? 7% perf-profile.self.cycles-pp.__list_add_valid
0.39 ? 6% -0.1 0.34 ? 5% perf-profile.self.cycles-pp.security_file_permission
0.24 ? 7% -0.0 0.20 ? 3% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.17 ? 7% -0.0 0.14 perf-profile.self.cycles-pp.is_vmalloc_addr
0.06 ? 8% +0.0 0.07 ? 5% perf-profile.self.cycles-pp.should_failslab
0.15 ? 4% +0.0 0.17 ? 4% perf-profile.self.cycles-pp.obj_cgroup_uncharge_pages
0.22 ? 2% +0.0 0.25 ? 3% perf-profile.self.cycles-pp.mutex_unlock
0.20 ? 3% +0.0 0.22 ? 5% perf-profile.self.cycles-pp.unix_destruct_scm
0.15 ? 6% +0.0 0.18 ? 2% perf-profile.self.cycles-pp.__x64_sys_write
0.07 ? 5% +0.0 0.10 ? 3% perf-profile.self.cycles-pp.__might_fault
0.12 ? 4% +0.0 0.14 ? 4% perf-profile.self.cycles-pp.wait_for_unix_gc
0.09 ? 7% +0.0 0.12 ? 5% perf-profile.self.cycles-pp.check_stack_object
0.32 ? 3% +0.0 0.36 ? 2% perf-profile.self.cycles-pp.unix_stream_recvmsg
0.05 ? 7% +0.0 0.08 ? 5% perf-profile.self.cycles-pp.scm_recv
0.25 ? 4% +0.0 0.28 perf-profile.self.cycles-pp.unix_write_space
0.07 ? 5% +0.0 0.10 ? 3% perf-profile.self.cycles-pp.security_socket_recvmsg
0.12 ? 7% +0.0 0.16 ? 3% perf-profile.self.cycles-pp.__x64_sys_read
0.03 ? 70% +0.0 0.08 ? 6% perf-profile.self.cycles-pp.security_socket_sendmsg
0.07 ? 5% +0.0 0.11 ? 4% perf-profile.self.cycles-pp.__skb_datagram_iter
0.15 ? 3% +0.0 0.20 ? 3% perf-profile.self.cycles-pp._copy_to_iter
0.42 ? 2% +0.0 0.46 perf-profile.self.cycles-pp.__cond_resched
0.16 ? 3% +0.0 0.21 ? 4% perf-profile.self.cycles-pp.ksys_read
0.00 +0.1 0.05 perf-profile.self.cycles-pp.copyout
0.08 ? 10% +0.1 0.13 ? 3% perf-profile.self.cycles-pp.syscall_exit_to_user_mode_prepare
0.00 +0.1 0.05 ? 8% perf-profile.self.cycles-pp.skb_release_head_state
0.12 ? 4% +0.1 0.17 ? 4% perf-profile.self.cycles-pp.apparmor_socket_recvmsg
0.11 ? 3% +0.1 0.17 ? 2% perf-profile.self.cycles-pp.apparmor_socket_getpeersec_dgram
0.00 +0.1 0.06 ? 9% perf-profile.self.cycles-pp.rw_verify_area
0.02 ? 99% +0.1 0.08 ? 8% perf-profile.self.cycles-pp.unix_scm_to_skb
0.00 +0.1 0.06 ? 8% perf-profile.self.cycles-pp.refill_stock
0.10 ? 9% +0.1 0.16 ? 3% perf-profile.self.cycles-pp.alloc_skb_with_frags
0.00 +0.1 0.06 ? 6% perf-profile.self.cycles-pp.put_pid
0.07 ? 5% +0.1 0.13 ? 5% perf-profile.self.cycles-pp.__fdget_pos
0.00 +0.1 0.06 perf-profile.self.cycles-pp.try_charge_memcg
0.35 ? 5% +0.1 0.41 ? 3% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.29 ? 2% +0.1 0.35 perf-profile.self.cycles-pp.do_syscall_64
0.13 ? 7% +0.1 0.19 ? 2% perf-profile.self.cycles-pp.skb_copy_datagram_from_iter
0.00 +0.1 0.06 ? 7% perf-profile.self.cycles-pp.skb_copy_datagram_iter
0.13 ? 6% +0.1 0.20 ? 6% perf-profile.self.cycles-pp.aa_file_perm
0.01 ?223% +0.1 0.08 ? 9% perf-profile.self.cycles-pp.copyin
0.00 +0.1 0.07 ? 5% perf-profile.self.cycles-pp.memcg_account_kmem
0.16 ? 7% +0.1 0.23 ? 2% perf-profile.self.cycles-pp._copy_from_iter
0.15 ? 8% +0.1 0.22 ? 5% perf-profile.self.cycles-pp.ksys_write
0.10 ? 5% +0.1 0.18 ? 5% perf-profile.self.cycles-pp.apparmor_socket_sendmsg
0.62 +0.1 0.70 ? 3% perf-profile.self.cycles-pp.__libc_write
0.17 ? 4% +0.1 0.26 ? 2% perf-profile.self.cycles-pp.sock_alloc_send_pskb
0.20 ? 2% +0.1 0.29 ? 3% perf-profile.self.cycles-pp.consume_skb
0.00 +0.1 0.09 ? 7% perf-profile.self.cycles-pp.skb_queue_tail
0.17 ? 5% +0.1 0.27 ? 5% perf-profile.self.cycles-pp.kmalloc_slab
0.14 ? 7% +0.1 0.24 ? 3% perf-profile.self.cycles-pp.kmalloc_reserve
0.00 +0.1 0.11 ? 12% perf-profile.self.cycles-pp.queue_event
0.80 ? 4% +0.1 0.91 ? 2% perf-profile.self.cycles-pp.__virt_addr_valid
0.35 ? 3% +0.1 0.47 ? 2% perf-profile.self.cycles-pp.__might_sleep
0.22 ? 5% +0.1 0.35 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.00 +0.1 0.14 ? 2% perf-profile.self.cycles-pp.put_cpu_partial
0.17 ? 7% +0.2 0.32 ? 2% perf-profile.self.cycles-pp.refill_obj_stock
0.95 ? 4% +0.2 1.10 perf-profile.self.cycles-pp.aa_sk_perm
0.15 ? 13% +0.2 0.31 perf-profile.self.cycles-pp.__kmalloc_node_track_caller
0.01 ?223% +0.2 0.18 ? 8% perf-profile.self.cycles-pp.skb_unlink
0.26 ? 7% +0.2 0.44 ? 2% perf-profile.self.cycles-pp.obj_cgroup_charge
0.04 ? 72% +0.2 0.24 ? 4% perf-profile.self.cycles-pp.fsnotify_perm
0.47 ? 4% +0.2 0.67 ? 7% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
1.26 ? 6% +0.2 1.49 ? 2% perf-profile.self.cycles-pp.apparmor_file_permission
0.57 ? 3% +0.2 0.80 perf-profile.self.cycles-pp.sock_read_iter
0.36 ? 4% +0.2 0.60 perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.99 ? 4% +0.3 1.25 ? 2% perf-profile.self.cycles-pp.__fget_light
0.35 ? 4% +0.3 0.61 ? 7% perf-profile.self.cycles-pp.sock_def_readable
0.60 ? 5% +0.3 0.86 perf-profile.self.cycles-pp.__check_heap_object
0.01 ?223% +0.3 0.28 ? 9% perf-profile.self.cycles-pp.__unfreeze_partials
0.47 ? 4% +0.3 0.74 ? 2% perf-profile.self.cycles-pp.__get_obj_cgroup_from_memcg
0.44 ? 6% +0.3 0.73 ? 3% perf-profile.self.cycles-pp.kmem_cache_alloc_node
0.45 ? 7% +0.3 0.77 ? 6% perf-profile.self.cycles-pp.skb_set_owner_w
0.68 ? 3% +0.3 1.00 ? 2% perf-profile.self.cycles-pp.__might_resched
0.64 ? 4% +0.3 0.98 perf-profile.self.cycles-pp.sock_write_iter
0.51 ? 5% +0.3 0.86 perf-profile.self.cycles-pp.__kmem_cache_alloc_node
1.12 ? 2% +0.4 1.48 ? 4% perf-profile.self.cycles-pp.unix_stream_sendmsg
1.09 ? 5% +0.4 1.50 ? 2% perf-profile.self.cycles-pp.__kmem_cache_free
0.54 ? 4% +0.4 0.95 perf-profile.self.cycles-pp.vfs_write
0.02 ?223% +0.4 0.43 ? 8% perf-profile.self.cycles-pp.get_partial_node
0.52 ? 6% +0.5 0.98 ? 2% perf-profile.self.cycles-pp.__alloc_skb
0.24 ? 21% +0.5 0.73 perf-profile.self.cycles-pp.__build_skb_around
1.12 ? 3% +0.5 1.62 perf-profile.self.cycles-pp.memcg_slab_post_alloc_hook
0.75 ? 4% +0.5 1.28 perf-profile.self.cycles-pp.mod_objcg_state
1.32 ? 3% +0.6 1.95 perf-profile.self.cycles-pp.unix_stream_read_generic
0.11 ? 78% +0.7 0.82 ? 5% perf-profile.self.cycles-pp.___slab_alloc
0.20 ? 17% +0.8 0.96 perf-profile.self.cycles-pp.skb_release_data
0.64 ? 8% +0.8 1.43 perf-profile.self.cycles-pp.sock_wfree
0.71 ? 7% +1.0 1.69 perf-profile.self.cycles-pp.__check_object_size
0.99 ? 9% +1.1 2.12 perf-profile.self.cycles-pp.kmem_cache_free
1.64 ? 6% +2.0 3.66 ? 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.89 ? 3% +2.2 5.10 perf-profile.self.cycles-pp.__entry_text_start
4.39 ? 3% +2.4 6.75 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.61 ? 24% +2.6 3.22 perf-profile.self.cycles-pp.__slab_free
2.11 ? 10% +3.0 5.12 ? 21% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
5.32 ? 4% +3.7 8.98 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
6.69 ? 3% +4.7 11.38 perf-profile.self.cycles-pp.syscall_return_via_sysret
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests