Greetings,
FYI, we noticed a -21.3% regression of will-it-scale.per_process_ops due to commit:
commit: 5387c90490f7f42df3209154ca955a453ee01b41 ("mm/memcg: improve refill_obj_stock() performance")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 144 threads 4 sockets Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with following parameters:
nr_task: 50%
mode: process
test: unix1
cpufreq_governor: performance
ucode: 0x16
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a threads-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
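For context on the workload, below is a minimal standalone sketch of a unix1-style loop, assuming the usual will-it-scale pattern of a 1-byte write/read round trip over an AF_UNIX socketpair per iteration (see the test-url above for the actual testcase source). Each of the parallel copies runs this loop in its own process, so every iteration exercises the skb allocation/free paths that show up in the profile further below.

/*
 * Illustrative sketch only, not the will-it-scale source: one process,
 * one connected AF_UNIX socketpair, one 1-byte send/receive per iteration.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	unsigned long long iterations = 0;
	int fd[2];
	char c = 0;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) != 0) {
		perror("socketpair");
		return 1;
	}

	for (int i = 0; i < 1000000; i++) {
		/*
		 * Each write allocates (and, with kmem accounting enabled,
		 * charges) an skb; the matching read frees and uncharges it.
		 */
		if (write(fd[0], &c, 1) != 1 || read(fd[1], &c, 1) != 1) {
			perror("write/read");
			return 1;
		}
		iterations++;
	}

	printf("%llu iterations\n", iterations);
	return 0;
}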
In addition, the commit also has a significant impact on the following test:
+------------------+----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput 63.3% improvement |
| test machine | 96 threads 2 sockets Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=socket |
| | iterations=8 |
| | mode=threads |
| | nr_threads=50% |
| | ucode=0x4003006 |
+------------------+----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # the job file is attached to this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for the lkp run
bin/lkp run generated-yaml-file
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-hsw-4ex1/unix1/will-it-scale/0x16
commit:
68ac5b3c8d ("mm/memcg: cache vmstat data in percpu memcg_stock_pcp")
5387c90490 ("mm/memcg: improve refill_obj_stock() performance")
68ac5b3c8db2fda0 5387c90490f7f42df3209154ca9
---------------- ---------------------------
%stddev %change %stddev
\ | \
35506465 -21.3% 27928327 ? 7% will-it-scale.72.processes
493145 -21.3% 387892 ? 7% will-it-scale.per_process_ops
35506465 -21.3% 27928327 ? 7% will-it-scale.workload
12193 ? 2% +5.7% 12883 ? 2% proc-vmstat.nr_mapped
1965 +9.2% 2146 vmstat.system.cs
615.46 -3.6% 593.51 turbostat.PkgWatt
44.99 ? 2% +5.7% 47.56 ? 3% turbostat.RAMWatt
5276 ? 12% +15.5% 6094 ? 15% numa-meminfo.node3.KernelStack
499829 ? 4% +47.3% 736092 ? 29% numa-meminfo.node3.MemUsed
40145 ? 4% +24.3% 49894 ? 8% numa-meminfo.node3.SUnreclaim
53547 ? 4% +41.1% 75537 ? 19% numa-meminfo.node3.Slab
0.00 +1.3e+107% 133446 ? 99% numa-meminfo.node3.Unevictable
5275 ? 12% +15.5% 6095 ? 14% numa-vmstat.node3.nr_kernel_stack
1419 ? 27% +58.2% 2246 ? 21% numa-vmstat.node3.nr_mapped
10035 ? 4% +24.3% 12473 ? 8% numa-vmstat.node3.nr_slab_unreclaimable
0.00 +3.3e+106% 33361 ? 99% numa-vmstat.node3.nr_unevictable
0.00 +3.3e+106% 33361 ? 99% numa-vmstat.node3.nr_zone_unevictable
10635401 ? 18% +2.1e+07 32049944 ? 49% syscalls.sys_close.noise.100%
28951819 ? 6% +2e+07 48750876 ? 29% syscalls.sys_close.noise.2%
27203005 ? 7% +2e+07 46937922 ? 30% syscalls.sys_close.noise.25%
28880837 ? 6% +2e+07 48687647 ? 29% syscalls.sys_close.noise.5%
21612952 ? 9% +2e+07 41847414 ? 36% syscalls.sys_close.noise.50%
15672133 ? 13% +2.1e+07 36660699 ? 42% syscalls.sys_close.noise.75%
2.586e+08 ? 20% +1.5e+08 4.057e+08 ? 21% syscalls.sys_mmap.noise.100%
3.491e+08 ? 15% +1.5e+08 4.977e+08 ? 19% syscalls.sys_mmap.noise.2%
3.252e+08 ? 16% +1.5e+08 4.743e+08 ? 20% syscalls.sys_mmap.noise.25%
3.483e+08 ? 15% +1.5e+08 4.968e+08 ? 19% syscalls.sys_mmap.noise.5%
2.875e+08 ? 18% +1.5e+08 4.41e+08 ? 21% syscalls.sys_mmap.noise.50%
2.688e+08 ? 20% +1.5e+08 4.217e+08 ? 21% syscalls.sys_mmap.noise.75%
968.00 ? 20% -18.4% 790.33 interrupts.CPU100.CAL:Function_call_interrupts
40.20 ? 63% +161.2% 105.00 ? 48% interrupts.CPU104.RES:Rescheduling_interrupts
29.00 ? 77% +429.9% 153.67 ? 49% interrupts.CPU117.RES:Rescheduling_interrupts
79.80 ? 57% +80.7% 144.17 ? 34% interrupts.CPU118.RES:Rescheduling_interrupts
1016 ? 22% -20.8% 804.50 ? 5% interrupts.CPU125.CAL:Function_call_interrupts
196.00 ? 6% -52.6% 93.00 ? 54% interrupts.CPU125.RES:Rescheduling_interrupts
61.80 ? 69% +141.4% 149.17 ? 46% interrupts.CPU129.RES:Rescheduling_interrupts
3842 ? 58% +72.7% 6636 ? 23% interrupts.CPU13.NMI:Non-maskable_interrupts
3842 ? 58% +72.7% 6636 ? 23% interrupts.CPU13.PMI:Performance_monitoring_interrupts
260.40 ? 12% -34.5% 170.50 ? 39% interrupts.CPU20.RES:Rescheduling_interrupts
1515 ? 70% -47.6% 794.17 ? 2% interrupts.CPU21.CAL:Function_call_interrupts
3382 ? 17% +106.7% 6992 ? 22% interrupts.CPU38.NMI:Non-maskable_interrupts
3382 ? 17% +106.7% 6992 ? 22% interrupts.CPU38.PMI:Performance_monitoring_interrupts
258.60 ? 10% -51.3% 126.00 ? 64% interrupts.CPU45.RES:Rescheduling_interrupts
222.40 ? 15% -55.2% 99.67 ? 56% interrupts.CPU50.RES:Rescheduling_interrupts
2136 ?107% -63.0% 791.17 interrupts.CPU51.CAL:Function_call_interrupts
3122 ? 23% +116.1% 6746 ? 28% interrupts.CPU60.NMI:Non-maskable_interrupts
3122 ? 23% +116.1% 6746 ? 28% interrupts.CPU60.PMI:Performance_monitoring_interrupts
112.60 ? 70% +85.9% 209.33 ? 23% interrupts.CPU60.RES:Rescheduling_interrupts
132.60 ? 23% +46.1% 193.67 ? 17% interrupts.CPU71.RES:Rescheduling_interrupts
268.20 ? 8% -25.1% 201.00 ? 19% interrupts.CPU72.RES:Rescheduling_interrupts
231.60 ? 11% -32.5% 156.33 ? 39% interrupts.CPU96.RES:Rescheduling_interrupts
0.02 ? 25% +75.8% 0.03 ? 9% perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
10683 ? 2% +10.8% 11839 ? 3% perf-sched.total_wait_and_delay.count.ms
1.29 +27.6% 1.64 ? 14% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
0.04 ? 2% +874.9% 0.38 ?193% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.04 ? 4% +10.8% 0.05 ? 6% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.aa_sk_perm
2931 ? 79% -93.5% 190.88 ?223% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
352.89 ? 19% -30.7% 244.45 ? 20% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
11.34 ? 3% -52.2% 5.42 ? 18% perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
400.00 ? 6% +16.0% 464.17 ? 6% perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
297.60 ? 3% -15.9% 250.33 ? 9% perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.aa_sk_perm
89.00 ? 13% +53.6% 136.67 ? 15% perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
865.80 ? 3% +116.6% 1875 ? 16% perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
2015 ? 5% -14.8% 1716 ? 7% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
2015 ? 5% -14.8% 1716 ? 7% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
2019 ? 5% -14.8% 1721 ? 7% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
4507 ? 68% -95.4% 208.48 ?223% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
336.40 ? 6% -39.8% 202.67 ? 38% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
0.02 ? 25% +75.8% 0.03 ? 9% perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
1.28 +27.7% 1.64 ? 14% perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
0.04 ? 4% +10.8% 0.05 ? 6% perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.aa_sk_perm
2931 ? 79% -92.4% 221.94 ?187% perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
352.89 ? 19% -30.7% 244.45 ? 20% perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
11.33 ? 3% -52.2% 5.41 ? 18% perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
2015 ? 5% -14.8% 1716 ? 7% perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
2015 ? 5% -14.8% 1716 ? 7% perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
2019 ? 5% -14.8% 1721 ? 7% perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
4507 ? 68% -90.4% 434.45 ?132% perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
336.39 ? 6% -39.8% 202.66 ? 38% perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
4e+10 -20.5% 3.181e+10 ? 6% perf-stat.i.branch-instructions
5.393e+08 -20.6% 4.282e+08 ? 6% perf-stat.i.branch-misses
3.10 ? 3% +47.7 50.79 perf-stat.i.cache-miss-rate%
385998 ? 7% +5787.2% 22724557 ? 2% perf-stat.i.cache-misses
12724208 ? 4% +265.6% 46522988 perf-stat.i.cache-references
1893 +8.5% 2053 perf-stat.i.context-switches
1.03 +27.1% 1.31 ? 7% perf-stat.i.cpi
732924 ? 5% -98.7% 9180 ? 2% perf-stat.i.cycles-between-cache-misses
5.984e+10 -20.6% 4.75e+10 ? 6% perf-stat.i.dTLB-loads
71166327 -21.0% 56203810 ? 6% perf-stat.i.dTLB-store-misses
4.195e+10 -20.7% 3.325e+10 ? 6% perf-stat.i.dTLB-stores
95.55 -2.2 93.36 perf-stat.i.iTLB-load-miss-rate%
1.044e+08 ? 5% -37.3% 65429401 ? 7% perf-stat.i.iTLB-load-misses
2.033e+11 -20.5% 1.616e+11 ? 6% perf-stat.i.instructions
1970 ? 4% +25.8% 2478 ? 2% perf-stat.i.instructions-per-iTLB-miss
0.97 -20.5% 0.77 ? 6% perf-stat.i.ipc
613.69 +46.3% 898.05 ? 3% perf-stat.i.metric.K/sec
984.60 -20.6% 781.43 ? 6% perf-stat.i.metric.M/sec
95.28 -1.2 94.09 perf-stat.i.node-load-miss-rate%
227909 ? 7% +3910.6% 9140524 ? 3% perf-stat.i.node-load-misses
29783 ? 19% +1815.1% 570393 ? 8% perf-stat.i.node-loads
58.16 +38.4 96.54 perf-stat.i.node-store-miss-rate%
62670 ? 5% +20183.3% 12711692 ? 2% perf-stat.i.node-store-misses
61358 ? 5% +622.9% 443579 ? 6% perf-stat.i.node-stores
0.06 ? 4% +357.0% 0.29 ? 5% perf-stat.overall.MPKI
3.07 ? 4% +45.6 48.63 perf-stat.overall.cache-miss-rate%
1.03 +26.5% 1.30 ? 6% perf-stat.overall.cpi
528569 ? 7% -98.3% 9180 ? 2% perf-stat.overall.cycles-between-cache-misses
95.69 -2.3 93.44 perf-stat.overall.iTLB-load-miss-rate%
1953 ? 4% +26.6% 2472 ? 2% perf-stat.overall.instructions-per-iTLB-miss
0.98 -20.6% 0.77 ? 6% perf-stat.overall.ipc
87.79 ? 2% +6.3 94.09 perf-stat.overall.node-load-miss-rate%
50.32 +46.3 96.62 perf-stat.overall.node-store-miss-rate%
1732297 +0.8% 1745669 perf-stat.overall.path-length
3.985e+10 -20.5% 3.169e+10 ? 6% perf-stat.ps.branch-instructions
5.376e+08 -20.6% 4.268e+08 ? 6% perf-stat.ps.branch-misses
395364 ? 7% +5629.5% 22652576 ? 2% perf-stat.ps.cache-misses
12874423 ? 4% +261.7% 46573149 perf-stat.ps.cache-references
1882 +8.3% 2039 perf-stat.ps.context-switches
5.963e+10 -20.6% 4.732e+10 ? 6% perf-stat.ps.dTLB-loads
70897603 -21.0% 55974505 ? 6% perf-stat.ps.dTLB-store-misses
4.18e+10 -20.8% 3.312e+10 ? 6% perf-stat.ps.dTLB-stores
1.04e+08 ? 5% -37.3% 65157736 ? 7% perf-stat.ps.iTLB-load-misses
2.026e+11 -20.6% 1.61e+11 ? 6% perf-stat.ps.instructions
232283 ? 7% +3823.3% 9113095 ? 3% perf-stat.ps.node-load-misses
32585 ? 22% +1651.8% 570838 ? 8% perf-stat.ps.node-loads
63195 ? 5% +19944.0% 12666959 ? 2% perf-stat.ps.node-store-misses
62383 ? 5% +610.2% 443065 ? 6% perf-stat.ps.node-stores
6.151e+13 -20.7% 4.875e+13 ? 6% perf-stat.total.instructions
0.00 +1.6 1.57 ? 27% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic
1.07 ? 12% +1.6 2.67 ? 16% perf-profile.calltrace.cycles-pp.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +1.7 1.74 ? 26% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg
0.00 +1.8 1.76 ? 25% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.81 ? 11% +1.8 3.62 ? 16% perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.00 +2.1 2.10 ? 30% perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb
0.00 +2.1 2.12 ? 29% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
0.00 +2.2 2.24 ? 27% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
3.11 ? 12% +2.8 5.92 ? 16% perf-profile.calltrace.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +3.0 3.00 ? 35% perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.consume_skb
25.48 ? 13% +3.1 28.55 ? 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
25.20 ? 13% +3.1 28.31 ? 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.30 ? 11% +3.1 4.43 ? 23% perf-profile.calltrace.cycles-pp.kfree.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
12.84 ? 12% +3.2 16.05 ? 11% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
12.23 ? 12% +3.3 15.54 ? 11% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +3.3 3.31 ? 34% perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.consume_skb.unix_stream_read_generic
0.00 +3.3 3.34 ? 34% perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kfree.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
32.01 ? 12% +3.4 35.45 ? 10% perf-profile.calltrace.cycles-pp.write
10.77 ? 12% +3.5 14.26 ? 11% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.44 ? 12% +3.6 13.99 ? 11% perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
8.72 ? 12% +3.8 12.52 ? 11% perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
8.28 ? 12% +3.9 12.14 ? 11% perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read
1.77 ? 12% +4.1 5.91 ? 23% perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
1.65 ? 12% +4.2 5.81 ? 23% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
26.51 ? 12% +4.2 30.74 ? 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
26.25 ? 12% +4.3 30.53 ? 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
14.03 ? 12% +4.3 18.34 ? 11% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
13.35 ? 12% +4.4 17.74 ? 11% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +4.4 4.40 ? 34% perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve
0.00 +4.4 4.42 ? 34% perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
0.00 +4.5 4.54 ? 33% perf-profile.calltrace.cycles-pp.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
12.01 ? 12% +4.6 16.63 ? 11% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.75 ? 12% +4.7 16.40 ? 11% perf-profile.calltrace.cycles-pp.sock_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
10.98 ? 12% +4.8 15.75 ? 11% perf-profile.calltrace.cycles-pp.sock_sendmsg.sock_write_iter.new_sync_write.vfs_write.ksys_write
9.84 ? 12% +5.0 14.79 ? 11% perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write.vfs_write
5.33 ? 12% +5.7 11.00 ? 14% perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
4.92 ? 12% +5.7 10.65 ? 15% perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
4.78 ? 12% +5.7 10.53 ? 15% perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
0.95 ? 11% -0.2 0.73 ? 19% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
1.09 ? 12% +1.6 2.70 ? 16% perf-profile.children.cycles-pp.kmem_cache_free
0.00 +1.7 1.73 ? 33% perf-profile.children.cycles-pp.propagate_protected_usage
1.86 ? 11% +1.8 3.67 ? 16% perf-profile.children.cycles-pp.kmem_cache_alloc_node
3.12 ? 11% +2.8 5.93 ? 16% perf-profile.children.cycles-pp.consume_skb
1.32 ? 11% +3.1 4.45 ? 23% perf-profile.children.cycles-pp.kfree
12.87 ? 12% +3.2 16.08 ? 11% perf-profile.children.cycles-pp.ksys_read
12.26 ? 12% +3.3 15.56 ? 11% perf-profile.children.cycles-pp.vfs_read
10.79 ? 12% +3.5 14.28 ? 11% perf-profile.children.cycles-pp.new_sync_read
10.47 ? 12% +3.5 14.02 ? 11% perf-profile.children.cycles-pp.sock_read_iter
8.74 ? 12% +3.8 12.53 ? 11% perf-profile.children.cycles-pp.unix_stream_recvmsg
8.36 ? 12% +3.8 12.20 ? 11% perf-profile.children.cycles-pp.unix_stream_read_generic
1.78 ? 12% +4.1 5.92 ? 23% perf-profile.children.cycles-pp.kmalloc_reserve
1.72 ? 12% +4.1 5.86 ? 23% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
14.09 ? 12% +4.3 18.40 ? 10% perf-profile.children.cycles-pp.ksys_write
13.42 ? 12% +4.4 17.81 ? 11% perf-profile.children.cycles-pp.vfs_write
0.00 +4.6 4.57 ? 29% perf-profile.children.cycles-pp.page_counter_cancel
12.07 ? 12% +4.6 16.70 ? 11% perf-profile.children.cycles-pp.new_sync_write
11.79 ? 12% +4.6 16.43 ? 11% perf-profile.children.cycles-pp.sock_write_iter
11.00 ? 12% +4.8 15.77 ? 11% perf-profile.children.cycles-pp.sock_sendmsg
9.91 ? 12% +4.9 14.84 ? 11% perf-profile.children.cycles-pp.unix_stream_sendmsg
0.00 +5.1 5.06 ? 28% perf-profile.children.cycles-pp.page_counter_uncharge
0.00 +5.1 5.10 ? 28% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
5.34 ? 12% +5.7 11.00 ? 14% perf-profile.children.cycles-pp.sock_alloc_send_pskb
4.94 ? 12% +5.7 10.66 ? 15% perf-profile.children.cycles-pp.alloc_skb_with_frags
4.79 ? 12% +5.8 10.54 ? 15% perf-profile.children.cycles-pp.__alloc_skb
0.28 ? 14% +6.5 6.79 ? 28% perf-profile.children.cycles-pp.obj_cgroup_charge
0.00 +6.5 6.52 ? 29% perf-profile.children.cycles-pp.page_counter_try_charge
0.00 +6.6 6.55 ? 29% perf-profile.children.cycles-pp.obj_cgroup_charge_pages
52.11 ? 12% +7.3 59.44 ? 10% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
51.60 ? 12% +7.4 59.00 ? 10% perf-profile.children.cycles-pp.do_syscall_64
0.80 ? 10% -0.2 0.64 ? 18% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
0.09 ? 14% -0.0 0.06 ? 45% perf-profile.self.cycles-pp.wait_for_unix_gc
0.00 +1.7 1.72 ? 33% perf-profile.self.cycles-pp.propagate_protected_usage
0.00 +4.5 4.53 ? 29% perf-profile.self.cycles-pp.page_counter_cancel
0.00 +5.2 5.21 ? 27% perf-profile.self.cycles-pp.page_counter_try_charge
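In both kernels the workload spends its time in the unix socket send/receive path; what is new on the bad side of the profile above is that obj_cgroup_charge()/obj_cgroup_uncharge_pages() now reach the page_counter atomics (page_counter_try_charge(), page_counter_uncharge(), page_counter_cancel(), propagate_protected_usage()). As a rough userspace model of why that matters, assuming the per-CPU byte stock plus shared hierarchical counter structure implied by the function names (memcg_stock_pcp, refill_obj_stock, page_counter_*): a sub-page charge absorbed by the local stock is just a per-CPU update, while a miss becomes an atomic read-modify-write on counters shared by all CPUs, consistent with the jump in cache-misses and node-store-misses in the perf-stat section above. The names below are illustrative only, not kernel APIs.

/*
 * Userspace model only: a per-thread byte "stock" stands in for the
 * per-CPU objcg stock, and an atomic add on a shared counter stands in
 * for page_counter_try_charge() walking the memcg hierarchy.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE 4096UL

static _Atomic unsigned long shared_usage;       /* models page_counter usage */
static _Thread_local unsigned long stock_bytes;  /* models the per-CPU byte stock */

static void charge(unsigned long nr_bytes)
{
	/* Fast path: the charge fits in the locally cached, pre-charged bytes. */
	if (stock_bytes >= nr_bytes) {
		stock_bytes -= nr_bytes;
		return;
	}
	/*
	 * Slow path: charge a whole page on the shared counter (one atomic
	 * RMW per hierarchy level in the real page_counter code) and keep
	 * the remainder locally for later sub-page charges.
	 */
	atomic_fetch_add(&shared_usage, MODEL_PAGE_SIZE);
	stock_bytes += MODEL_PAGE_SIZE - nr_bytes;
}

int main(void)
{
	int n = 1000000;

	for (int i = 0; i < n; i++)
		charge(256);	/* e.g. a small skb data buffer */

	printf("shared counter updated %lu times for %d charges\n",
	       atomic_load(&shared_usage) / MODEL_PAGE_SIZE, n);
	return 0;
}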
12649 ? 3% -16.2% 10604 ? 11% softirqs.CPU0.RCU
14469 ? 24% -36.8% 9142 ? 18% softirqs.CPU1.RCU
14170 ? 14% -36.8% 8959 ? 20% softirqs.CPU10.RCU
10547 ? 14% -28.6% 7528 ? 25% softirqs.CPU100.RCU
10166 ? 13% -30.1% 7102 ? 19% softirqs.CPU101.RCU
10483 ? 13% -32.0% 7131 ? 21% softirqs.CPU102.RCU
10881 ? 19% -35.6% 7010 ? 16% softirqs.CPU105.RCU
11342 ? 17% -31.5% 7774 ? 19% softirqs.CPU106.RCU
11132 ? 15% -30.6% 7731 ? 26% softirqs.CPU107.RCU
9921 ? 13% -32.7% 6678 ? 25% softirqs.CPU108.RCU
10028 ? 12% -34.4% 6582 ? 25% softirqs.CPU109.RCU
10083 ? 7% -24.9% 7567 ? 19% softirqs.CPU113.RCU
10205 ? 11% -28.9% 7260 ? 15% softirqs.CPU114.RCU
9276 ? 15% -26.5% 6818 ? 12% softirqs.CPU115.RCU
10715 ? 18% -33.4% 7135 ? 17% softirqs.CPU119.RCU
10875 ? 10% -29.4% 7683 ? 16% softirqs.CPU120.RCU
30696 ? 11% -45.7% 16666 ? 50% softirqs.CPU122.SCHED
11869 ? 21% -36.3% 7560 ? 21% softirqs.CPU123.RCU
11458 ? 15% -42.8% 6557 ? 14% softirqs.CPU124.RCU
11317 ? 6% -38.9% 6913 ? 17% softirqs.CPU125.RCU
9390 ? 8% -22.3% 7299 ? 16% softirqs.CPU128.RCU
12175 ? 16% -33.0% 8156 ? 15% softirqs.CPU13.RCU
10938 ? 15% -38.7% 6709 ? 14% softirqs.CPU132.RCU
11091 ? 6% -33.2% 7408 ? 24% softirqs.CPU134.RCU
9173 ? 13% -25.1% 6870 ? 15% softirqs.CPU135.RCU
10832 ? 9% -34.7% 7068 ? 20% softirqs.CPU136.RCU
10703 ? 15% -28.6% 7638 ? 23% softirqs.CPU138.RCU
11309 ? 14% -34.8% 7371 ? 19% softirqs.CPU139.RCU
13122 ? 15% -35.8% 8428 ? 12% softirqs.CPU14.RCU
9909 ? 18% -26.9% 7240 ? 17% softirqs.CPU140.RCU
12178 ? 3% -26.5% 8952 ? 17% softirqs.CPU143.RCU
12774 ? 13% -31.0% 8812 ? 24% softirqs.CPU18.RCU
13854 ? 4% -37.5% 8663 ? 26% softirqs.CPU20.RCU
8500 ? 40% +121.7% 18844 ? 46% softirqs.CPU20.SCHED
12659 ? 16% -34.8% 8258 ? 20% softirqs.CPU21.RCU
11476 ? 9% -32.6% 7732 ? 12% softirqs.CPU22.RCU
10610 ? 14% -25.9% 7859 ? 21% softirqs.CPU23.RCU
10263 ? 10% -22.9% 7916 ? 20% softirqs.CPU24.RCU
11617 ? 11% -29.3% 8211 ? 17% softirqs.CPU28.RCU
12193 ? 8% -28.6% 8712 ? 20% softirqs.CPU29.RCU
11437 ? 9% -22.1% 8904 ? 20% softirqs.CPU3.RCU
10756 ? 9% -33.4% 7165 ? 15% softirqs.CPU32.RCU
9909 ? 36% +81.6% 17997 ? 28% softirqs.CPU32.SCHED
9996 ? 10% -28.5% 7149 ? 22% softirqs.CPU33.RCU
9395 ? 15% -27.4% 6823 ? 18% softirqs.CPU34.RCU
11751 ? 20% -30.1% 8216 ? 16% softirqs.CPU36.RCU
11418 ? 12% -33.0% 7649 ? 19% softirqs.CPU38.RCU
11659 ? 19% -35.1% 7572 ? 18% softirqs.CPU39.RCU
12900 ? 18% -33.9% 8524 ? 17% softirqs.CPU4.RCU
9982 ? 10% -24.2% 7570 ? 15% softirqs.CPU40.RCU
11088 ? 12% -28.3% 7946 ? 21% softirqs.CPU42.RCU
11276 ? 11% -34.1% 7433 ? 11% softirqs.CPU44.RCU
13013 ? 8% -43.4% 7367 ? 13% softirqs.CPU45.RCU
7389 ? 25% +214.0% 23200 ? 41% softirqs.CPU45.SCHED
12283 ? 12% -37.1% 7729 ? 13% softirqs.CPU46.RCU
11657 ? 11% -31.9% 7937 ? 18% softirqs.CPU47.RCU
9971 ? 6% -30.4% 6938 ? 17% softirqs.CPU48.RCU
9779 ? 11% -33.2% 6536 ? 16% softirqs.CPU49.RCU
10634 ? 4% -41.4% 6230 ? 16% softirqs.CPU50.RCU
12624 ? 39% +114.5% 27083 ? 25% softirqs.CPU50.SCHED
9330 ? 14% -26.5% 6859 ? 12% softirqs.CPU51.RCU
9568 ? 8% -20.3% 7625 ? 10% softirqs.CPU52.RCU
8987 ? 7% -20.2% 7171 ? 15% softirqs.CPU53.RCU
11938 ? 11% -30.1% 8344 ? 18% softirqs.CPU54.RCU
11141 ? 13% -31.4% 7645 ? 19% softirqs.CPU56.RCU
11801 ? 13% -33.8% 7815 ? 19% softirqs.CPU57.RCU
11149 ? 55% +94.2% 21652 ? 36% softirqs.CPU57.SCHED
10751 ? 16% -29.3% 7599 ? 21% softirqs.CPU61.RCU
9831 ? 16% -21.1% 7753 ? 11% softirqs.CPU62.RCU
12342 ? 20% -34.3% 8104 ? 16% softirqs.CPU7.RCU
10999 ? 13% -31.1% 7582 ? 17% softirqs.CPU70.RCU
12180 ? 8% -35.0% 7917 ? 19% softirqs.CPU72.RCU
6290 ? 16% +124.5% 14119 ? 41% softirqs.CPU72.SCHED
10881 ? 16% -29.6% 7655 ? 8% softirqs.CPU80.RCU
11332 ? 14% -35.3% 7333 ? 15% softirqs.CPU81.RCU
13496 ? 30% -42.6% 7747 ? 15% softirqs.CPU84.RCU
12794 ? 33% -38.1% 7925 ? 19% softirqs.CPU87.RCU
12375 ? 22% -40.0% 7419 ? 23% softirqs.CPU88.RCU
9373 ? 10% -22.3% 7279 ? 20% softirqs.CPU90.RCU
9802 ? 11% -28.4% 7018 ? 15% softirqs.CPU91.RCU
10415 ? 13% -32.1% 7072 ? 21% softirqs.CPU95.RCU
11236 ? 10% -33.3% 7490 ? 17% softirqs.CPU96.RCU
10720 ? 13% -34.4% 7031 ? 20% softirqs.CPU97.RCU
10450 ? 7% -36.7% 6616 ? 21% softirqs.CPU98.RCU
9844 ? 11% -25.7% 7314 ? 13% softirqs.CPU99.RCU
1549584 ? 5% -27.6% 1121604 ? 15% softirqs.RCU
29638 ? 2% +46.0% 43283 ? 6% softirqs.TIMER
will-it-scale.72.processes
4e+07 +-----------------------------------------------------------------+
| |
3.5e+07 |.++.+.++.+.++ + ++.+ +.++.+.++.+.++.+.++.+.++.++.+ +.+ |
3e+07 |-+ O : : : : : O O : O: |
| O O O O: : : : O : O O O O O O OO : : O O |
2.5e+07 |-+O O :O : : :O : O O O OO : : O |
| O: ::O: O : : O : : |
2e+07 |-+ : :: : : : : : |
| : : :: : : : : |
1.5e+07 |-+ : : :: : : :: |
1e+07 |-+ : : :: : : :: |
| :: :: : : : |
5e+06 |-+ : : :: : |
| : : :: : |
0 +-----------------------------------------------------------------+
will-it-scale.per_process_ops
500000 +------------------------------------------------------------------+
450000 |-+ : : : : + +.+ + : : |
| O : : : : : O O : O: |
400000 |-O O O O: : : : O: O O O O O O OO : : O O |
350000 |-+O O :O:: : : : O O O O O : : O |
| O: ::O: O :O : O : : |
300000 |-+ : :: : : : : : |
250000 |-+ : : :: : : : : |
200000 |-+ : : :: : : :: |
| : : :: : : :: |
150000 |-+ : : :: : : :: |
100000 |-+ : : : : : |
| : : : : : |
50000 |-+ : : : : : |
0 +------------------------------------------------------------------+
will-it-scale.workload
4e+07 +-----------------------------------------------------------------+
| |
3.5e+07 |.++.+.++.+.++ + ++.+ +.++.+.++.+.++.+.++.+.++.++.+ +.+ |
3e+07 |-+ O : : : : : O O : O: |
| O O O O: : : : O : O O O O O O OO : : O O |
2.5e+07 |-+O O :O : : :O : O O O OO : : O |
| O: ::O: O : : O : : |
2e+07 |-+ : :: : : : : : |
| : : :: : : : : |
1.5e+07 |-+ : : :: : : :: |
1e+07 |-+ : : :: : : :: |
| :: :: : : : |
5e+06 |-+ : : :: : |
| : : :: : |
0 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp4: 96 threads 2 sockets Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-9/performance/socket/8/x86_64-rhel-8.3/threads/50%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp4/hackbench/0x4003006
commit:
68ac5b3c8d ("mm/memcg: cache vmstat data in percpu memcg_stock_pcp")
5387c90490 ("mm/memcg: improve refill_obj_stock() performance")
68ac5b3c8db2fda0 5387c90490f7f42df3209154ca9
---------------- ---------------------------
%stddev %change %stddev
\ | \
85832 ? 2% +63.3% 140156 hackbench.throughput
1058 ? 2% -38.7% 648.60 hackbench.time.elapsed_time
1058 ? 2% -38.7% 648.60 hackbench.time.elapsed_time.max
1.481e+08 ? 5% -54.7% 67055135 ? 11% hackbench.time.involuntary_context_switches
2743 ? 14% -22.0% 2139 ? 14% hackbench.time.major_page_faults
660151 -24.9% 495452 ? 8% hackbench.time.minor_page_faults
9299 -1.1% 9193 hackbench.time.percent_of_cpu_this_job_got
96400 ? 3% -41.1% 56802 hackbench.time.system_time
2045 +38.3% 2828 hackbench.time.user_time
5.048e+08 ? 3% -29.2% 3.574e+08 ? 4% hackbench.time.voluntary_context_switches
1096 ? 2% -37.4% 686.41 uptime.boot
5854506 ? 13% +24.1% 7267519 vmstat.memory.cache
725.00 -11.4% 642.33 vmstat.procs.r
713449 +13.6% 810672 ? 3% vmstat.system.cs
6.84 ? 32% +5.8 12.68 ? 2% mpstat.cpu.all.idle%
0.04 ? 3% +0.0 0.05 ? 3% mpstat.cpu.all.soft%
90.26 ? 2% -8.1 82.14 mpstat.cpu.all.sys%
2.05 ? 3% +2.3 4.31 mpstat.cpu.all.usr%
1667229 ? 36% -58.9% 685206 ? 7% numa-numastat.node0.local_node
1735244 ? 33% -56.7% 751073 ? 8% numa-numastat.node0.numa_hit
1627453 ? 36% +63.7% 2664519 ? 2% numa-numastat.node1.local_node
1646250 ? 35% +63.1% 2685426 ? 2% numa-numastat.node1.numa_hit
5.473e+08 ? 7% -44.9% 3.016e+08 ? 28% cpuidle.C1E.time
8801798 ? 5% -38.4% 5418393 ? 7% cpuidle.C1E.usage
33788932 ? 57% +550.2% 2.197e+08 ? 57% cpuidle.C6.time
47511 ? 43% +468.8% 270226 ? 51% cpuidle.C6.usage
11602677 ? 2% +72.1% 19972848 ? 6% cpuidle.POLL.time
1419906 ? 2% +87.5% 2662436 ? 9% cpuidle.POLL.usage
2590099 ? 14% +44.9% 3752923 meminfo.Active
2589860 ? 14% +44.9% 3752683 meminfo.Active(anon)
5731493 ? 14% +24.6% 7142768 meminfo.Cached
3736163 ? 22% +37.8% 5148079 meminfo.Committed_AS
14417237 ? 3% -28.9% 10244096 ? 23% meminfo.DirectMap2M
7123535 ? 12% +20.6% 8588328 meminfo.Memused
3368642 ? 24% +41.9% 4779919 meminfo.Shmem
7626039 ? 10% +18.8% 9058324 meminfo.max_used_kB
1.62 ? 3% +0.8 2.44 turbostat.C1%
8801276 ? 5% -38.4% 5417869 ? 7% turbostat.C1E
35280 ? 56% +632.2% 258306 ? 53% turbostat.C6
0.02 ? 80% +0.3 0.34 ? 60% turbostat.C6%
2.08 ? 4% +44.2% 3.00 turbostat.CPU%c1
0.01 +700.0% 0.08 ? 73% turbostat.CPU%c6
3.455e+08 ? 2% -37.8% 2.149e+08 turbostat.IRQ
281.73 +9.4% 308.34 turbostat.PkgWatt
55.52 +10.1% 61.10 turbostat.RAMWatt
1210 ? 3% -8.7% 1104 ? 2% slabinfo.file_lock_cache.active_objs
1210 ? 3% -8.7% 1104 ? 2% slabinfo.file_lock_cache.num_objs
2183 ? 10% -15.9% 1835 ? 9% slabinfo.khugepaged_mm_slot.active_objs
2183 ? 10% -15.9% 1835 ? 9% slabinfo.khugepaged_mm_slot.num_objs
917.33 ? 11% +14.8% 1053 ? 9% slabinfo.mnt_cache.active_objs
917.33 ? 11% +14.8% 1053 ? 9% slabinfo.mnt_cache.num_objs
1400 ? 10% +15.2% 1614 ? 4% slabinfo.pool_workqueue.active_objs
1400 ? 10% +15.3% 1615 ? 4% slabinfo.pool_workqueue.num_objs
1280 ? 6% +8.9% 1394 ? 3% slabinfo.task_group.active_objs
1280 ? 6% +8.9% 1394 ? 3% slabinfo.task_group.num_objs
647077 ? 14% +44.9% 937599 proc-vmstat.nr_active_anon
3097147 -1.2% 3060593 proc-vmstat.nr_dirty_background_threshold
6201868 -1.2% 6128670 proc-vmstat.nr_dirty_threshold
1432801 ? 14% +24.6% 1785471 proc-vmstat.nr_file_pages
31171101 -1.2% 30805020 proc-vmstat.nr_free_pages
46084 -2.1% 45132 proc-vmstat.nr_kernel_stack
842088 ? 24% +41.9% 1194758 proc-vmstat.nr_shmem
647077 ? 14% +44.9% 937599 proc-vmstat.nr_zone_active_anon
833331 ? 6% -20.4% 663134 ? 5% proc-vmstat.numa_hint_faults
702802 ? 5% -26.5% 516314 ? 7% proc-vmstat.numa_hint_faults_local
930391 ? 24% +31.2% 1220675 ? 2% proc-vmstat.pgactivate
4037177 ? 4% +10.7% 4469838 ? 5% proc-vmstat.pgalloc_normal
3655571 -29.7% 2570082 proc-vmstat.pgfault
172618 -36.9% 108978 ? 2% proc-vmstat.pgreuse
313159 ? 88% -75.3% 77465 ? 52% numa-vmstat.node0.nr_active_anon
55894 ? 13% -62.2% 21134 ? 75% numa-vmstat.node0.nr_anon_pages
933940 ? 32% -66.3% 315121 ? 85% numa-vmstat.node0.nr_file_pages
39376 -52.3% 18792 ? 71% numa-vmstat.node0.nr_kernel_stack
456.67 ? 13% -66.6% 152.33 ? 71% numa-vmstat.node0.nr_mlock
767.67 ? 8% -39.1% 467.33 ? 41% numa-vmstat.node0.nr_page_table_pages
313159 ? 88% -75.3% 77465 ? 52% numa-vmstat.node0.nr_zone_active_anon
2432212 ? 18% -44.6% 1348351 ? 40% numa-vmstat.node0.numa_hit
2362626 ? 20% -46.0% 1275532 ? 41% numa-vmstat.node0.numa_local
334019 ? 61% +157.8% 860935 ? 4% numa-vmstat.node1.nr_active_anon
14549 ? 42% +234.0% 48600 ? 32% numa-vmstat.node1.nr_anon_pages
498850 ? 72% +194.8% 1470620 ? 18% numa-vmstat.node1.nr_file_pages
6710 ? 5% +292.1% 26308 ? 52% numa-vmstat.node1.nr_kernel_stack
431.67 ? 66% +93.4% 834.67 ? 26% numa-vmstat.node1.nr_page_table_pages
470148 ? 76% +131.8% 1089875 ? 3% numa-vmstat.node1.nr_shmem
9641 ? 22% +99.4% 19222 ? 34% numa-vmstat.node1.nr_slab_reclaimable
334018 ? 61% +157.8% 860935 ? 4% numa-vmstat.node1.nr_zone_active_anon
1675492 ? 31% +73.1% 2900221 ? 17% numa-vmstat.node1.numa_hit
1645063 ? 33% +74.6% 2873065 ? 16% numa-vmstat.node1.numa_local
1253183 ? 88% -75.3% 310060 ? 52% numa-meminfo.node0.Active
1252943 ? 88% -75.3% 309980 ? 52% numa-meminfo.node0.Active(anon)
175290 ? 17% -71.0% 50821 ? 99% numa-meminfo.node0.AnonHugePages
223550 ? 13% -62.2% 84509 ? 74% numa-meminfo.node0.AnonPages
288934 ? 18% -60.9% 112994 ? 67% numa-meminfo.node0.AnonPages.max
3735382 ? 32% -66.3% 1260589 ? 85% numa-meminfo.node0.FilePages
39390 -52.4% 18756 ? 71% numa-meminfo.node0.KernelStack
4689183 ? 27% -60.0% 1876258 ? 64% numa-meminfo.node0.MemUsed
1829 ? 13% -66.6% 610.67 ? 71% numa-meminfo.node0.Mlocked
3071 ? 8% -39.1% 1871 ? 41% numa-meminfo.node0.PageTables
257262 ? 5% -27.2% 187219 ? 28% numa-meminfo.node0.Slab
1337025 ? 61% +157.4% 3441839 ? 4% numa-meminfo.node1.Active
1337025 ? 61% +157.4% 3441679 ? 4% numa-meminfo.node1.Active(anon)
36096 ? 71% +317.6% 150736 ? 33% numa-meminfo.node1.AnonHugePages
58189 ? 42% +234.3% 194550 ? 32% numa-meminfo.node1.AnonPages
132860 ? 44% +103.5% 270328 ? 22% numa-meminfo.node1.AnonPages.max
1995953 ? 72% +194.7% 5882894 ? 18% numa-meminfo.node1.FilePages
38570 ? 22% +99.4% 76898 ? 34% numa-meminfo.node1.KReclaimable
6709 ? 5% +292.2% 26311 ? 52% numa-meminfo.node1.KernelStack
2434224 ? 65% +175.7% 6712368 ? 18% numa-meminfo.node1.MemUsed
1725 ? 66% +94.1% 3349 ? 26% numa-meminfo.node1.PageTables
38570 ? 22% +99.4% 76898 ? 34% numa-meminfo.node1.SReclaimable
1881144 ? 76% +131.8% 4359915 ? 3% numa-meminfo.node1.Shmem
149238 ? 12% +47.3% 219837 ? 25% numa-meminfo.node1.Slab
11.40 +3.0% 11.74 ? 3% perf-stat.i.MPKI
1.03e+10 ? 2% +58.4% 1.632e+10 perf-stat.i.branch-instructions
0.78 -0.1 0.73 perf-stat.i.branch-miss-rate%
80385861 ? 2% +47.2% 1.184e+08 perf-stat.i.branch-misses
8.22 ? 3% +1.1 9.32 perf-stat.i.cache-miss-rate%
48620190 ? 4% +85.4% 90134724 ? 3% perf-stat.i.cache-misses
5.926e+08 +63.9% 9.711e+08 ? 3% perf-stat.i.cache-references
711813 +14.2% 813230 ? 3% perf-stat.i.context-switches
5.60 ? 2% -38.0% 3.47 perf-stat.i.cpi
143097 +19.7% 171280 ? 4% perf-stat.i.cpu-migrations
6090 ? 4% -47.1% 3224 ? 3% perf-stat.i.cycles-between-cache-misses
9617191 ? 9% +64.9% 15856943 ? 23% perf-stat.i.dTLB-load-misses
1.499e+10 ? 2% +58.5% 2.375e+10 perf-stat.i.dTLB-loads
4836678 ? 9% +55.0% 7497483 ? 18% perf-stat.i.dTLB-store-misses
9.308e+09 ? 2% +58.1% 1.471e+10 perf-stat.i.dTLB-stores
43238998 ? 2% +49.9% 64800618 ? 2% perf-stat.i.iTLB-load-misses
2233157 ? 2% +49.1% 3330045 perf-stat.i.iTLB-loads
5.218e+10 ? 2% +58.6% 8.278e+10 perf-stat.i.instructions
1209 +6.0% 1281 perf-stat.i.instructions-per-iTLB-miss
0.18 ? 2% +60.3% 0.29 perf-stat.i.ipc
383.58 ? 5% +55.0% 594.52 ? 2% perf-stat.i.metric.K/sec
366.58 ? 2% +58.4% 580.83 perf-stat.i.metric.M/sec
3377 ? 2% +14.3% 3858 ? 2% perf-stat.i.minor-faults
9070329 ? 3% +55.8% 14135155 ? 3% perf-stat.i.node-load-misses
5342334 ? 16% +65.6% 8848582 ? 6% perf-stat.i.node-loads
69.49 -4.3 65.21 ? 3% perf-stat.i.node-store-miss-rate%
10105591 ? 3% +44.2% 14570861 ? 3% perf-stat.i.node-store-misses
4375529 ? 9% +76.3% 7714770 ? 7% perf-stat.i.node-stores
3380 ? 2% +14.3% 3862 ? 2% perf-stat.i.page-faults
11.35 +3.3% 11.73 ? 3% perf-stat.overall.MPKI
0.78 -0.1 0.72 perf-stat.overall.branch-miss-rate%
8.20 ? 3% +1.1 9.29 perf-stat.overall.cache-miss-rate%
5.56 ? 2% -37.6% 3.47 perf-stat.overall.cpi
5981 ? 4% -46.7% 3191 ? 3% perf-stat.overall.cycles-between-cache-misses
1206 +5.9% 1277 perf-stat.overall.instructions-per-iTLB-miss
0.18 ? 2% +60.2% 0.29 perf-stat.overall.ipc
69.78 -4.5 65.25 ? 3% perf-stat.overall.node-store-miss-rate%
1.03e+10 ? 2% +58.3% 1.63e+10 perf-stat.ps.branch-instructions
80301401 ? 2% +47.1% 1.182e+08 perf-stat.ps.branch-misses
48556052 ? 4% +85.4% 90029912 ? 3% perf-stat.ps.cache-misses
5.919e+08 +63.8% 9.694e+08 ? 3% perf-stat.ps.cache-references
710830 +14.1% 810704 ? 3% perf-stat.ps.context-switches
2.899e+11 -1.0% 2.869e+11 perf-stat.ps.cpu-cycles
142960 +19.4% 170729 ? 4% perf-stat.ps.cpu-migrations
9604496 ? 9% +64.8% 15830747 ? 23% perf-stat.ps.dTLB-load-misses
1.497e+10 ? 2% +58.4% 2.372e+10 perf-stat.ps.dTLB-loads
4828506 ? 9% +55.0% 7482674 ? 18% perf-stat.ps.dTLB-store-misses
9.3e+09 ? 2% +58.0% 1.469e+10 perf-stat.ps.dTLB-stores
43197068 ? 2% +49.8% 64702614 ? 2% perf-stat.ps.iTLB-load-misses
2223191 ? 2% +49.0% 3313515 perf-stat.ps.iTLB-loads
5.214e+10 ? 2% +58.5% 8.266e+10 perf-stat.ps.instructions
2.80 ? 15% +29.0% 3.62 ? 15% perf-stat.ps.major-faults
3374 ? 2% +14.2% 3855 ? 2% perf-stat.ps.minor-faults
9043371 ? 3% +55.8% 14088071 ? 3% perf-stat.ps.node-load-misses
5348069 ? 16% +65.9% 8870147 ? 6% perf-stat.ps.node-loads
10076406 ? 3% +44.1% 14516809 ? 3% perf-stat.ps.node-store-misses
4377182 ? 9% +76.6% 7731566 ? 7% perf-stat.ps.node-stores
3377 ? 2% +14.3% 3858 ? 2% perf-stat.ps.page-faults
5.519e+13 -2.7% 5.37e+13 perf-stat.total.instructions
106704 ? 9% -25.7% 79283 ? 12% softirqs.CPU0.RCU
64259 ? 2% -19.9% 51502 softirqs.CPU0.SCHED
108011 ? 9% -26.4% 79547 ? 12% softirqs.CPU1.RCU
64894 ? 2% -23.0% 49994 ? 2% softirqs.CPU1.SCHED
107939 ? 10% -26.6% 79250 ? 12% softirqs.CPU10.RCU
62388 ? 3% -23.0% 48057 softirqs.CPU10.SCHED
108213 ? 9% -26.6% 79427 ? 12% softirqs.CPU11.RCU
61964 ? 3% -22.4% 48055 softirqs.CPU11.SCHED
107351 ? 10% -26.5% 78853 ? 12% softirqs.CPU12.RCU
61645 ? 3% -22.0% 48106 softirqs.CPU12.SCHED
107288 ? 10% -26.3% 79040 ? 12% softirqs.CPU13.RCU
61656 ? 3% -21.9% 48137 softirqs.CPU13.SCHED
107484 ? 10% -26.5% 79037 ? 12% softirqs.CPU14.RCU
62102 ? 2% -22.4% 48172 softirqs.CPU14.SCHED
108703 ? 8% -26.9% 79480 ? 12% softirqs.CPU15.RCU
62457 ? 3% -21.7% 48894 softirqs.CPU15.SCHED
114065 ? 9% -23.4% 87326 ? 12% softirqs.CPU16.RCU
62420 ? 3% -23.0% 48082 softirqs.CPU16.SCHED
10393 ?141% -100.0% 0.33 ?141% softirqs.CPU17.NET_RX
115421 ? 9% -24.4% 87236 ? 12% softirqs.CPU17.RCU
62400 ? 3% -23.5% 47747 softirqs.CPU17.SCHED
115463 ? 10% -23.4% 88488 ? 12% softirqs.CPU18.RCU
62315 ? 2% -23.0% 47965 softirqs.CPU18.SCHED
115560 ? 9% -23.6% 88270 ? 11% softirqs.CPU19.RCU
62247 ? 3% -22.7% 48092 softirqs.CPU19.SCHED
107912 ? 9% -26.5% 79292 ? 12% softirqs.CPU2.RCU
62637 ? 2% -23.4% 47974 softirqs.CPU2.SCHED
115434 ? 10% -23.8% 87904 ? 12% softirqs.CPU20.RCU
62017 ? 2% -22.5% 48094 softirqs.CPU20.SCHED
114425 ? 9% -23.3% 87728 ? 12% softirqs.CPU21.RCU
62499 ? 4% -22.8% 48219 softirqs.CPU21.SCHED
114555 ? 10% -24.2% 86806 ? 12% softirqs.CPU22.RCU
62625 ? 2% -22.9% 48262 softirqs.CPU22.SCHED
114146 ? 10% -24.1% 86673 ? 12% softirqs.CPU23.RCU
62389 ? 2% -23.0% 48068 softirqs.CPU23.SCHED
113853 ? 9% -25.6% 84677 ? 13% softirqs.CPU24.RCU
113479 ? 8% -25.4% 84659 ? 12% softirqs.CPU25.RCU
113574 ? 9% -26.1% 83983 ? 12% softirqs.CPU26.RCU
112671 ? 9% -25.6% 83830 ? 12% softirqs.CPU27.RCU
112010 ? 11% -24.8% 84194 ? 12% softirqs.CPU28.RCU
112656 ? 9% -25.5% 83900 ? 13% softirqs.CPU29.RCU
107810 ? 10% -26.5% 79217 ? 12% softirqs.CPU3.RCU
61877 ? 3% -21.3% 48709 softirqs.CPU3.SCHED
113351 ? 9% -25.7% 84204 ? 13% softirqs.CPU30.RCU
113431 ? 9% -25.0% 85097 ? 13% softirqs.CPU31.RCU
107367 ? 8% -24.2% 81383 ? 13% softirqs.CPU32.RCU
107084 ? 8% -24.0% 81383 ? 13% softirqs.CPU33.RCU
107005 ? 7% -24.7% 80625 ? 13% softirqs.CPU34.RCU
107186 ? 8% -24.6% 80862 ? 13% softirqs.CPU35.RCU
107848 ? 8% -24.2% 81791 ? 13% softirqs.CPU36.RCU
107903 ? 8% -24.2% 81755 ? 13% softirqs.CPU37.RCU
107488 ? 8% -23.5% 82194 ? 12% softirqs.CPU38.RCU
105435 ? 5% -23.2% 80981 ? 13% softirqs.CPU39.RCU
108034 ? 9% -25.9% 80034 ? 12% softirqs.CPU4.RCU
61751 ? 3% -22.2% 48020 softirqs.CPU4.SCHED
106734 ? 8% -24.1% 80972 ? 13% softirqs.CPU40.RCU
106968 ? 8% -24.2% 81096 ? 13% softirqs.CPU41.RCU
106004 ? 9% -22.7% 81922 ? 13% softirqs.CPU42.RCU
107864 ? 8% -24.1% 81843 ? 13% softirqs.CPU43.RCU
107438 ? 8% -23.9% 81759 ? 12% softirqs.CPU44.RCU
106775 ? 8% -24.2% 80962 ? 13% softirqs.CPU45.RCU
106937 ? 8% -24.1% 81204 ? 13% softirqs.CPU46.RCU
106692 ? 8% -23.9% 81160 ? 12% softirqs.CPU47.RCU
107158 ? 10% -26.3% 79027 ? 12% softirqs.CPU48.RCU
61783 ? 3% -22.9% 47654 softirqs.CPU48.SCHED
108981 ? 12% -26.6% 79984 ? 12% softirqs.CPU49.RCU
62592 ? 2% -22.6% 48442 softirqs.CPU49.SCHED
108385 ? 9% -26.7% 79488 ? 12% softirqs.CPU5.RCU
61993 ? 3% -22.5% 48029 softirqs.CPU5.SCHED
107256 ? 10% -26.3% 79076 ? 12% softirqs.CPU50.RCU
62241 ? 3% -23.0% 47947 softirqs.CPU50.SCHED
107264 ? 10% -25.9% 79437 ? 12% softirqs.CPU51.RCU
61722 ? 3% -22.1% 48074 softirqs.CPU51.SCHED
108010 ? 10% -25.3% 80714 ? 12% softirqs.CPU52.RCU
61915 ? 2% -21.5% 48573 softirqs.CPU52.SCHED
108636 ? 8% -27.2% 79077 ? 12% softirqs.CPU53.RCU
62148 ? 3% -23.1% 47775 softirqs.CPU53.SCHED
107423 ? 10% -25.9% 79587 ? 12% softirqs.CPU54.RCU
61936 ? 2% -22.1% 48256 softirqs.CPU54.SCHED
107587 ? 10% -28.5% 76908 ? 11% softirqs.CPU55.RCU
62136 ? 2% -19.9% 49790 ? 4% softirqs.CPU55.SCHED
106767 ? 10% -27.1% 77870 ? 14% softirqs.CPU56.RCU
62526 ? 3% -22.5% 48438 softirqs.CPU56.SCHED
107500 ? 10% -26.1% 79391 ? 12% softirqs.CPU57.RCU
62142 ? 3% -22.1% 48393 softirqs.CPU57.SCHED
107661 ? 10% -26.2% 79421 ? 12% softirqs.CPU58.RCU
62377 ? 3% -22.7% 48236 softirqs.CPU58.SCHED
107440 ? 10% -25.9% 79664 ? 11% softirqs.CPU59.RCU
62244 ? 4% -22.7% 48102 softirqs.CPU59.SCHED
107874 ? 10% -26.6% 79172 ? 12% softirqs.CPU6.RCU
61549 ? 3% -21.5% 48328 softirqs.CPU6.SCHED
107796 ? 10% -25.8% 79995 ? 11% softirqs.CPU60.RCU
62391 ? 2% -22.6% 48281 softirqs.CPU60.SCHED
107574 ? 10% -26.0% 79652 ? 12% softirqs.CPU61.RCU
61965 ? 3% -22.1% 48247 softirqs.CPU61.SCHED
106922 ? 10% -26.1% 79054 ? 12% softirqs.CPU62.RCU
62157 ? 2% -21.7% 48654 softirqs.CPU62.SCHED
107710 ? 9% -25.8% 79874 ? 13% softirqs.CPU63.RCU
62240 ? 2% -21.9% 48588 softirqs.CPU63.SCHED
115857 ? 10% -24.0% 88074 ? 12% softirqs.CPU64.RCU
62034 ? 3% -21.9% 48444 softirqs.CPU64.SCHED
115784 ? 10% -23.2% 88930 ? 12% softirqs.CPU65.RCU
62241 ? 3% -22.8% 48036 softirqs.CPU65.SCHED
117310 ? 10% -23.6% 89587 ? 12% softirqs.CPU66.RCU
62200 ? 3% -22.6% 48136 softirqs.CPU66.SCHED
116800 ? 10% -24.0% 88721 ? 12% softirqs.CPU67.RCU
62287 ? 2% -22.6% 48220 softirqs.CPU67.SCHED
117016 ? 10% -23.9% 89102 ? 12% softirqs.CPU68.RCU
62400 ? 3% -22.6% 48327 softirqs.CPU68.SCHED
116187 ? 10% -23.5% 88902 ? 12% softirqs.CPU69.RCU
62598 ? 3% -23.0% 48179 softirqs.CPU69.SCHED
107620 ? 10% -29.5% 75879 ? 11% softirqs.CPU7.RCU
62574 ? 3% -22.9% 48218 softirqs.CPU7.SCHED
116869 ? 10% -24.6% 88169 ? 13% softirqs.CPU70.RCU
62466 ? 2% -22.8% 48247 softirqs.CPU70.SCHED
115673 ? 10% -23.6% 88330 ? 12% softirqs.CPU71.RCU
62179 ? 2% -22.4% 48267 softirqs.CPU71.SCHED
114535 ? 10% -24.6% 86338 ? 14% softirqs.CPU72.RCU
114016 ? 10% -25.6% 84825 ? 13% softirqs.CPU73.RCU
113966 ? 10% -25.5% 84865 ? 13% softirqs.CPU74.RCU
113792 ? 9% -24.4% 86007 ? 15% softirqs.CPU75.RCU
112809 ? 10% -24.9% 84688 ? 13% softirqs.CPU76.RCU
113134 ? 10% -25.2% 84680 ? 12% softirqs.CPU77.RCU
114455 ? 9% -25.8% 84942 ? 13% softirqs.CPU78.RCU
112932 ? 8% -24.1% 85684 ? 14% softirqs.CPU79.RCU
108100 ? 10% -28.8% 77016 ? 15% softirqs.CPU8.RCU
62010 ? 3% -22.1% 48280 softirqs.CPU8.SCHED
108992 ? 8% -25.2% 81562 ? 13% softirqs.CPU80.RCU
108826 ? 9% -24.8% 81792 ? 14% softirqs.CPU81.RCU
109238 ? 8% -23.2% 83873 ? 11% softirqs.CPU82.RCU
109645 ? 8% -24.9% 82327 ? 13% softirqs.CPU83.RCU
108129 ? 8% -24.4% 81719 ? 13% softirqs.CPU84.RCU
108119 ? 8% -24.6% 81524 ? 13% softirqs.CPU85.RCU
108440 ? 8% -24.2% 82166 ? 13% softirqs.CPU86.RCU
107359 ? 6% -23.3% 82337 ? 13% softirqs.CPU87.RCU
109438 ? 9% -24.9% 82223 ? 12% softirqs.CPU88.RCU
109294 ? 8% -24.8% 82219 ? 13% softirqs.CPU89.RCU
107583 ? 9% -25.2% 80514 ? 11% softirqs.CPU9.RCU
62183 ? 3% -22.1% 48417 softirqs.CPU9.SCHED
106831 ? 9% -23.5% 81710 ? 13% softirqs.CPU90.RCU
109190 ? 8% -25.0% 81857 ? 13% softirqs.CPU91.RCU
108703 ? 8% -24.1% 82555 ? 13% softirqs.CPU92.RCU
108890 ? 8% -24.9% 81742 ? 13% softirqs.CPU93.RCU
109187 ? 8% -24.5% 82413 ? 13% softirqs.CPU94.RCU
109222 ? 8% -24.5% 82471 ? 13% softirqs.CPU95.RCU
10564212 ? 9% -25.1% 7916521 ? 12% softirqs.RCU
5930750 -12.8% 5174428 ? 2% softirqs.SCHED
123828 ? 5% -28.6% 88466 ? 8% softirqs.TIMER
33.81 ?138% -99.2% 0.28 ? 65% perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
11.10 ? 72% -99.1% 0.11 ? 13% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
55.36 ?120% -99.8% 0.13 ? 30% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
8.67 ? 79% -89.2% 0.93 ? 6% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.25 ? 41% +82.1% 0.46 ? 25% perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
1.73 ? 20% -38.4% 1.06 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
0.32 ?141% +604.2% 2.22 ? 25% perf-sched.sch_delay.avg.ms.io_schedule.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault
11.81 ? 64% -99.5% 0.06 ? 9% perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.09 ? 75% +2166.9% 2.12 ? 80% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.wp_page_copy
0.03 ?141% +644.4% 0.20 ? 84% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pte_alloc_one.__pte_alloc
0.00 ?141% +5383.3% 0.11 ?126% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.kmem_cache_alloc_node
1.68 ? 66% -88.6% 0.19 ? 6% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
2.26 ? 30% -28.0% 1.63 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_recvmsg.sock_recvmsg
2.34 ? 76% -92.4% 0.18 ? 6% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
0.00 ?141% +10950.0% 0.37 ? 77% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
0.00 ?141% +97300.0% 0.32 ? 68% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.__vma_adjust.shift_arg_pages
0.95 ?137% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.exit_mmap.mmput.begin_new_exec
4.25 ? 76% -98.0% 0.08 ? 12% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
2.90 ? 21% -45.4% 1.58 ? 2% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
1.23 ? 81% -84.3% 0.19 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_unix_gc.unix_stream_sendmsg.sock_sendmsg
2.23 ? 53% -85.0% 0.34 ? 84% perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
19.46 ?106% -99.9% 0.01 ? 3% perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
276.35 ? 71% -99.8% 0.45 ?130% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
28.34 ?141% -100.0% 0.01 ? 12% perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.blk_execute_rq
8.25 ? 58% -85.7% 1.18 ? 2% perf-sched.sch_delay.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
1.59 ? 35% -49.6% 0.80 ? 2% perf-sched.sch_delay.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
149.04 ? 94% -100.0% 0.02 ? 55% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
694.71 ? 66% -99.1% 6.24 ? 21% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
2990 ?117% -99.9% 3.34 ? 30% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
1112 ? 67% -96.6% 38.12 ? 31% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
1.52 ?141% +279.5% 5.76 ? 74% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function.[unknown]
532.39 ? 72% -82.0% 95.74 ? 65% perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
3667 ? 68% -94.7% 193.92 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.00 ?141% +841.2% 18.78 ? 32% perf-sched.sch_delay.max.ms.io_schedule.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault
1459 ? 32% -99.2% 11.28 ? 28% perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.57 ?113% +3702.4% 21.58 ? 81% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.wp_page_copy
0.19 ?141% +1324.8% 2.68 ? 85% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.pte_alloc_one.__pte_alloc
0.01 ?141% +1070.0% 0.12 ?114% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.kmem_cache_alloc_node
3363 ? 74% -96.6% 114.46 ? 43% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
893.79 ? 84% -88.0% 107.52 ? 56% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_recvmsg.sock_recvmsg
3303 ? 98% -97.1% 94.82 ? 7% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
55.17 ? 10% -47.3% 29.08 ? 30% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.02 ?141% +21726.7% 3.27 ? 86% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
0.00 ?141% +1.5e+05% 2.04 ? 65% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.__vma_adjust.shift_arg_pages
6.00 ?122% -100.0% 0.00 ?141% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.exit_mmap.mmput.begin_new_exec
2387 ? 67% -96.3% 88.32 ? 11% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
1390 ? 71% -95.0% 69.35 ? 23% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
1.08 ?141% +347.4% 4.84 ? 82% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.task_numa_work.task_work_run.exit_to_user_mode_prepare
683.45 ?117% -96.9% 21.07 ? 57% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_unix_gc.unix_stream_sendmsg.sock_sendmsg
30.67 ? 51% -78.4% 6.61 ? 94% perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
931.01 ? 89% -100.0% 0.08 ? 44% perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
3414 ? 70% -99.9% 3.94 ?111% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
3580 ? 67% -91.8% 293.88 ? 11% perf-sched.sch_delay.max.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
2695 ? 70% -90.1% 266.67 ? 7% perf-sched.sch_delay.max.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
5845 ? 59% -99.8% 13.40 ?106% perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
2.25 ? 35% -54.0% 1.03 ? 2% perf-sched.total_sch_delay.average.ms
5896 ? 57% -91.2% 519.37 ? 65% perf-sched.total_sch_delay.max.ms
8.53 ? 25% -39.2% 5.19 perf-sched.total_wait_and_delay.average.ms
3683656 ? 21% +39.7% 5145208 perf-sched.total_wait_and_delay.count.ms
14490 ? 28% -40.8% 8573 ? 6% perf-sched.total_wait_and_delay.max.ms
6.28 ? 22% -33.9% 4.15 perf-sched.total_wait_time.average.ms
1422 ? 28% -38.6% 872.88 ? 4% perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
187.09 ? 45% +52.4% 285.20 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
30.10 ? 51% -70.8% 8.78 ? 23% perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
2.67 ?141% +296.3% 10.59 ? 8% perf-sched.wait_and_delay.avg.ms.io_schedule.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault
7.72 ? 37% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
18.49 ? 50% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
18.17 ? 35% -53.5% 8.45 ? 3% perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
455.38 ? 40% -58.1% 190.88 ? 10% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
931.68 ? 39% -49.1% 474.45 ? 8% perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.blk_execute_rq
953.55 ? 49% -49.7% 479.42 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
31.25 ? 46% -74.7% 7.89 perf-sched.wait_and_delay.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
5.36 ? 30% -42.7% 3.07 ? 2% perf-sched.wait_and_delay.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
958.10 ? 22% -47.5% 503.03 ? 6% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
1142 ? 25% -36.1% 730.67 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
152.33 ? 54% +75.1% 266.67 ? 3% perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
7.00 ?141% +695.2% 55.67 ? 78% perf-sched.wait_and_delay.count.io_schedule.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault
74387 ? 13% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
9475 ? 37% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
1139 ?141% +737.1% 9535 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
43.67 ?139% +522.1% 271.67 ? 20% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate
667.00 ? 42% +69.0% 1127 ? 3% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
18.33 ?141% +700.0% 146.67 ? 42% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
15.33 ? 27% +102.2% 31.00 ? 38% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
44.67 ? 58% +200.0% 134.00 ? 9% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
422364 ? 61% +214.0% 1326343 perf-sched.wait_and_delay.count.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
2409581 ? 20% +29.0% 3108851 perf-sched.wait_and_delay.count.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
963.33 ? 54% +119.6% 2115 ? 8% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
704.67 ? 50% +71.7% 1209 ? 2% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
3334 ? 14% -70.0% 1001 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
2287 ? 66% -83.0% 387.80 ?114% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
7489 ? 64% -91.5% 637.52 ? 40% perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
6795 ? 73% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
5141 ? 68% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
1136 ? 8% -11.7% 1002 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
26.59 ?141% +623.3% 192.33 ? 92% perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
8681 ? 33% -51.2% 4236 ? 26% perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
5338 ? 27% -38.4% 3287 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
2623 ? 47% -62.6% 981.29 ? 30% perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.blk_execute_rq
4199 ? 91% -88.0% 505.02 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
7532 ? 68% -92.1% 593.82 ? 12% perf-sched.wait_and_delay.max.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
5407 ? 71% -89.8% 549.25 ? 7% perf-sched.wait_and_delay.max.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
14250 ? 31% -42.5% 8191 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
1388 ? 31% -37.2% 872.60 ? 4% perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
175.99 ? 50% +62.0% 285.10 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
21.42 ? 40% -63.4% 7.84 ? 25% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.45 ?141% +235.8% 1.51 ? 40% perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function.[unknown]
5.50 ? 34% -48.2% 2.84 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
2.36 ?141% +255.1% 8.37 ? 10% perf-sched.wait_time.avg.ms.io_schedule.put_and_wait_on_page_locked.do_swap_page.__handle_mm_fault
0.06 ? 26% -67.4% 0.02 ? 24% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.__pmd_alloc.__handle_mm_fault
0.01 ? 78% +5014.3% 0.72 ?101% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.do_anonymous_page
1.25 ? 31% -69.8% 0.38 ? 89% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault
0.04 ? 29% -60.4% 0.01 ? 74% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_prepare_creds.prepare_creds
0.90 ?140% +652.4% 6.81 ? 37% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.__kmalloc_node_track_caller
0.41 ?125% +1185.4% 5.24 ? 43% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.kmem_cache_alloc_node
6.04 ? 30% -53.5% 2.81 ? 6% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
1.83 ?121% -99.2% 0.02 ? 24% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__split_huge_pud.__handle_mm_fault.handle_mm_fault
5.37 ? 27% -32.1% 3.64 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_recvmsg.sock_recvmsg
7.16 ? 43% -61.9% 2.73 ? 5% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
0.90 ? 33% +232.8% 3.00 ? 44% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.33 ? 67% -94.9% 0.02 ? 11% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.path_lookupat
0.12 ? 93% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.__anon_vma_prepare.do_anonymous_page
0.05 ? 66% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.__anon_vma_prepare.do_fault
0.10 ? 91% +405.1% 0.52 ? 16% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.__vma_adjust.__split_vma
0.55 ? 80% -99.0% 0.01 ?141% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.anon_vma_clone.__split_vma
0.47 ? 76% +171.2% 1.27 ? 43% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.unlink_file_vma.free_pgtables
0.10 ? 57% +828.7% 0.92 ?100% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.setup_arg_pages.load_elf_binary
0.70 ? 74% -84.3% 0.11 ? 83% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
0.04 ?141% +2100.0% 0.92 ?111% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_lookupat
2.74 ? 29% -58.9% 1.13 ? 9% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.17 ? 68% +579.3% 1.16 ? 25% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
0.25 ? 31% -95.6% 0.01 ? 75% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__anon_vma_prepare.do_fault
0.60 ? 80% +177.1% 1.65 ? 61% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
0.71 ? 82% -95.0% 0.04 ? 70% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
14.23 ? 42% -64.7% 5.03 ? 2% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
6.14 ? 24% -40.5% 3.65 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
0.06 ? 82% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.mmap_region
0.01 ?141% +8730.4% 0.68 ? 60% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64
18.16 ? 35% -53.5% 8.45 ? 3% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
4.20 ?141% +228.1% 13.77 ? 19% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
4.54 ? 25% -36.4% 2.89 ? 6% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wait_for_unix_gc.unix_stream_sendmsg.sock_sendmsg
0.64 ? 67% +3519.8% 23.06 ? 67% perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas
27.45 ? 96% -91.8% 2.24 ? 7% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
455.23 ? 40% -58.7% 188.20 ? 9% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
1.36 ? 72% +2071.8% 29.61 ?113% perf-sched.wait_time.avg.ms.schedule_timeout.__skb_wait_for_more_packets.unix_dgram_recvmsg.__sys_recvfrom
903.34 ? 42% -47.5% 474.44 ? 8% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.blk_execute_rq
953.54 ? 49% -49.7% 479.41 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
1.86 ? 71% +100.5% 3.73 ? 13% perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
12.86 ? 41% -57.9% 5.42 ? 37% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
23.00 ? 42% -70.8% 6.72 perf-sched.wait_time.avg.ms.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
3.77 ? 29% -39.7% 2.27 perf-sched.wait_time.avg.ms.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.00 ?141% +227.3% 0.01 ? 6% perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
0.10 ? 2% -26.2% 0.08 ? 25% perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion_interruptible.usb_stor_control_thread.kthread
958.06 ? 22% -47.5% 503.02 ? 6% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
2.45 ? 28% -37.8% 1.52 ? 15% perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
993.60 ? 14% -26.5% 730.65 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
3000 ? 27% -66.7% 1000 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
1181 ? 65% -68.8% 368.37 ?124% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
4.42 ?141% +297.0% 17.56 ? 10% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function.[unknown]
49.31 ? 6% -22.8% 38.05 ? 3% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
3927 ? 56% -85.4% 572.35 ? 56% perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.41 ?141% +1307.5% 33.97 ? 49% perf-sched.wait_time.max.ms.io_schedule.__lock_page_killable.filemap_fault.__do_fault
0.11 ? 14% -59.7% 0.04 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.__pmd_alloc.__handle_mm_fault
54.77 ? 25% -41.6% 32.01 ? 17% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages.alloc_pages_vma.shmem_alloc_page
12.59 ? 27% -76.2% 3.00 ? 23% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault
0.06 ? 30% -71.9% 0.02 ? 80% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_prepare_creds.prepare_creds
3.90 ?141% +241.3% 13.32 ? 20% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.__kmalloc_node_track_caller
1.82 ?137% +427.1% 9.57 ? 65% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.kmem_cache_alloc_node
3535 ? 69% -93.9% 214.54 ? 56% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
3.63 ?122% -99.2% 0.03 ? 66% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__split_huge_pud.__handle_mm_fault.handle_mm_fault
1398 ? 64% -91.7% 116.36 ? 47% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_recvmsg.sock_recvmsg
3754 ? 83% -96.6% 125.96 ? 25% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_sendmsg.sock_sendmsg
4.04 ?141% +266.5% 14.80 ? 37% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.change_prot_numa
31.43 ? 65% +1188.1% 404.82 ?108% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.07 ?138% +365.3% 4.96 ? 86% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
1.60 ? 95% -98.3% 0.03 ? 37% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.path_lookupat
0.20 ? 95% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.__anon_vma_prepare.do_anonymous_page
0.13 ? 77% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.__anon_vma_prepare.do_fault
4.70 ?120% -99.8% 0.01 ?141% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.anon_vma_clone.__split_vma
0.12 ?113% -91.9% 0.01 ? 83% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.alloc_bprm.do_execveat_common
0.35 ? 69% +1761.3% 6.51 ?104% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.setup_arg_pages.load_elf_binary
9.68 ? 44% +3498.9% 348.31 ?132% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
14.23 ? 70% -92.7% 1.03 ? 71% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
0.04 ?141% +2329.6% 1.01 ? 97% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_lookupat
1610 ? 84% -97.4% 42.63 ? 6% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
1.24 ?109% +506.2% 7.52 ? 77% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
1.02 ? 74% -98.7% 0.01 ? 82% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__anon_vma_prepare.do_fault
1.92 ? 62% -95.9% 0.08 ? 92% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
3042 ? 71% -96.7% 101.00 ? 16% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
0.92 ?140% +326.6% 3.94 ? 59% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_write
1907 ? 85% -95.1% 93.61 ? 34% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg
0.10 ?106% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.mmap_region
10.28 ?141% +254.2% 36.41 ? 25% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.task_numa_work.task_work_run.exit_to_user_mode_prepare
0.67 ?106% +512.9% 4.08 ? 79% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap
0.01 ?141% +79165.2% 6.08 ? 32% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64
1135 ? 8% -11.7% 1001 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
17.32 ?141% +934.2% 179.11 ?103% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
1080 ?112% -95.2% 51.78 ? 4% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_unix_gc.unix_stream_sendmsg.sock_sendmsg
16.44 ?103% +3979.0% 670.44 ? 69% perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas
965.66 ? 86% -98.4% 15.36 ? 95% perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork
5338 ? 27% -38.4% 3287 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
1.36 ? 72% +2071.8% 29.61 ?113% perf-sched.wait_time.max.ms.schedule_timeout.__skb_wait_for_more_packets.unix_dgram_recvmsg.__sys_recvfrom
2538 ? 52% -61.3% 981.28 ? 30% perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.blk_execute_rq
4199 ? 91% -88.0% 505.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
1.86 ? 71% +100.5% 3.73 ? 13% perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
1012 ? 74% -82.0% 182.69 ? 36% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.00 ?141% +227.3% 0.01 ? 6% perf-sched.wait_time.max.ms.schedule_timeout.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
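For reference, the sock_alloc_send_pskb / unix_stream_read_generic wait entries above come from the AF_UNIX stream send/receive pair that the unix1 testcase drives in a tight loop. A minimal stand-alone sketch of that kind of workload (not the actual will-it-scale source; the socketpair setup, buffer size and loop structure are illustrative assumptions) looks like:

        /* ping-pong over an AF_UNIX stream socketpair: each write allocates an
         * skb (kmalloc_reserve / kmem_cache_alloc_node) and each read frees it
         * (kfree / kmem_cache_free), matching the profile entries below. */
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
                int fds[2];
                char buf[64] = { 0 };

                if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds))
                        return 1;
                for (;;) {
                        if (write(fds[0], buf, sizeof(buf)) != sizeof(buf))
                                break;
                        if (read(fds[1], buf, sizeof(buf)) != sizeof(buf))
                                break;
                }
                return 0;
        }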
51.37 ? 2% -51.4 0.00 perf-profile.calltrace.cycles-pp.__libc_write
50.84 ? 2% -50.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
50.74 ? 2% -50.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
49.58 -49.6 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
48.07 ? 2% -48.1 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
46.06 -46.1 0.00 perf-profile.calltrace.cycles-pp.__libc_read
45.43 -45.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
45.34 ? 2% -45.3 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
44.80 ? 2% -44.8 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
44.20 ? 2% -44.2 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
10.28 ? 8% -10.3 0.00 perf-profile.calltrace.cycles-pp.get_obj_cgroup_from_current.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
17.51 ? 6% -8.0 9.55 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
6.05 ? 71% -6.0 0.00 perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event
5.64 ? 71% -5.6 0.00 perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow
5.61 ? 71% -5.6 0.00 perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
32.34 ? 6% -5.1 27.21 perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
32.40 ? 6% -5.1 27.30 perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
5.01 ? 71% -5.0 0.00 perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
14.13 ? 12% -3.9 10.19 perf-profile.calltrace.cycles-pp.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
16.16 ? 9% -3.3 12.87 perf-profile.calltrace.cycles-pp.kfree.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
42.82 ? 2% -2.4 40.42 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read
42.92 ? 2% -2.4 40.55 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
2.89 ? 56% -2.3 0.60 ? 2% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
2.86 ? 56% -2.3 0.59 ? 2% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
43.41 ? 2% -2.0 41.39 perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
43.56 ? 2% -1.9 41.61 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.43 ? 2% -1.7 44.68 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write.vfs_write
0.80 ? 18% +0.5 1.29 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
0.00 +0.5 0.51 perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.83 ? 18% +0.5 1.35 perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.00 +0.5 0.55 perf-profile.calltrace.cycles-pp.fput_many.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.11 ? 10% +0.6 1.66 ? 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg.sock_sendmsg
0.00 +0.6 0.55 ? 3% perf-profile.calltrace.cycles-pp.__build_skb_around.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.00 +0.6 0.56 ? 2% perf-profile.calltrace.cycles-pp.__check_object_size.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.00 +0.6 0.56 ? 3% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 ? 2% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.01 ? 8% +0.6 1.59 ? 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.sock_def_readable.unix_stream_sendmsg
0.00 +0.6 0.59 ? 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg
0.84 ? 4% +0.6 1.44 ? 2% perf-profile.calltrace.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.00 +0.6 0.60 ? 3% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
0.00 +0.6 0.61 ? 3% perf-profile.calltrace.cycles-pp.schedule_timeout.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.88 ? 10% +0.6 1.49 perf-profile.calltrace.cycles-pp.__check_object_size.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
0.91 ? 9% +0.6 1.53 ? 2% perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
0.99 ? 14% +0.6 1.61 perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
0.00 +0.7 0.68 ? 4% perf-profile.calltrace.cycles-pp.__fget_files.__fget_light.__fdget_pos.ksys_read.do_syscall_64
0.00 +0.8 0.77 ? 6% perf-profile.calltrace.cycles-pp.mutex_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +0.8 0.77 ? 3% perf-profile.calltrace.cycles-pp._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.00 +0.8 0.80 ? 4% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.82 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unix_write_space
0.95 ? 43% +0.8 1.77 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +0.8 0.82 ? 4% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.83 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.unix_write_space.sock_wfree
0.00 +0.8 0.83 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.unix_write_space.sock_wfree.unix_destruct_scm
0.00 +0.8 0.84 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state
0.99 ? 41% +0.8 1.84 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.99 ? 42% +0.9 1.84 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
0.00 +0.9 0.86 ? 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_partial_node.___slab_alloc.__slab_alloc.__kmalloc_node_track_caller
0.00 +0.9 0.89 perf-profile.calltrace.cycles-pp.mutex_unlock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.63 ? 9% +0.9 1.53 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +0.9 0.95 ? 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_partial_node.___slab_alloc.__slab_alloc.kmem_cache_alloc_node
0.00 +1.0 0.99 perf-profile.calltrace.cycles-pp.fput_many.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.20 ? 36% +1.0 2.22 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.20 ? 36% +1.0 2.23 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.20 ? 36% +1.0 2.23 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
1.22 ? 36% +1.0 2.25 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
0.00 +1.0 1.04 ? 5% perf-profile.calltrace.cycles-pp.get_partial_node.___slab_alloc.__slab_alloc.__kmalloc_node_track_caller.kmalloc_reserve
0.21 ?141% +1.1 1.27 perf-profile.calltrace.cycles-pp.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all
0.00 +1.1 1.06 ? 11% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node
0.22 ?141% +1.1 1.30 ? 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.prepare_to_wait.sock_alloc_send_pskb.unix_stream_sendmsg
0.23 ?141% +1.1 1.34 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.prepare_to_wait.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
0.00 +1.1 1.11 ? 8% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.consume_skb
0.00 +1.1 1.13 ? 4% perf-profile.calltrace.cycles-pp.get_partial_node.___slab_alloc.__slab_alloc.kmem_cache_alloc_node.__alloc_skb
0.23 ?141% +1.1 1.38 perf-profile.calltrace.cycles-pp.prepare_to_wait.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.17 ?141% +1.2 1.40 ? 3% perf-profile.calltrace.cycles-pp.___slab_alloc.__slab_alloc.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
0.18 ?141% +1.2 1.43 ? 4% perf-profile.calltrace.cycles-pp.__slab_alloc.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
0.67 ? 19% +1.3 1.93 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
1.94 ? 12% +1.3 3.21 perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
1.96 ? 12% +1.3 3.24 perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.04 ? 20% +1.3 2.32 perf-profile.calltrace.cycles-pp.__fget_files.__fget_light.__fdget_pos.ksys_write.do_syscall_64
1.99 ? 11% +1.3 3.27 perf-profile.calltrace.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.69 ? 19% +1.3 1.99 perf-profile.calltrace.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
1.13 ? 18% +1.3 2.44 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.17 ?141% +1.3 1.51 ? 3% perf-profile.calltrace.cycles-pp.__slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.18 ?141% +1.4 1.53 ? 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.20 ?141% +1.4 1.60 ? 2% perf-profile.calltrace.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter.new_sync_read
0.00 +1.5 1.49 ? 3% perf-profile.calltrace.cycles-pp.___slab_alloc.__slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
0.00 +1.5 1.50 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_partial_node.___slab_alloc.__slab_alloc
0.67 ? 16% +1.6 2.25 ? 2% perf-profile.calltrace.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.89 ? 18% +1.7 2.60 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
0.00 +1.8 1.81 perf-profile.calltrace.cycles-pp.__slab_free.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.39 ? 72% +1.9 2.26 perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
1.20 ? 21% +1.9 3.15 perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
1.25 ? 19% +2.0 3.21 perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic
1.28 ? 19% +2.0 3.25 perf-profile.calltrace.cycles-pp.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
1.29 ? 18% +2.0 3.26 perf-profile.calltrace.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.00 +2.1 2.07 ? 12% perf-profile.calltrace.cycles-pp.propagate_protected_usage.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller
14.02 ? 6% +2.3 16.28 perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
14.09 ? 6% +2.3 16.40 perf-profile.calltrace.cycles-pp.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.00 +2.5 2.47 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +5.1 5.08 perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic
0.00 +5.7 5.66 perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg
0.65 ? 25% +5.8 6.47 perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb
0.00 +5.8 5.83 perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.unix_stream_read_generic.unix_stream_recvmsg.sock_read_iter
0.74 ? 22% +5.9 6.69 perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
0.80 ? 22% +6.0 6.79 perf-profile.calltrace.cycles-pp.obj_cgroup_charge.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.00 +10.0 9.98 perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.consume_skb
0.00 +11.1 11.11 perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kfree.consume_skb.unix_stream_read_generic
0.00 +11.5 11.46 perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kfree.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
1.30 ? 27% +11.5 12.80 perf-profile.calltrace.cycles-pp.page_counter_try_charge.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve
1.64 ? 23% +11.6 13.22 perf-profile.calltrace.cycles-pp.obj_cgroup_charge_pages.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb
1.71 ? 23% +11.6 13.35 perf-profile.calltrace.cycles-pp.obj_cgroup_charge.__kmalloc_node_track_caller.kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
0.00 +42.6 42.61 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +44.1 44.07 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +47.0 46.99 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +50.6 50.56 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +95.5 95.50 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +95.7 95.74 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
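The calltrace deltas above show where the cycles move: get_obj_cgroup_from_current() nearly disappears from the sampled paths, while obj_cgroup_charge() / obj_cgroup_uncharge_pages() and their page_counter_try_charge() / page_counter_uncharge() / propagate_protected_usage() children now dominate the unix send/receive path (the children/self breakdown further down shows the same pattern for refill_obj_stock, drain_obj_stock and mod_objcg_state). As a very rough, self-contained illustration of the per-cpu stock idea those shared counters normally sit behind (NOT the kernel implementation; the names, the 32K threshold and the thread-local stand-in for per-cpu data are all illustrative):

        #include <stdatomic.h>
        #include <stdio.h>

        #define STOCK_MAX (32UL * 1024)

        static atomic_ulong shared_usage;          /* stand-in for the shared page_counter     */
        static _Thread_local unsigned long cached; /* stand-in for the per-cpu obj_stock bytes */

        static void obj_charge(unsigned long nr_bytes)
        {
                if (cached >= nr_bytes) {          /* fast path: purely CPU/thread local */
                        cached -= nr_bytes;
                        return;
                }
                /* slow path: refill from the shared counter in one larger step */
                atomic_fetch_add(&shared_usage, STOCK_MAX);
                cached += STOCK_MAX - nr_bytes;
        }

        static void obj_uncharge(unsigned long nr_bytes)
        {
                cached += nr_bytes;
                if (cached > 2 * STOCK_MAX) {      /* occasionally flush the excess back */
                        atomic_fetch_sub(&shared_usage, cached - STOCK_MAX);
                        cached = STOCK_MAX;
                }
        }

        int main(void)
        {
                for (int i = 0; i < 1000000; i++) {
                        obj_charge(256);
                        obj_uncharge(256);
                }
                printf("shared counter usage after loop: %lu bytes\n",
                       (unsigned long)atomic_load(&shared_usage));
                return 0;
        }

When the sending and receiving sides run on different CPUs, a charge on one CPU and the matching uncharge on another cannot cancel inside one local stock, so more of the traffic turns into atomic updates on the shared, hierarchy-propagating counters; that would be consistent with the page_counter_* growth above, though the exact mechanism is for the memcg folks to confirm.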
51.50 ? 2% -51.1 0.36 ? 11% perf-profile.children.cycles-pp.__libc_write
46.20 -46.2 0.00 perf-profile.children.cycles-pp.__libc_read
14.85 ? 7% -14.3 0.58 ? 2% perf-profile.children.cycles-pp.get_obj_cgroup_from_current
17.54 ? 6% -8.0 9.59 perf-profile.children.cycles-pp.kmem_cache_alloc_node
6.99 ? 13% -7.0 0.00 perf-profile.children.cycles-pp.drain_obj_stock
7.13 ? 13% -7.0 0.17 perf-profile.children.cycles-pp.refill_obj_stock
32.35 ? 6% -5.1 27.23 perf-profile.children.cycles-pp.__alloc_skb
32.40 ? 6% -5.1 27.30 perf-profile.children.cycles-pp.alloc_skb_with_frags
14.15 ? 12% -3.9 10.21 perf-profile.children.cycles-pp.kmem_cache_free
4.72 ? 6% -3.7 1.05 ? 2% perf-profile.children.cycles-pp.mod_objcg_state
16.17 ? 9% -3.3 12.88 perf-profile.children.cycles-pp.kfree
42.85 ? 2% -2.4 40.45 perf-profile.children.cycles-pp.unix_stream_read_generic
42.92 ? 2% -2.4 40.56 perf-profile.children.cycles-pp.unix_stream_recvmsg
43.41 ? 2% -2.0 41.39 perf-profile.children.cycles-pp.sock_read_iter
43.58 ? 2% -1.9 41.63 perf-profile.children.cycles-pp.new_sync_read
2.11 ? 10% -1.9 0.19 ? 2% perf-profile.children.cycles-pp.get_mem_cgroup_from_objcg
46.45 ? 2% -1.7 44.72 perf-profile.children.cycles-pp.unix_stream_sendmsg
44.21 ? 2% -1.6 42.63 perf-profile.children.cycles-pp.vfs_read
0.87 ? 39% -0.6 0.31 ? 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.59 ? 40% -0.4 0.20 ? 4% perf-profile.children.cycles-pp.select_idle_sibling
0.56 ? 43% -0.4 0.18 ? 2% perf-profile.children.cycles-pp.pick_next_task_fair
0.38 ? 36% -0.2 0.14 ? 3% perf-profile.children.cycles-pp.select_idle_cpu
0.32 ? 31% -0.2 0.15 ? 3% perf-profile.children.cycles-pp.update_cfs_group
0.27 ? 37% -0.2 0.10 ? 4% perf-profile.children.cycles-pp.available_idle_cpu
0.23 ? 40% -0.1 0.09 ? 5% perf-profile.children.cycles-pp.update_rq_clock
0.21 ? 37% -0.1 0.08 perf-profile.children.cycles-pp.set_next_entity
0.17 ? 41% -0.1 0.06 ? 7% perf-profile.children.cycles-pp.__switch_to_asm
0.42 ? 5% -0.1 0.33 perf-profile.children.cycles-pp.__entry_text_start
0.07 ? 7% +0.0 0.09 ? 9% perf-profile.children.cycles-pp.security_socket_getpeersec_dgram
0.06 ? 23% +0.0 0.09 ? 14% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.06 ? 23% +0.0 0.09 ? 9% perf-profile.children.cycles-pp.irq_exit_rcu
0.03 ? 70% +0.0 0.07 ? 7% perf-profile.children.cycles-pp.finish_wait
0.36 ? 2% +0.0 0.40 ? 2% perf-profile.children.cycles-pp.scheduler_tick
0.13 ? 9% +0.0 0.16 ? 2% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.04 ? 70% +0.0 0.08 ? 6% perf-profile.children.cycles-pp.check_stack_object
0.07 ? 6% +0.0 0.11 ? 4% perf-profile.children.cycles-pp.rcu_all_qs
0.05 +0.0 0.09 perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.33 ? 2% +0.0 0.38 ? 2% perf-profile.children.cycles-pp.task_tick_fair
0.06 +0.0 0.10 ? 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.00 +0.1 0.05 perf-profile.children.cycles-pp.rw_verify_area
0.00 +0.1 0.05 perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.05 perf-profile.children.cycles-pp.put_pid
0.00 +0.1 0.05 perf-profile.children.cycles-pp.apparmor_socket_recvmsg
0.00 +0.1 0.05 ? 8% perf-profile.children.cycles-pp.kmalloc_slab
0.02 ?141% +0.1 0.08 ? 6% perf-profile.children.cycles-pp.__softirqentry_text_start
0.42 ? 3% +0.1 0.48 perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.1 0.06 ? 8% perf-profile.children.cycles-pp.menu_select
0.40 ? 3% +0.1 0.46 ? 2% perf-profile.children.cycles-pp.update_process_times
0.41 ? 3% +0.1 0.47 ? 2% perf-profile.children.cycles-pp.tick_sched_handle
0.05 ? 84% +0.1 0.11 ? 14% perf-profile.children.cycles-pp.queue_event
0.00 +0.1 0.06 ? 13% perf-profile.children.cycles-pp.shmem_write_end
0.00 +0.1 0.06 ? 7% perf-profile.children.cycles-pp.perf_output_put_handle
0.57 ? 2% +0.1 0.63 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.05 ? 84% +0.1 0.12 ? 11% perf-profile.children.cycles-pp.ordered_events__queue
0.46 ? 3% +0.1 0.53 perf-profile.children.cycles-pp.__hrtimer_run_queues
0.56 ? 2% +0.1 0.63 perf-profile.children.cycles-pp.hrtimer_interrupt
0.02 ?141% +0.1 0.09 ? 5% perf-profile.children.cycles-pp.perf_output_end
0.08 ? 16% +0.1 0.16 ? 10% perf-profile.children.cycles-pp.__mod_memcg_state
0.15 ? 16% +0.1 0.24 perf-profile.children.cycles-pp.schedule_idle
0.12 +0.1 0.21 ? 4% perf-profile.children.cycles-pp.wait_for_unix_gc
0.14 ? 3% +0.1 0.23 ? 4% perf-profile.children.cycles-pp.__might_fault
0.06 ? 83% +0.1 0.15 ? 8% perf-profile.children.cycles-pp.process_simple
0.07 ? 86% +0.1 0.16 ? 7% perf-profile.children.cycles-pp.record__finish_output
0.07 ? 86% +0.1 0.16 ? 7% perf-profile.children.cycles-pp.perf_session__process_events
0.03 ?141% +0.1 0.12 ? 10% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.69 ? 3% +0.1 0.78 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.63 ? 3% +0.1 0.73 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.03 ?141% +0.1 0.13 ? 13% perf-profile.children.cycles-pp.shmem_write_begin
0.25 ? 5% +0.1 0.35 ? 4% perf-profile.children.cycles-pp.security_socket_sendmsg
0.07 ? 86% +0.1 0.17 ? 7% perf-profile.children.cycles-pp.cmd_record
0.17 ? 4% +0.1 0.29 ? 2% perf-profile.children.cycles-pp.__ksize
0.10 ? 43% +0.1 0.23 ? 6% perf-profile.children.cycles-pp.memcpy_erms
0.17 ? 14% +0.1 0.30 ? 3% perf-profile.children.cycles-pp.aa_file_perm
0.19 ? 4% +0.1 0.33 ? 2% perf-profile.children.cycles-pp.__virt_addr_valid
0.21 ? 2% +0.1 0.34 ? 3% perf-profile.children.cycles-pp.__might_sleep
0.11 ? 46% +0.1 0.24 ? 5% perf-profile.children.cycles-pp.perf_output_copy
0.19 ? 41% +0.1 0.33 ? 11% perf-profile.children.cycles-pp.generic_perform_write
0.19 ? 41% +0.1 0.33 ? 12% perf-profile.children.cycles-pp.__generic_file_write_iter
0.19 ? 41% +0.1 0.33 ? 11% perf-profile.children.cycles-pp.generic_file_write_iter
0.26 +0.2 0.41 ? 3% perf-profile.children.cycles-pp.fsnotify
0.13 ? 3% +0.2 0.29 ? 3% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.24 +0.2 0.43 ? 2% perf-profile.children.cycles-pp.security_socket_recvmsg
0.29 ? 5% +0.2 0.49 ? 4% perf-profile.children.cycles-pp.__check_heap_object
0.37 ? 18% +0.2 0.58 ? 6% perf-profile.children.cycles-pp.copyin
0.33 ? 3% +0.2 0.54 ? 4% perf-profile.children.cycles-pp.___might_sleep
0.11 ? 15% +0.2 0.32 ? 5% perf-profile.children.cycles-pp.drain_stock
0.29 +0.2 0.51 ? 2% perf-profile.children.cycles-pp.sock_recvmsg
0.38 ? 3% +0.2 0.61 ? 3% perf-profile.children.cycles-pp.aa_sk_perm
0.20 ? 45% +0.2 0.43 ? 4% perf-profile.children.cycles-pp.perf_output_sample
0.12 ? 16% +0.2 0.36 ? 2% perf-profile.children.cycles-pp.refill_stock
0.32 ? 7% +0.2 0.56 ? 3% perf-profile.children.cycles-pp.__build_skb_around
0.46 ? 5% +0.3 0.78 ? 3% perf-profile.children.cycles-pp._copy_from_iter
0.44 ? 7% +0.3 0.76 perf-profile.children.cycles-pp.common_file_perm
0.21 ? 15% +0.4 0.56 ? 5% perf-profile.children.cycles-pp.unfreeze_partials
0.46 ? 6% +0.4 0.81 ? 3% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.23 ? 14% +0.4 0.59 ? 3% perf-profile.children.cycles-pp.put_cpu_partial
0.09 ? 19% +0.4 0.45 ? 3% perf-profile.children.cycles-pp.try_charge
0.36 ? 6% +0.4 0.78 ? 7% perf-profile.children.cycles-pp.mutex_lock
0.62 ? 4% +0.5 1.10 perf-profile.children.cycles-pp.security_file_permission
0.83 ? 18% +0.5 1.35 perf-profile.children.cycles-pp.copyout
0.85 ? 4% +0.6 1.45 ? 2% perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
0.91 ? 10% +0.6 1.53 perf-profile.children.cycles-pp.simple_copy_to_iter
0.99 ? 15% +0.6 1.62 perf-profile.children.cycles-pp._copy_to_iter
0.26 ? 15% +0.6 0.90 perf-profile.children.cycles-pp.mutex_unlock
1.18 ? 18% +0.7 1.88 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.47 ? 26% +0.8 1.27 perf-profile.children.cycles-pp.unix_write_space
0.96 ? 42% +0.8 1.79 perf-profile.children.cycles-pp.intel_idle
1.23 ? 8% +0.9 2.09 ? 2% perf-profile.children.cycles-pp.__check_object_size
1.01 ? 41% +0.9 1.87 perf-profile.children.cycles-pp.cpuidle_enter
1.01 ? 41% +0.9 1.87 perf-profile.children.cycles-pp.cpuidle_enter_state
0.61 ? 20% +0.9 1.51 perf-profile.children.cycles-pp.prepare_to_wait
1.20 ? 36% +1.0 2.23 perf-profile.children.cycles-pp.start_secondary
1.22 ? 36% +1.0 2.25 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
1.22 ? 36% +1.0 2.25 perf-profile.children.cycles-pp.cpu_startup_entry
1.22 ? 36% +1.0 2.25 perf-profile.children.cycles-pp.do_idle
0.44 ? 20% +1.1 1.54 perf-profile.children.cycles-pp.fput_many
0.49 ? 16% +1.1 1.61 ? 2% perf-profile.children.cycles-pp.skb_unlink
1.95 ? 12% +1.3 3.21 perf-profile.children.cycles-pp.__skb_datagram_iter
1.97 ? 11% +1.3 3.25 perf-profile.children.cycles-pp.skb_copy_datagram_iter
1.99 ? 11% +1.3 3.27 perf-profile.children.cycles-pp.unix_stream_read_actor
0.70 ? 18% +1.3 1.99 perf-profile.children.cycles-pp.skb_queue_tail
49.60 +1.3 50.92 perf-profile.children.cycles-pp.ksys_write
0.67 ? 16% +1.6 2.25 ? 2% perf-profile.children.cycles-pp.skb_set_owner_w
0.57 ? 7% +1.6 2.18 perf-profile.children.cycles-pp.get_partial_node
1.34 ? 18% +1.7 3.02 perf-profile.children.cycles-pp.__fget_files
1.52 ? 16% +1.8 3.27 perf-profile.children.cycles-pp.__fget_light
1.54 ? 16% +1.8 3.32 perf-profile.children.cycles-pp.__fdget_pos
1.20 ? 21% +1.9 3.15 perf-profile.children.cycles-pp.sock_wfree
0.94 ? 5% +1.9 2.89 perf-profile.children.cycles-pp.___slab_alloc
1.26 ? 19% +2.0 3.22 perf-profile.children.cycles-pp.unix_destruct_scm
0.97 ? 4% +2.0 2.94 perf-profile.children.cycles-pp.__slab_alloc
1.29 ? 18% +2.0 3.26 perf-profile.children.cycles-pp.skb_release_all
1.28 ? 19% +2.0 3.25 perf-profile.children.cycles-pp.skb_release_head_state
14.06 ? 6% +2.3 16.34 perf-profile.children.cycles-pp.__kmalloc_node_track_caller
14.10 ? 6% +2.3 16.41 perf-profile.children.cycles-pp.kmalloc_reserve
2.86 ? 18% +2.3 5.20 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.92 ? 14% +3.2 4.08 perf-profile.children.cycles-pp.__slab_free
3.25 ? 16% +3.3 6.54 perf-profile.children.cycles-pp._raw_spin_lock
2.85 ? 7% +3.9 6.75 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.53 ? 22% +4.8 5.32 ? 10% perf-profile.children.cycles-pp.propagate_protected_usage
1.52 ? 25% +13.6 15.15 perf-profile.children.cycles-pp.page_counter_cancel
3.58 ? 17% +13.7 17.30 perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
1.76 ? 24% +15.3 17.10 perf-profile.children.cycles-pp.page_counter_uncharge
2.39 ? 23% +17.5 19.93 perf-profile.children.cycles-pp.obj_cgroup_charge_pages
2.51 ? 23% +17.6 20.16 perf-profile.children.cycles-pp.obj_cgroup_charge
2.02 ? 26% +17.7 19.70 perf-profile.children.cycles-pp.page_counter_try_charge
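Summing the two biggest movers just above gives a rough sense of scale (numbers copied from the children table):

        obj_cgroup_charge:            2.51%  ->  20.16%
        obj_cgroup_uncharge_pages:    3.58%  ->  17.30%
                                     ------      ------
        combined:                    ~6.1%      ~37.5%

i.e. roughly a third of all sampled cycles now sit in the charge/uncharge paths for this workload, which is broadly in line with the reported throughput drop.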
14.66 ? 7% -14.2 0.46 ? 2% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
9.63 ? 9% -8.6 1.00 perf-profile.self.cycles-pp.kfree
8.53 ? 13% -7.1 1.43 perf-profile.self.cycles-pp.kmem_cache_free
6.78 ? 4% -5.9 0.84 ? 2% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
5.57 ? 2% -5.0 0.58 ? 4% perf-profile.self.cycles-pp.kmem_cache_alloc_node
4.56 ? 7% -3.8 0.75 ? 2% perf-profile.self.cycles-pp.mod_objcg_state
2.10 ? 10% -1.9 0.18 ? 2% perf-profile.self.cycles-pp.get_mem_cgroup_from_objcg
0.32 ? 49% -0.2 0.08 ? 5% perf-profile.self.cycles-pp.update_curr
0.27 ? 37% -0.2 0.09 ? 5% perf-profile.self.cycles-pp.available_idle_cpu
0.32 ? 33% -0.2 0.15 ? 3% perf-profile.self.cycles-pp.update_cfs_group
0.15 ? 35% -0.1 0.03 ? 70% perf-profile.self.cycles-pp.select_idle_cpu
0.17 ? 42% -0.1 0.06 ? 8% perf-profile.self.cycles-pp.update_rq_clock
0.17 ? 41% -0.1 0.06 perf-profile.self.cycles-pp.__switch_to_asm
0.07 ? 12% +0.0 0.09 ? 5% perf-profile.self.cycles-pp.ksys_write
0.11 ? 12% +0.0 0.13 perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.08 ? 10% +0.0 0.10 ? 4% perf-profile.self.cycles-pp.rcu_read_unlock_strict
0.08 ? 16% +0.0 0.10 ? 8% perf-profile.self.cycles-pp.sock_sendmsg
0.06 +0.0 0.08 ? 5% perf-profile.self.cycles-pp.ksys_read
0.06 ? 8% +0.0 0.08 ? 10% perf-profile.self.cycles-pp.rcu_all_qs
0.08 ? 6% +0.0 0.11 perf-profile.self.cycles-pp.unix_stream_recvmsg
0.03 ? 70% +0.0 0.07 ? 7% perf-profile.self.cycles-pp.unix_destruct_scm
0.04 ? 71% +0.0 0.07 perf-profile.self.cycles-pp.check_stack_object
0.04 ? 71% +0.0 0.07 ? 6% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.05 +0.0 0.09 perf-profile.self.cycles-pp.sock_recvmsg
0.04 ? 73% +0.0 0.08 ? 10% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.07 ? 7% +0.0 0.11 ? 4% perf-profile.self.cycles-pp.__cond_resched
0.06 ? 8% +0.0 0.10 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.02 ?141% +0.0 0.06 ? 7% perf-profile.self.cycles-pp.alloc_skb_with_frags
0.02 ?141% +0.1 0.07 ? 7% perf-profile.self.cycles-pp.__skb_datagram_iter
0.05 +0.1 0.10 ? 4% perf-profile.self.cycles-pp.skb_copy_datagram_from_iter
0.02 ?141% +0.1 0.07 ? 12% perf-profile.self.cycles-pp.skb_unlink
0.00 +0.1 0.05 ? 8% perf-profile.self.cycles-pp.wait_for_unix_gc
0.00 +0.1 0.05 ? 8% perf-profile.self.cycles-pp.security_socket_getpeersec_dgram
0.02 ?141% +0.1 0.07 perf-profile.self.cycles-pp.perf_output_copy
0.00 +0.1 0.06 ? 8% perf-profile.self.cycles-pp.__fdget_pos
0.22 ? 8% +0.1 0.27 perf-profile.self.cycles-pp.vfs_read
0.00 +0.1 0.06 perf-profile.self.cycles-pp.security_socket_recvmsg
0.18 ? 2% +0.1 0.24 perf-profile.self.cycles-pp.__fget_light
0.10 ? 8% +0.1 0.16 ? 2% perf-profile.self.cycles-pp._copy_to_iter
0.11 ? 4% +0.1 0.17 perf-profile.self.cycles-pp._copy_from_iter
0.08 ? 5% +0.1 0.15 ? 5% perf-profile.self.cycles-pp.security_file_permission
0.14 ? 8% +0.1 0.21 ? 7% perf-profile.self.cycles-pp.new_sync_read
0.00 +0.1 0.07 perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.00 +0.1 0.07 ? 6% perf-profile.self.cycles-pp.kmalloc_reserve
0.09 ? 5% +0.1 0.16 ? 2% perf-profile.self.cycles-pp.refill_obj_stock
0.03 ?141% +0.1 0.11 ? 12% perf-profile.self.cycles-pp.queue_event
0.02 ?141% +0.1 0.09 ? 5% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.08 ? 6% perf-profile.self.cycles-pp.obj_cgroup_uncharge_pages
0.18 ? 7% +0.1 0.25 ? 4% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.05 ? 77% +0.1 0.13 ? 3% perf-profile.self.cycles-pp.perf_output_sample
0.09 ? 13% +0.1 0.18 ? 5% perf-profile.self.cycles-pp.sock_alloc_send_pskb
0.07 ? 12% +0.1 0.16 ? 10% perf-profile.self.cycles-pp.__mod_memcg_state
0.00 +0.1 0.09 ? 9% perf-profile.self.cycles-pp.obj_cgroup_charge_pages
0.14 ? 3% +0.1 0.24 ? 9% perf-profile.self.cycles-pp.new_sync_write
0.11 ? 11% +0.1 0.20 ? 4% perf-profile.self.cycles-pp.obj_cgroup_charge
0.21 ? 3% +0.1 0.31 perf-profile.self.cycles-pp.sock_read_iter
0.17 ? 2% +0.1 0.28 ? 2% perf-profile.self.cycles-pp.__ksize
0.10 ? 43% +0.1 0.22 ? 8% perf-profile.self.cycles-pp.memcpy_erms
0.16 ? 15% +0.1 0.28 ? 3% perf-profile.self.cycles-pp.aa_file_perm
0.19 ? 2% +0.1 0.31 ? 2% perf-profile.self.cycles-pp.__might_sleep
0.21 ? 4% +0.1 0.33 ? 2% perf-profile.self.cycles-pp.sock_write_iter
0.10 ? 8% +0.1 0.23 ? 4% perf-profile.self.cycles-pp.unfreeze_partials
0.18 ? 4% +0.1 0.32 ? 2% perf-profile.self.cycles-pp.__virt_addr_valid
0.25 +0.1 0.39 ? 3% perf-profile.self.cycles-pp.fsnotify
0.19 ? 6% +0.1 0.33 perf-profile.self.cycles-pp.__entry_text_start
0.20 ? 4% +0.2 0.35 perf-profile.self.cycles-pp.aa_sk_perm
0.23 ? 7% +0.2 0.41 ? 6% perf-profile.self.cycles-pp.__build_skb_around
0.27 ? 3% +0.2 0.46 perf-profile.self.cycles-pp.common_file_perm
0.28 ? 5% +0.2 0.48 ? 5% perf-profile.self.cycles-pp.__check_heap_object
0.15 ? 3% +0.2 0.34 ? 2% perf-profile.self.cycles-pp.get_partial_node
0.33 ? 2% +0.2 0.53 ? 4% perf-profile.self.cycles-pp.___might_sleep
0.32 ? 5% +0.2 0.53 ? 3% perf-profile.self.cycles-pp.__alloc_skb
0.37 ? 10% +0.2 0.62 perf-profile.self.cycles-pp.vfs_write
0.16 ? 5% +0.3 0.43 perf-profile.self.cycles-pp.unix_write_space
0.17 ? 7% +0.3 0.45 perf-profile.self.cycles-pp.consume_skb
0.37 ? 4% +0.3 0.71 perf-profile.self.cycles-pp.___slab_alloc
0.45 ? 5% +0.3 0.79 ? 3% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.25 ? 4% +0.4 0.62 ? 9% perf-profile.self.cycles-pp.mutex_lock
0.73 ? 9% +0.4 1.17 ? 2% perf-profile.self.cycles-pp.unix_stream_sendmsg
0.69 ? 14% +0.5 1.20 ? 2% perf-profile.self.cycles-pp.__check_object_size
0.79 ? 6% +0.6 1.42 perf-profile.self.cycles-pp.unix_stream_read_generic
0.26 ? 15% +0.6 0.89 perf-profile.self.cycles-pp.mutex_unlock
1.16 ? 18% +0.7 1.85 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.96 ? 42% +0.8 1.79 perf-profile.self.cycles-pp.intel_idle
0.43 ? 20% +1.1 1.52 perf-profile.self.cycles-pp.fput_many
0.72 ? 17% +1.1 1.85 ? 2% perf-profile.self.cycles-pp.sock_wfree
0.59 ? 22% +1.2 1.78 perf-profile.self.cycles-pp.sock_def_readable
0.67 ? 17% +1.6 2.23 ? 2% perf-profile.self.cycles-pp.skb_set_owner_w
1.32 ? 18% +1.7 2.98 perf-profile.self.cycles-pp.__fget_files
1.47 ? 3% +2.3 3.80 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
2.86 ? 18% +2.3 5.19 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.75 ? 8% +2.5 4.25 perf-profile.self.cycles-pp._raw_spin_lock
0.91 ? 13% +3.1 4.04 perf-profile.self.cycles-pp.__slab_free
0.53 ? 22% +4.8 5.28 ? 10% perf-profile.self.cycles-pp.propagate_protected_usage
1.51 ? 25% +13.5 15.03 perf-profile.self.cycles-pp.page_counter_cancel
1.71 ? 26% +14.5 16.17 perf-profile.self.cycles-pp.page_counter_try_charge
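For anyone reproducing this locally, the usual way to confirm the hot spots in the self breakdown above is a system-wide cycles profile while the benchmark runs, for example (output file and sampling window are arbitrary):

perf record -g -a -o perf.data -- sleep 30
perf report -i perf.data --sort symbol

On the bad commit, page_counter_try_charge, page_counter_cancel and the queued spinlock slow path should land near the top of the self column, matching the table above.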
1035 ? 2% -38.7% 634.67 interrupts.293:PCI-MSI.327680-edge.xhci_hcd
2061 ? 70% -99.6% 7.33 ? 89% interrupts.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
34513597 -20.2% 27541947 ? 2% interrupts.CAL:Function_call_interrupts
368907 ? 2% -26.7% 270378 ? 3% interrupts.CPU0.CAL:Function_call_interrupts
2119856 ? 2% -38.6% 1301096 interrupts.CPU0.LOC:Local_timer_interrupts
812218 ? 2% -55.1% 364825 ? 3% interrupts.CPU0.RES:Rescheduling_interrupts
283775 -25.6% 211144 ? 8% interrupts.CPU0.TLB:TLB_shootdowns
367731 ? 3% -27.1% 268163 ? 5% interrupts.CPU1.CAL:Function_call_interrupts
2119951 ? 2% -38.6% 1301019 interrupts.CPU1.LOC:Local_timer_interrupts
803406 ? 2% -54.5% 365533 ? 3% interrupts.CPU1.RES:Rescheduling_interrupts
283930 -25.9% 210521 ? 8% interrupts.CPU1.TLB:TLB_shootdowns
1035 ? 2% -38.7% 634.67 interrupts.CPU10.293:PCI-MSI.327680-edge.xhci_hcd
359552 -26.2% 265312 ? 4% interrupts.CPU10.CAL:Function_call_interrupts
2119851 ? 2% -38.6% 1300937 interrupts.CPU10.LOC:Local_timer_interrupts
814814 ? 2% -55.4% 363149 ? 4% interrupts.CPU10.RES:Rescheduling_interrupts
283873 -25.7% 210796 ? 8% interrupts.CPU10.TLB:TLB_shootdowns
356220 -25.3% 266202 ? 4% interrupts.CPU11.CAL:Function_call_interrupts
2119790 ? 2% -38.6% 1301000 interrupts.CPU11.LOC:Local_timer_interrupts
803621 ? 5% -54.5% 365317 ? 4% interrupts.CPU11.RES:Rescheduling_interrupts
283231 -25.8% 210126 ? 7% interrupts.CPU11.TLB:TLB_shootdowns
355040 -24.2% 269225 ? 4% interrupts.CPU12.CAL:Function_call_interrupts
2119856 ? 2% -38.6% 1300930 interrupts.CPU12.LOC:Local_timer_interrupts
4336 +66.9% 7236 ? 28% interrupts.CPU12.NMI:Non-maskable_interrupts
4336 +66.9% 7236 ? 28% interrupts.CPU12.PMI:Performance_monitoring_interrupts
797486 ? 4% -54.7% 360915 ? 3% interrupts.CPU12.RES:Rescheduling_interrupts
281724 -24.7% 212031 ? 8% interrupts.CPU12.TLB:TLB_shootdowns
355443 -25.6% 264475 ? 4% interrupts.CPU13.CAL:Function_call_interrupts
2119919 ? 2% -38.6% 1300988 interrupts.CPU13.LOC:Local_timer_interrupts
4337 +99.9% 8669 interrupts.CPU13.NMI:Non-maskable_interrupts
4337 +99.9% 8669 interrupts.CPU13.PMI:Performance_monitoring_interrupts
811518 ? 4% -55.8% 358818 ? 3% interrupts.CPU13.RES:Rescheduling_interrupts
281929 -25.5% 209995 ? 7% interrupts.CPU13.TLB:TLB_shootdowns
353684 -24.8% 266124 ? 4% interrupts.CPU14.CAL:Function_call_interrupts
2119942 ? 2% -38.6% 1301027 interrupts.CPU14.LOC:Local_timer_interrupts
805398 ? 2% -55.2% 360840 ? 3% interrupts.CPU14.RES:Rescheduling_interrupts
282195 -25.2% 210959 ? 8% interrupts.CPU14.TLB:TLB_shootdowns
358831 -26.2% 264773 ? 4% interrupts.CPU15.CAL:Function_call_interrupts
2119901 ? 2% -38.6% 1300980 interrupts.CPU15.LOC:Local_timer_interrupts
809550 ? 3% -54.1% 371481 ? 5% interrupts.CPU15.RES:Rescheduling_interrupts
282360 -26.0% 208905 ? 7% interrupts.CPU15.TLB:TLB_shootdowns
357218 ? 2% -25.5% 266176 ? 5% interrupts.CPU16.CAL:Function_call_interrupts
2119840 ? 2% -38.6% 1300868 interrupts.CPU16.LOC:Local_timer_interrupts
807752 ? 4% -55.6% 358709 ? 4% interrupts.CPU16.RES:Rescheduling_interrupts
282023 -25.1% 211191 ? 8% interrupts.CPU16.TLB:TLB_shootdowns
353491 -24.6% 266521 ? 5% interrupts.CPU17.CAL:Function_call_interrupts
2119889 ? 2% -38.6% 1301353 interrupts.CPU17.LOC:Local_timer_interrupts
803176 ? 4% -54.9% 362121 ? 4% interrupts.CPU17.RES:Rescheduling_interrupts
283156 -25.6% 210804 ? 8% interrupts.CPU17.TLB:TLB_shootdowns
369223 -28.6% 263612 ? 5% interrupts.CPU18.CAL:Function_call_interrupts
2119849 ? 2% -38.6% 1300973 interrupts.CPU18.LOC:Local_timer_interrupts
820678 ? 2% -56.1% 360476 ? 3% interrupts.CPU18.RES:Rescheduling_interrupts
282877 -26.0% 209452 ? 8% interrupts.CPU18.TLB:TLB_shootdowns
360224 ? 2% -25.5% 268239 ? 5% interrupts.CPU19.CAL:Function_call_interrupts
2119893 ? 2% -38.6% 1300994 interrupts.CPU19.LOC:Local_timer_interrupts
812017 ? 3% -55.6% 360262 ? 4% interrupts.CPU19.RES:Rescheduling_interrupts
281870 -25.2% 210898 ? 8% interrupts.CPU19.TLB:TLB_shootdowns
370625 ? 4% -27.9% 267036 ? 4% interrupts.CPU2.CAL:Function_call_interrupts
2119969 ? 2% -38.6% 1301015 interrupts.CPU2.LOC:Local_timer_interrupts
809689 ? 4% -55.1% 363170 ? 4% interrupts.CPU2.RES:Rescheduling_interrupts
283090 -25.8% 209936 ? 8% interrupts.CPU2.TLB:TLB_shootdowns
363794 ? 4% -26.7% 266675 ? 5% interrupts.CPU20.CAL:Function_call_interrupts
2119879 ? 2% -38.6% 1300983 interrupts.CPU20.LOC:Local_timer_interrupts
825023 ? 4% -56.1% 361999 ? 4% interrupts.CPU20.RES:Rescheduling_interrupts
283395 -25.3% 211775 ? 8% interrupts.CPU20.TLB:TLB_shootdowns
361556 -26.4% 266160 ? 4% interrupts.CPU21.CAL:Function_call_interrupts
2119908 ? 2% -38.6% 1300940 interrupts.CPU21.LOC:Local_timer_interrupts
814183 ? 2% -55.6% 361282 ? 3% interrupts.CPU21.RES:Rescheduling_interrupts
282265 -25.1% 211288 ? 8% interrupts.CPU21.TLB:TLB_shootdowns
355653 -25.5% 264787 ? 4% interrupts.CPU22.CAL:Function_call_interrupts
2119941 ? 2% -38.6% 1300980 interrupts.CPU22.LOC:Local_timer_interrupts
816087 ? 4% -55.7% 361817 ? 4% interrupts.CPU22.RES:Rescheduling_interrupts
282442 -25.5% 210351 ? 8% interrupts.CPU22.TLB:TLB_shootdowns
351805 -24.6% 265103 ? 4% interrupts.CPU23.CAL:Function_call_interrupts
2119801 ? 2% -38.6% 1301003 interrupts.CPU23.LOC:Local_timer_interrupts
814899 ? 4% -55.5% 363012 ? 4% interrupts.CPU23.RES:Rescheduling_interrupts
282562 -25.7% 209819 ? 8% interrupts.CPU23.TLB:TLB_shootdowns
367069 ? 3% -12.2% 322114 ? 5% interrupts.CPU24.CAL:Function_call_interrupts
2119886 ? 2% -38.6% 1301077 interrupts.CPU24.LOC:Local_timer_interrupts
835110 ? 9% -42.5% 480170 ? 5% interrupts.CPU24.RES:Rescheduling_interrupts
282311 -25.7% 209822 ? 8% interrupts.CPU24.TLB:TLB_shootdowns
354801 ? 2% -13.9% 305538 ? 3% interrupts.CPU25.CAL:Function_call_interrupts
2119830 ? 2% -38.6% 1301028 interrupts.CPU25.LOC:Local_timer_interrupts
825755 ? 8% -41.6% 482630 ? 5% interrupts.CPU25.RES:Rescheduling_interrupts
282684 -26.4% 207993 ? 8% interrupts.CPU25.TLB:TLB_shootdowns
356396 ? 2% -13.6% 307990 ? 2% interrupts.CPU26.CAL:Function_call_interrupts
2119870 ? 2% -38.6% 1300993 interrupts.CPU26.LOC:Local_timer_interrupts
823243 ? 9% -41.7% 479552 ? 5% interrupts.CPU26.RES:Rescheduling_interrupts
282459 -26.2% 208574 ? 9% interrupts.CPU26.TLB:TLB_shootdowns
351046 ? 2% -12.2% 308053 ? 2% interrupts.CPU27.CAL:Function_call_interrupts
2119824 ? 2% -38.6% 1301033 interrupts.CPU27.LOC:Local_timer_interrupts
827963 ? 8% -41.9% 480986 ? 5% interrupts.CPU27.RES:Rescheduling_interrupts
282131 -27.7% 204108 ? 8% interrupts.CPU27.TLB:TLB_shootdowns
359618 ? 4% -13.2% 312133 ? 3% interrupts.CPU28.CAL:Function_call_interrupts
2119793 ? 2% -38.6% 1301032 interrupts.CPU28.LOC:Local_timer_interrupts
8696 -33.5% 5784 ? 35% interrupts.CPU28.NMI:Non-maskable_interrupts
8696 -33.5% 5784 ? 35% interrupts.CPU28.PMI:Performance_monitoring_interrupts
819538 ? 8% -41.2% 481990 ? 5% interrupts.CPU28.RES:Rescheduling_interrupts
283227 -26.9% 206966 ? 8% interrupts.CPU28.TLB:TLB_shootdowns
360296 ? 4% -15.9% 303086 ? 3% interrupts.CPU29.CAL:Function_call_interrupts
2119786 ? 2% -38.6% 1300995 interrupts.CPU29.LOC:Local_timer_interrupts
828226 ? 8% -41.3% 485772 ? 4% interrupts.CPU29.RES:Rescheduling_interrupts
281734 -26.9% 206077 ? 9% interrupts.CPU29.TLB:TLB_shootdowns
365972 ? 2% -27.5% 265304 ? 5% interrupts.CPU3.CAL:Function_call_interrupts
2119935 ? 2% -38.6% 1300929 interrupts.CPU3.LOC:Local_timer_interrupts
807232 ? 4% -55.3% 360556 ? 4% interrupts.CPU3.RES:Rescheduling_interrupts
281844 -25.1% 211103 ? 8% interrupts.CPU3.TLB:TLB_shootdowns
362207 ? 3% -15.3% 306933 ? 3% interrupts.CPU30.CAL:Function_call_interrupts
2119862 ? 2% -38.6% 1301014 interrupts.CPU30.LOC:Local_timer_interrupts
831773 ? 11% -41.9% 483182 ? 5% interrupts.CPU30.RES:Rescheduling_interrupts
283263 -26.8% 207298 ? 9% interrupts.CPU30.TLB:TLB_shootdowns
354519 ? 3% -13.8% 305505 ? 2% interrupts.CPU31.CAL:Function_call_interrupts
2119829 ? 2% -38.6% 1300963 interrupts.CPU31.LOC:Local_timer_interrupts
826965 ? 8% -41.3% 485597 ? 5% interrupts.CPU31.RES:Rescheduling_interrupts
282101 -26.2% 208117 ? 8% interrupts.CPU31.TLB:TLB_shootdowns
352696 ? 2% -12.9% 307084 ? 2% interrupts.CPU32.CAL:Function_call_interrupts
2119862 ? 2% -38.6% 1300977 interrupts.CPU32.LOC:Local_timer_interrupts
830140 ? 9% -40.8% 491128 ? 8% interrupts.CPU32.RES:Rescheduling_interrupts
282170 -26.5% 207510 ? 9% interrupts.CPU32.TLB:TLB_shootdowns
350705 ? 3% -12.4% 307282 ? 3% interrupts.CPU33.CAL:Function_call_interrupts
2119781 ? 2% -38.6% 1300996 interrupts.CPU33.LOC:Local_timer_interrupts
7248 ? 28% -20.2% 5787 ? 35% interrupts.CPU33.NMI:Non-maskable_interrupts
7248 ? 28% -20.2% 5787 ? 35% interrupts.CPU33.PMI:Performance_monitoring_interrupts
799213 ? 8% -39.4% 483989 ? 7% interrupts.CPU33.RES:Rescheduling_interrupts
282340 -26.3% 207986 ? 8% interrupts.CPU33.TLB:TLB_shootdowns
351990 ? 3% -12.2% 308916 ? 2% interrupts.CPU34.CAL:Function_call_interrupts
2119839 ? 2% -38.6% 1301012 interrupts.CPU34.LOC:Local_timer_interrupts
8699 -33.5% 5785 ? 35% interrupts.CPU34.NMI:Non-maskable_interrupts
8699 -33.5% 5785 ? 35% interrupts.CPU34.PMI:Performance_monitoring_interrupts
819942 ? 7% -40.7% 486547 ? 4% interrupts.CPU34.RES:Rescheduling_interrupts
281682 -26.3% 207623 ? 8% interrupts.CPU34.TLB:TLB_shootdowns
353078 ? 2% -13.2% 306565 ? 3% interrupts.CPU35.CAL:Function_call_interrupts
2119891 ? 2% -38.6% 1301004 interrupts.CPU35.LOC:Local_timer_interrupts
8698 -33.5% 5787 ? 35% interrupts.CPU35.NMI:Non-maskable_interrupts
8698 -33.5% 5787 ? 35% interrupts.CPU35.PMI:Performance_monitoring_interrupts
817431 ? 8% -40.8% 483991 ? 5% interrupts.CPU35.RES:Rescheduling_interrupts
281829 -26.1% 208224 ? 8% interrupts.CPU35.TLB:TLB_shootdowns
361728 -15.5% 305716 ? 2% interrupts.CPU36.CAL:Function_call_interrupts
2119868 ? 2% -38.6% 1300969 interrupts.CPU36.LOC:Local_timer_interrupts
809352 ? 8% -40.1% 484977 ? 6% interrupts.CPU36.RES:Rescheduling_interrupts
284457 -27.2% 207116 ? 9% interrupts.CPU36.TLB:TLB_shootdowns
366085 ? 2% -14.8% 311826 ? 2% interrupts.CPU37.CAL:Function_call_interrupts
2119856 ? 2% -38.6% 1301083 interrupts.CPU37.LOC:Local_timer_interrupts
835957 ? 9% -40.8% 494569 ? 6% interrupts.CPU37.RES:Rescheduling_interrupts
281493 -26.0% 208334 ? 8% interrupts.CPU37.TLB:TLB_shootdowns
359733 ? 3% -15.4% 304308 interrupts.CPU38.CAL:Function_call_interrupts
2119729 ? 2% -38.6% 1300970 interrupts.CPU38.LOC:Local_timer_interrupts
832813 ? 8% -41.9% 483493 ? 3% interrupts.CPU38.RES:Rescheduling_interrupts
282618 -27.0% 206231 ? 8% interrupts.CPU38.TLB:TLB_shootdowns
355683 ? 2% -14.8% 303093 ? 2% interrupts.CPU39.CAL:Function_call_interrupts
2119922 ? 2% -38.6% 1300951 interrupts.CPU39.LOC:Local_timer_interrupts
824329 ? 8% -41.0% 486495 ? 4% interrupts.CPU39.RES:Rescheduling_interrupts
282567 -27.1% 205979 ? 8% interrupts.CPU39.TLB:TLB_shootdowns
2060 ? 70% -99.7% 7.00 ? 95% interrupts.CPU4.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
361473 -26.2% 266725 ? 5% interrupts.CPU4.CAL:Function_call_interrupts
2119917 ? 2% -38.6% 1300986 interrupts.CPU4.LOC:Local_timer_interrupts
793250 ? 3% -53.8% 366590 ? 5% interrupts.CPU4.RES:Rescheduling_interrupts
281957 -24.9% 211664 ? 8% interrupts.CPU4.TLB:TLB_shootdowns
355060 -14.4% 303790 ? 2% interrupts.CPU40.CAL:Function_call_interrupts
2119887 ? 2% -38.6% 1300979 interrupts.CPU40.LOC:Local_timer_interrupts
8701 -33.5% 5782 ? 35% interrupts.CPU40.NMI:Non-maskable_interrupts
8701 -33.5% 5782 ? 35% interrupts.CPU40.PMI:Performance_monitoring_interrupts
838720 ? 5% -41.7% 489153 ? 6% interrupts.CPU40.RES:Rescheduling_interrupts
282183 -26.5% 207488 ? 8% interrupts.CPU40.TLB:TLB_shootdowns
358851 ? 3% -16.1% 301031 ? 2% interrupts.CPU41.CAL:Function_call_interrupts
2119838 ? 2% -38.6% 1301063 interrupts.CPU41.LOC:Local_timer_interrupts
8706 -33.6% 5785 ? 35% interrupts.CPU41.NMI:Non-maskable_interrupts
8706 -33.6% 5785 ? 35% interrupts.CPU41.PMI:Performance_monitoring_interrupts
824491 ? 7% -40.7% 488745 ? 5% interrupts.CPU41.RES:Rescheduling_interrupts
282520 -27.0% 206344 ? 8% interrupts.CPU41.TLB:TLB_shootdowns
356041 ? 2% -12.7% 310979 ? 3% interrupts.CPU42.CAL:Function_call_interrupts
2119859 ? 2% -38.6% 1300984 interrupts.CPU42.LOC:Local_timer_interrupts
821458 ? 8% -40.7% 486800 ? 4% interrupts.CPU42.RES:Rescheduling_interrupts
283442 -26.6% 208104 ? 8% interrupts.CPU42.TLB:TLB_shootdowns
354867 -13.2% 308097 interrupts.CPU43.CAL:Function_call_interrupts
2119882 ? 2% -38.6% 1300957 interrupts.CPU43.LOC:Local_timer_interrupts
7247 ? 28% -20.1% 5789 ? 35% interrupts.CPU43.NMI:Non-maskable_interrupts
7247 ? 28% -20.1% 5789 ? 35% interrupts.CPU43.PMI:Performance_monitoring_interrupts
827624 ? 7% -40.9% 488757 ? 6% interrupts.CPU43.RES:Rescheduling_interrupts
282435 -26.6% 207171 ? 9% interrupts.CPU43.TLB:TLB_shootdowns
373041 ? 2% -17.7% 307007 ? 3% interrupts.CPU44.CAL:Function_call_interrupts
2119885 ? 2% -38.6% 1300981 interrupts.CPU44.LOC:Local_timer_interrupts
830281 ? 7% -41.7% 483990 ? 5% interrupts.CPU44.RES:Rescheduling_interrupts
283518 -26.7% 207742 ? 8% interrupts.CPU44.TLB:TLB_shootdowns
351747 ? 2% -11.2% 312289 ? 3% interrupts.CPU45.CAL:Function_call_interrupts
2119834 ? 2% -38.6% 1300916 interrupts.CPU45.LOC:Local_timer_interrupts
815498 ? 6% -40.7% 483675 ? 6% interrupts.CPU45.RES:Rescheduling_interrupts
282616 -26.3% 208400 ? 8% interrupts.CPU45.TLB:TLB_shootdowns
362954 ? 3% -15.7% 305932 ? 2% interrupts.CPU46.CAL:Function_call_interrupts
2119937 ? 2% -38.6% 1300973 interrupts.CPU46.LOC:Local_timer_interrupts
7247 ? 28% -20.1% 5787 ? 35% interrupts.CPU46.NMI:Non-maskable_interrupts
7247 ? 28% -20.1% 5787 ? 35% interrupts.CPU46.PMI:Performance_monitoring_interrupts
823684 ? 8% -41.2% 484716 ? 6% interrupts.CPU46.RES:Rescheduling_interrupts
282220 -26.8% 206567 ? 8% interrupts.CPU46.TLB:TLB_shootdowns
366811 ? 5% -14.0% 315291 ? 3% interrupts.CPU47.CAL:Function_call_interrupts
2119855 ? 2% -38.6% 1300980 interrupts.CPU47.LOC:Local_timer_interrupts
834351 ? 7% -41.7% 486720 ? 6% interrupts.CPU47.RES:Rescheduling_interrupts
281754 -26.8% 206297 ? 8% interrupts.CPU47.TLB:TLB_shootdowns
371972 ? 2% -27.8% 268510 ? 3% interrupts.CPU48.CAL:Function_call_interrupts
2120137 ? 2% -38.6% 1300949 interrupts.CPU48.LOC:Local_timer_interrupts
831543 ? 5% -56.0% 365606 ? 3% interrupts.CPU48.RES:Rescheduling_interrupts
283642 -25.5% 211447 ? 8% interrupts.CPU48.TLB:TLB_shootdowns
365126 ? 2% -27.1% 266018 ? 4% interrupts.CPU49.CAL:Function_call_interrupts
2119947 ? 2% -38.6% 1300980 interrupts.CPU49.LOC:Local_timer_interrupts
814887 -54.8% 368103 ? 4% interrupts.CPU49.RES:Rescheduling_interrupts
282397 -25.6% 210209 ? 8% interrupts.CPU49.TLB:TLB_shootdowns
362492 -26.9% 265013 ? 4% interrupts.CPU5.CAL:Function_call_interrupts
2119896 ? 2% -38.6% 1301006 interrupts.CPU5.LOC:Local_timer_interrupts
808080 ? 3% -55.5% 359740 ? 3% interrupts.CPU5.RES:Rescheduling_interrupts
284391 -25.9% 210684 ? 7% interrupts.CPU5.TLB:TLB_shootdowns
372563 ? 4% -28.7% 265793 ? 4% interrupts.CPU50.CAL:Function_call_interrupts
2119866 ? 2% -38.6% 1301041 interrupts.CPU50.LOC:Local_timer_interrupts
806873 ? 3% -54.5% 366775 ? 4% interrupts.CPU50.RES:Rescheduling_interrupts
281229 -25.5% 209593 ? 8% interrupts.CPU50.TLB:TLB_shootdowns
359331 -25.3% 268482 ? 5% interrupts.CPU51.CAL:Function_call_interrupts
2119870 ? 2% -38.6% 1300995 interrupts.CPU51.LOC:Local_timer_interrupts
813446 ? 3% -55.0% 365731 ? 4% interrupts.CPU51.RES:Rescheduling_interrupts
281624 -24.9% 211572 ? 9% interrupts.CPU51.TLB:TLB_shootdowns
361659 -26.6% 265453 ? 5% interrupts.CPU52.CAL:Function_call_interrupts
2119882 ? 2% -38.6% 1301026 interrupts.CPU52.LOC:Local_timer_interrupts
812992 ? 2% -55.2% 363860 ? 5% interrupts.CPU52.RES:Rescheduling_interrupts
280855 -24.8% 211171 ? 8% interrupts.CPU52.TLB:TLB_shootdowns
365036 ? 3% -27.3% 265284 ? 5% interrupts.CPU53.CAL:Function_call_interrupts
2119930 ? 2% -38.6% 1300980 interrupts.CPU53.LOC:Local_timer_interrupts
811759 ? 4% -54.8% 366964 ? 3% interrupts.CPU53.RES:Rescheduling_interrupts
281353 -25.3% 210038 ? 8% interrupts.CPU53.TLB:TLB_shootdowns
361816 -26.4% 266348 ? 5% interrupts.CPU54.CAL:Function_call_interrupts
2119855 ? 2% -38.6% 1300980 interrupts.CPU54.LOC:Local_timer_interrupts
808810 ? 4% -54.8% 365755 ? 5% interrupts.CPU54.RES:Rescheduling_interrupts
283099 -25.5% 210881 ? 8% interrupts.CPU54.TLB:TLB_shootdowns
361246 ? 2% -26.2% 266588 ? 5% interrupts.CPU55.CAL:Function_call_interrupts
2119871 ? 2% -38.6% 1300962 interrupts.CPU55.LOC:Local_timer_interrupts
8689 -33.5% 5777 ? 35% interrupts.CPU55.NMI:Non-maskable_interrupts
8689 -33.5% 5777 ? 35% interrupts.CPU55.PMI:Performance_monitoring_interrupts
806805 -54.7% 365382 ? 2% interrupts.CPU55.RES:Rescheduling_interrupts
281909 -25.3% 210697 ? 8% interrupts.CPU55.TLB:TLB_shootdowns
357802 -25.8% 265445 ? 5% interrupts.CPU56.CAL:Function_call_interrupts
2119920 ? 2% -38.6% 1300878 interrupts.CPU56.LOC:Local_timer_interrupts
806616 ? 4% -54.7% 365515 ? 3% interrupts.CPU56.RES:Rescheduling_interrupts
282887 -25.8% 209842 ? 9% interrupts.CPU56.TLB:TLB_shootdowns
363924 -27.2% 265089 ? 4% interrupts.CPU57.CAL:Function_call_interrupts
2119867 ? 2% -38.6% 1300963 interrupts.CPU57.LOC:Local_timer_interrupts
825721 ? 2% -55.7% 365657 ? 4% interrupts.CPU57.RES:Rescheduling_interrupts
283114 -26.1% 209195 ? 8% interrupts.CPU57.TLB:TLB_shootdowns
359218 -25.7% 266792 ? 4% interrupts.CPU58.CAL:Function_call_interrupts
2119900 ? 2% -38.6% 1300974 interrupts.CPU58.LOC:Local_timer_interrupts
814790 ? 2% -55.2% 365024 ? 4% interrupts.CPU58.RES:Rescheduling_interrupts
284315 -25.8% 211103 ? 8% interrupts.CPU58.TLB:TLB_shootdowns
353017 -25.3% 263770 ? 4% interrupts.CPU59.CAL:Function_call_interrupts
2120229 ? 2% -38.6% 1301036 interrupts.CPU59.LOC:Local_timer_interrupts
826820 ? 3% -55.7% 366452 ? 2% interrupts.CPU59.RES:Rescheduling_interrupts
282445 -26.1% 208807 ? 8% interrupts.CPU59.TLB:TLB_shootdowns
362620 ? 2% -26.5% 266662 ? 5% interrupts.CPU6.CAL:Function_call_interrupts
2119893 ? 2% -38.6% 1300993 interrupts.CPU6.LOC:Local_timer_interrupts
818353 ? 2% -55.3% 365437 ? 3% interrupts.CPU6.RES:Rescheduling_interrupts
282749 -25.5% 210705 ? 8% interrupts.CPU6.TLB:TLB_shootdowns
359979 -26.2% 265811 ? 4% interrupts.CPU60.CAL:Function_call_interrupts
2119911 ? 2% -38.6% 1300985 interrupts.CPU60.LOC:Local_timer_interrupts
812179 -55.4% 362609 ? 3% interrupts.CPU60.RES:Rescheduling_interrupts
279783 -25.1% 209643 ? 8% interrupts.CPU60.TLB:TLB_shootdowns
354979 ? 2% -25.4% 264861 ? 4% interrupts.CPU61.CAL:Function_call_interrupts
2119902 ? 2% -38.6% 1300997 interrupts.CPU61.LOC:Local_timer_interrupts
7247 ? 28% -20.3% 5779 ? 35% interrupts.CPU61.NMI:Non-maskable_interrupts
7247 ? 28% -20.3% 5779 ? 35% interrupts.CPU61.PMI:Performance_monitoring_interrupts
819685 ? 4% -55.5% 364604 ? 4% interrupts.CPU61.RES:Rescheduling_interrupts
281822 -25.3% 210582 ? 7% interrupts.CPU61.TLB:TLB_shootdowns
359968 ? 2% -26.6% 264076 ? 5% interrupts.CPU62.CAL:Function_call_interrupts
2119923 ? 2% -38.6% 1301007 interrupts.CPU62.LOC:Local_timer_interrupts
828254 ? 4% -56.2% 362818 ? 2% interrupts.CPU62.RES:Rescheduling_interrupts
283352 -25.8% 210223 ? 8% interrupts.CPU62.TLB:TLB_shootdowns
359513 -25.4% 268366 ? 4% interrupts.CPU63.CAL:Function_call_interrupts
2119866 ? 2% -38.6% 1300992 interrupts.CPU63.LOC:Local_timer_interrupts
823114 ? 3% -54.5% 374511 ? 4% interrupts.CPU63.RES:Rescheduling_interrupts
284346 -25.8% 211013 ? 8% interrupts.CPU63.TLB:TLB_shootdowns
355308 -24.4% 268553 ? 5% interrupts.CPU64.CAL:Function_call_interrupts
2119890 ? 2% -38.6% 1300994 interrupts.CPU64.LOC:Local_timer_interrupts
820093 ? 3% -55.0% 369177 ? 3% interrupts.CPU64.RES:Rescheduling_interrupts
281798 -24.8% 211816 ? 8% interrupts.CPU64.TLB:TLB_shootdowns
353896 -24.6% 266987 ? 5% interrupts.CPU65.CAL:Function_call_interrupts
2120192 ? 2% -38.6% 1300991 interrupts.CPU65.LOC:Local_timer_interrupts
801381 ? 2% -54.6% 363683 ? 4% interrupts.CPU65.RES:Rescheduling_interrupts
282158 -24.9% 211770 ? 8% interrupts.CPU65.TLB:TLB_shootdowns
366366 ? 2% -27.7% 264707 ? 5% interrupts.CPU66.CAL:Function_call_interrupts
2119921 ? 2% -38.6% 1301001 interrupts.CPU66.LOC:Local_timer_interrupts
827378 ? 4% -55.6% 367513 ? 3% interrupts.CPU66.RES:Rescheduling_interrupts
284318 -26.0% 210365 ? 8% interrupts.CPU66.TLB:TLB_shootdowns
359424 ? 2% -26.2% 265353 ? 5% interrupts.CPU67.CAL:Function_call_interrupts
2119937 ? 2% -38.6% 1300997 interrupts.CPU67.LOC:Local_timer_interrupts
831078 ? 3% -55.8% 367348 ? 3% interrupts.CPU67.RES:Rescheduling_interrupts
283433 -26.1% 209371 ? 8% interrupts.CPU67.TLB:TLB_shootdowns
367202 ? 4% -27.4% 266612 ? 4% interrupts.CPU68.CAL:Function_call_interrupts
2119935 ? 2% -38.6% 1301228 interrupts.CPU68.LOC:Local_timer_interrupts
825675 ? 6% -55.3% 369425 ? 4% interrupts.CPU68.RES:Rescheduling_interrupts
283359 -25.2% 212010 ? 8% interrupts.CPU68.TLB:TLB_shootdowns
359414 -25.6% 267404 ? 5% interrupts.CPU69.CAL:Function_call_interrupts
2119895 ? 2% -38.6% 1300944 interrupts.CPU69.LOC:Local_timer_interrupts
820401 ? 4% -55.5% 365244 ? 4% interrupts.CPU69.RES:Rescheduling_interrupts
281400 -24.7% 211949 ? 8% interrupts.CPU69.TLB:TLB_shootdowns
356951 -25.3% 266527 ? 4% interrupts.CPU7.CAL:Function_call_interrupts
2119878 ? 2% -38.6% 1300996 interrupts.CPU7.LOC:Local_timer_interrupts
799545 -55.2% 358272 ? 3% interrupts.CPU7.RES:Rescheduling_interrupts
282123 -25.0% 211614 ? 7% interrupts.CPU7.TLB:TLB_shootdowns
357682 -26.0% 264598 ? 5% interrupts.CPU70.CAL:Function_call_interrupts
2119917 ? 2% -38.6% 1301016 interrupts.CPU70.LOC:Local_timer_interrupts
821840 ? 3% -55.6% 364547 ? 5% interrupts.CPU70.RES:Rescheduling_interrupts
283191 -25.5% 210980 ? 8% interrupts.CPU70.TLB:TLB_shootdowns
352930 -24.8% 265466 ? 5% interrupts.CPU71.CAL:Function_call_interrupts
2119911 ? 2% -38.6% 1301039 interrupts.CPU71.LOC:Local_timer_interrupts
815992 ? 5% -54.9% 368007 ? 4% interrupts.CPU71.RES:Rescheduling_interrupts
283086 -25.9% 209679 ? 8% interrupts.CPU71.TLB:TLB_shootdowns
364592 ? 3% -15.5% 308066 ? 2% interrupts.CPU72.CAL:Function_call_interrupts
2119902 ? 2% -38.6% 1300933 interrupts.CPU72.LOC:Local_timer_interrupts
835551 ? 8% -41.7% 486909 ? 6% interrupts.CPU72.RES:Rescheduling_interrupts
280133 -25.3% 209233 ? 8% interrupts.CPU72.TLB:TLB_shootdowns
351807 -13.0% 306170 ? 2% interrupts.CPU73.CAL:Function_call_interrupts
2119872 ? 2% -38.6% 1300983 interrupts.CPU73.LOC:Local_timer_interrupts
831527 ? 10% -41.3% 488274 ? 4% interrupts.CPU73.RES:Rescheduling_interrupts
281603 -25.9% 208583 ? 8% interrupts.CPU73.TLB:TLB_shootdowns
354247 ? 2% -13.9% 304924 ? 2% interrupts.CPU74.CAL:Function_call_interrupts
2119865 ? 2% -38.6% 1301006 interrupts.CPU74.LOC:Local_timer_interrupts
829757 ? 8% -41.4% 486188 ? 6% interrupts.CPU74.RES:Rescheduling_interrupts
282369 -26.5% 207625 ? 8% interrupts.CPU74.TLB:TLB_shootdowns
350160 ? 2% -12.4% 306807 ? 2% interrupts.CPU75.CAL:Function_call_interrupts
2119900 ? 2% -38.6% 1301006 interrupts.CPU75.LOC:Local_timer_interrupts
841206 ? 9% -41.5% 491757 ? 4% interrupts.CPU75.RES:Rescheduling_interrupts
281358 -26.5% 206842 ? 8% interrupts.CPU75.TLB:TLB_shootdowns
356553 ? 4% -10.6% 318664 ? 7% interrupts.CPU76.CAL:Function_call_interrupts
2119972 ? 2% -38.6% 1301004 interrupts.CPU76.LOC:Local_timer_interrupts
845781 ? 11% -42.3% 487978 ? 5% interrupts.CPU76.RES:Rescheduling_interrupts
281205 -26.3% 207192 ? 8% interrupts.CPU76.TLB:TLB_shootdowns
364679 ? 3% -16.6% 304285 ? 2% interrupts.CPU77.CAL:Function_call_interrupts
2119763 ? 2% -38.6% 1301020 interrupts.CPU77.LOC:Local_timer_interrupts
812355 ? 7% -39.5% 491342 ? 5% interrupts.CPU77.RES:Rescheduling_interrupts
283855 -27.0% 207261 ? 8% interrupts.CPU77.TLB:TLB_shootdowns
359208 ? 2% -14.0% 308972 ? 3% interrupts.CPU78.CAL:Function_call_interrupts
2119907 ? 2% -38.6% 1300985 interrupts.CPU78.LOC:Local_timer_interrupts
839350 ? 10% -41.5% 491392 ? 5% interrupts.CPU78.RES:Rescheduling_interrupts
283511 -26.8% 207440 ? 10% interrupts.CPU78.TLB:TLB_shootdowns
357054 ? 4% -14.1% 306776 interrupts.CPU79.CAL:Function_call_interrupts
2119898 ? 2% -38.6% 1300998 interrupts.CPU79.LOC:Local_timer_interrupts
838045 ? 8% -42.0% 486044 ? 5% interrupts.CPU79.RES:Rescheduling_interrupts
281820 -26.3% 207652 ? 8% interrupts.CPU79.TLB:TLB_shootdowns
357337 -25.2% 267166 ? 5% interrupts.CPU8.CAL:Function_call_interrupts
2119861 ? 2% -38.6% 1300975 interrupts.CPU8.LOC:Local_timer_interrupts
822142 ? 4% -55.9% 362512 ? 4% interrupts.CPU8.RES:Rescheduling_interrupts
282367 -25.3% 210880 ? 9% interrupts.CPU8.TLB:TLB_shootdowns
353683 ? 2% -12.9% 308146 ? 3% interrupts.CPU80.CAL:Function_call_interrupts
2119902 ? 2% -38.6% 1300992 interrupts.CPU80.LOC:Local_timer_interrupts
849161 ? 6% -41.6% 495542 ? 6% interrupts.CPU80.RES:Rescheduling_interrupts
282788 -26.5% 207894 ? 9% interrupts.CPU80.TLB:TLB_shootdowns
351699 -12.1% 309301 ? 2% interrupts.CPU81.CAL:Function_call_interrupts
2119873 ? 2% -38.6% 1300997 interrupts.CPU81.LOC:Local_timer_interrupts
829162 ? 9% -40.9% 489741 ? 5% interrupts.CPU81.RES:Rescheduling_interrupts
281028 -26.0% 208006 ? 8% interrupts.CPU81.TLB:TLB_shootdowns
355240 ? 2% -11.5% 314426 ? 4% interrupts.CPU82.CAL:Function_call_interrupts
2119916 ? 2% -38.6% 1300923 interrupts.CPU82.LOC:Local_timer_interrupts
826386 ? 8% -41.0% 487824 ? 6% interrupts.CPU82.RES:Rescheduling_interrupts
281032 -26.3% 207063 ? 9% interrupts.CPU82.TLB:TLB_shootdowns
353706 ? 2% -13.3% 306818 ? 2% interrupts.CPU83.CAL:Function_call_interrupts
2119917 ? 2% -38.6% 1301004 interrupts.CPU83.LOC:Local_timer_interrupts
826758 ? 8% -40.5% 491595 ? 5% interrupts.CPU83.RES:Rescheduling_interrupts
282199 -25.9% 209072 ? 8% interrupts.CPU83.TLB:TLB_shootdowns
363376 ? 2% -17.3% 300658 ? 3% interrupts.CPU84.CAL:Function_call_interrupts
2119884 ? 2% -38.6% 1300993 interrupts.CPU84.LOC:Local_timer_interrupts
818489 ? 7% -40.9% 484073 ? 5% interrupts.CPU84.RES:Rescheduling_interrupts
282037 -26.2% 208075 ? 8% interrupts.CPU84.TLB:TLB_shootdowns
361581 ? 2% -14.4% 309541 ? 2% interrupts.CPU85.CAL:Function_call_interrupts
2119853 ? 2% -38.6% 1300927 interrupts.CPU85.LOC:Local_timer_interrupts
825059 ? 9% -40.0% 495049 ? 6% interrupts.CPU85.RES:Rescheduling_interrupts
280009 -25.9% 207426 ? 8% interrupts.CPU85.TLB:TLB_shootdowns
360577 ? 2% -15.7% 304023 ? 2% interrupts.CPU86.CAL:Function_call_interrupts
2119860 ? 2% -38.6% 1301024 interrupts.CPU86.LOC:Local_timer_interrupts
846687 ? 5% -42.3% 488197 ? 4% interrupts.CPU86.RES:Rescheduling_interrupts
282430 -26.7% 206906 ? 8% interrupts.CPU86.TLB:TLB_shootdowns
355552 ? 3% -14.8% 302975 ? 3% interrupts.CPU87.CAL:Function_call_interrupts
2119783 ? 2% -38.6% 1301007 interrupts.CPU87.LOC:Local_timer_interrupts
837108 ? 9% -41.0% 493489 ? 6% interrupts.CPU87.RES:Rescheduling_interrupts
279982 -25.6% 208231 ? 8% interrupts.CPU87.TLB:TLB_shootdowns
357243 -14.6% 305251 ? 2% interrupts.CPU88.CAL:Function_call_interrupts
2119798 ? 2% -38.6% 1300996 interrupts.CPU88.LOC:Local_timer_interrupts
830698 ? 6% -40.3% 496304 ? 5% interrupts.CPU88.RES:Rescheduling_interrupts
281515 -26.0% 208331 ? 8% interrupts.CPU88.TLB:TLB_shootdowns
357561 ? 2% -14.2% 306895 ? 2% interrupts.CPU89.CAL:Function_call_interrupts
2119898 ? 2% -38.6% 1300966 interrupts.CPU89.LOC:Local_timer_interrupts
844079 ? 7% -41.2% 495937 ? 6% interrupts.CPU89.RES:Rescheduling_interrupts
283569 -27.0% 206886 ? 8% interrupts.CPU89.TLB:TLB_shootdowns
363672 -26.6% 267079 ? 4% interrupts.CPU9.CAL:Function_call_interrupts
2119924 ? 2% -38.6% 1301011 interrupts.CPU9.LOC:Local_timer_interrupts
803331 ? 4% -55.1% 360571 ? 4% interrupts.CPU9.RES:Rescheduling_interrupts
282194 -25.0% 211729 ? 8% interrupts.CPU9.TLB:TLB_shootdowns
357677 ? 2% -14.4% 306269 ? 2% interrupts.CPU90.CAL:Function_call_interrupts
2119887 ? 2% -38.6% 1300984 interrupts.CPU90.LOC:Local_timer_interrupts
837795 ? 9% -41.2% 492875 ? 6% interrupts.CPU90.RES:Rescheduling_interrupts
281825 -26.2% 207949 ? 8% interrupts.CPU90.TLB:TLB_shootdowns
359694 ? 2% -14.8% 306597 interrupts.CPU91.CAL:Function_call_interrupts
2119761 ? 2% -38.6% 1300987 interrupts.CPU91.LOC:Local_timer_interrupts
842202 ? 9% -41.4% 493906 ? 6% interrupts.CPU91.RES:Rescheduling_interrupts
282106 -26.3% 207777 ? 8% interrupts.CPU91.TLB:TLB_shootdowns
375385 ? 2% -19.3% 302982 ? 3% interrupts.CPU92.CAL:Function_call_interrupts
2119879 ? 2% -38.6% 1300977 interrupts.CPU92.LOC:Local_timer_interrupts
847516 ? 8% -42.4% 488171 ? 5% interrupts.CPU92.RES:Rescheduling_interrupts
280812 -26.1% 207449 ? 9% interrupts.CPU92.TLB:TLB_shootdowns
353136 -11.6% 312278 ? 3% interrupts.CPU93.CAL:Function_call_interrupts
2119901 ? 2% -38.6% 1300954 interrupts.CPU93.LOC:Local_timer_interrupts
861749 ? 9% -42.5% 495355 ? 6% interrupts.CPU93.RES:Rescheduling_interrupts
281193 -26.3% 207249 ? 8% interrupts.CPU93.TLB:TLB_shootdowns
368783 ? 6% -18.1% 302055 ? 3% interrupts.CPU94.CAL:Function_call_interrupts
2119918 ? 2% -38.6% 1300957 interrupts.CPU94.LOC:Local_timer_interrupts
853109 ? 8% -42.9% 486709 ? 5% interrupts.CPU94.RES:Rescheduling_interrupts
283366 -26.9% 207085 ? 8% interrupts.CPU94.TLB:TLB_shootdowns
364463 ? 4% -15.3% 308673 ? 3% interrupts.CPU95.CAL:Function_call_interrupts
2119853 ? 2% -38.6% 1300975 interrupts.CPU95.LOC:Local_timer_interrupts
824354 ? 7% -40.8% 488005 ? 4% interrupts.CPU95.RES:Rescheduling_interrupts
282090 -27.3% 205190 ? 9% interrupts.CPU95.TLB:TLB_shootdowns
1925 ? 21% +47.4% 2838 ? 2% interrupts.IWI:IRQ_work_interrupts
2.035e+08 ? 2% -38.6% 1.249e+08 interrupts.LOC:Local_timer_interrupts
288.00 -33.3% 192.00 interrupts.MCP:Machine_check_polls
78939352 ? 4% -48.2% 40908024 ? 4% interrupts.RES:Rescheduling_interrupts
27108861 -26.0% 20070842 ? 8% interrupts.TLB:TLB_shootdowns
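Note: to cross-check the RES/CAL/TLB/LOC reductions above outside of lkp, one
can sample /proc/interrupts before and after the workload and compare the
per-CPU deltas. The following is a minimal sketch only (a hypothetical helper,
not part of the lkp-tests tooling; the 60-second window and the set of
interrupt classes watched are assumptions):

#!/usr/bin/env python3
# Sample /proc/interrupts twice and print per-CPU deltas for the interrupt
# classes compared in the table above (RES, CAL, TLB, LOC).
import time

WATCH = {"RES", "CAL", "TLB", "LOC"}   # rescheduling, function-call, TLB-shootdown, local-timer

def snapshot():
    counts = {}
    with open("/proc/interrupts") as f:
        ncpu = len(f.readline().split())          # header row: CPU0 CPU1 ...
        for line in f:
            name = line.split(":", 1)[0].strip()
            if name in WATCH:
                fields = line.split(":", 1)[1].split()
                counts[name] = [int(x) for x in fields[:ncpu]]
    return counts

before = snapshot()
time.sleep(60)          # assumed sample window; run the workload in parallel
after = snapshot()

for name in sorted(WATCH):
    delta = [a - b for a, b in zip(after[name], before[name])]
    print(f"{name}: total={sum(delta)} max_cpu={max(delta)}")

Running this alongside the will-it-scale unix1 testcase on both kernels gives
totals that can be compared against the interrupts.* rows above.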
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure                     Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected]        Intel Corporation
Thanks,
Oliver Sang