Greetings,
FYI, we noticed a -6.1% regression of aim9.signal_test.ops_per_sec due to commit:
commit: 83b62687a05205847d627f29126a8fee3c644335 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: aim9
on test machine: 256 threads Intel(R) Genuine Intel(R) CPU 0000 @ 1.30GHz with 112G memory
with following parameters:
testtime: 5s
test: all
cpufreq_governor: performance
ucode: 0xffff0190
test-description: Suite IX is the "AIM Independent Resource Benchmark": the famous synthetic benchmark.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite9/
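The headline figure is simply the relative change of the mean ops_per_sec between the parent release (v5.12-rc3) and the offending commit. A minimal sketch of that calculation, using the aim9.signal_test.ops_per_sec means reported further below:

```python
# Percent change of a benchmark metric between base and patched kernels.
# The two means are the aim9.signal_test.ops_per_sec values from this report.
def percent_change(base: float, patched: float) -> float:
    """Relative change of `patched` vs `base`, in percent."""
    return (patched - base) / base * 100.0

base_ops = 112653      # v5.12-rc3
patched_ops = 105825   # 83b62687a0

print(f"{percent_change(base_ops, patched_ops):+.1f}%")  # → -6.1%
```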
In addition, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -2.9% regression |
| test machine | 88 threads Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=50% |
| | test=mmap2 |
| | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -3.0% regression |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=100% |
| | test=mmap1 |
| | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | unixbench: boot-time.boot 0.9% regression |
| test machine | 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=30% |
| | runtime=300s |
| | test=pipe |
| | ucode=0x4003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | pigz: pigz.throughput 4.4% improvement |
| test machine | 256 threads Intel(R) Genuine Intel(R) CPU 0000 @ 1.30GHz with 112G memory |
| test parameters | blocksize=128K |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | ucode=0x11 |
+------------------+---------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <[email protected]>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml
bin/lkp run compatible-job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/lkp-knl-f1/all/aim9/5s/0xffff0190
commit:
v5.12-rc3
83b62687a0 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
v5.12-rc3 83b62687a05205847d627f29126
---------------- ---------------------------
%stddev %change %stddev
\ | \
53961 +3.0% 55574 aim9.creat-clo.ops_per_sec
27705 -1.6% 27260 aim9.disk_src.ops_per_sec
195428 +1.4% 198232 aim9.disk_wrt.ops_per_sec
45889 -5.0% 43615 aim9.link_test.ops_per_sec
112653 -6.1% 105825 aim9.signal_test.ops_per_sec
83503 -3.7% 80386 aim9.stream_pipe.ops_per_sec
100185 +2.3% 102451 aim9.sync_disk_rw.ops_per_sec
17589 -1.1% 17397 aim9.tcp_test.ops_per_sec
102.10 +0.8% 102.97 aim9.time.system_time
3155 ± 4% -14.2% 2707 ± 7% slabinfo.file_lock_cache.active_objs
3155 ± 4% -14.2% 2707 ± 7% slabinfo.file_lock_cache.num_objs
0.07 ± 19% -29.4% 0.05 ± 2% perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.05 ± 10% -23.0% 0.04 ± 6% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.19 ± 61% -62.3% 0.07 ± 20% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
18.26 ± 69% -86.0% 2.56 ±217% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
38.06 ± 11% +30.9% 49.82 ± 14% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
8.07 -0.4 7.64 perf-stat.i.branch-miss-rate%
15.03 +0.5 15.48 perf-stat.i.cache-miss-rate%
96621516 -1.7% 95018174 perf-stat.i.cache-references
2.00 ± 2% -0.1 1.87 ± 2% perf-stat.i.iTLB-load-miss-rate%
56545998 -7.4% 52363118 perf-stat.i.iTLB-load-misses
52.58 ± 2% +7.7% 56.63 ± 2% perf-stat.i.instructions-per-iTLB-miss
7.87 -0.4 7.47 ± 2% perf-stat.overall.branch-miss-rate%
14.95 +0.4 15.39 perf-stat.overall.cache-miss-rate%
1.89 ± 2% -0.1 1.77 ± 2% perf-stat.overall.iTLB-load-miss-rate%
51.76 ± 2% +7.5% 55.62 ± 2% perf-stat.overall.instructions-per-iTLB-miss
96305190 -1.7% 94701917 perf-stat.ps.cache-references
56334421 -7.4% 52164265 perf-stat.ps.iTLB-load-misses
4272 ±131% +931.3% 44066 ±134% interrupts.35:IR-PCI-MSI.2621444-edge.eth0-TxRx-3
267.50 ± 31% +46.9% 393.00 ± 20% interrupts.CPU102.NMI:Non-maskable_interrupts
267.50 ± 31% +46.9% 393.00 ± 20% interrupts.CPU102.PMI:Performance_monitoring_interrupts
4272 ±131% +931.3% 44066 ±134% interrupts.CPU15.35:IR-PCI-MSI.2621444-edge.eth0-TxRx-3
213.00 ± 29% +35.4% 288.50 ± 32% interrupts.CPU157.NMI:Non-maskable_interrupts
213.00 ± 29% +35.4% 288.50 ± 32% interrupts.CPU157.PMI:Performance_monitoring_interrupts
1213 +7.4% 1303 ± 6% interrupts.CPU181.CAL:Function_call_interrupts
205.00 ± 17% +155.5% 523.83 ± 91% interrupts.CPU222.NMI:Non-maskable_interrupts
205.00 ± 17% +155.5% 523.83 ± 91% interrupts.CPU222.PMI:Performance_monitoring_interrupts
205.33 ± 4% +56.6% 321.50 ± 31% interrupts.CPU227.NMI:Non-maskable_interrupts
205.33 ± 4% +56.6% 321.50 ± 31% interrupts.CPU227.PMI:Performance_monitoring_interrupts
243.00 ± 32% +142.9% 590.33 ± 78% interrupts.CPU231.NMI:Non-maskable_interrupts
243.00 ± 32% +142.9% 590.33 ± 78% interrupts.CPU231.PMI:Performance_monitoring_interrupts
35.00 ± 53% +354.3% 159.00 ±128% interrupts.CPU24.RES:Rescheduling_interrupts
52.33 ± 26% +136.6% 123.83 ± 72% interrupts.CPU44.RES:Rescheduling_interrupts
6158 ± 4% +15.4% 7104 ± 2% interrupts.RES:Rescheduling_interrupts
0.82 ± 7% -0.2 0.60 ± 9% perf-profile.calltrace.cycles-pp.ktime_get.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.03 ± 6% -0.2 0.85 ± 8% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.61 ± 3% -0.2 1.45 ± 4% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.31 ± 5% -0.2 1.16 ± 3% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit_rcu.sysvec_apic_timer_interrupt
0.88 ± 3% -0.1 0.74 ± 3% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.rebalance_domains.__softirqentry_text_start
0.95 ± 4% -0.1 0.81 ± 3% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit_rcu
0.73 ± 4% -0.1 0.63 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_governor_latency_req.menu_select.do_idle.cpu_startup_entry.start_secondary
3.18 ± 4% +0.2 3.39 ± 4% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.75 ± 14% +0.2 0.97 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
2.88 ± 4% +0.2 3.10 ± 5% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.17 ± 7% +0.2 1.41 ± 8% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
11.41 ± 4% +1.2 12.60 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
3.72 ± 4% +1.4 5.09 ± 3% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
9.67 ± 2% +1.4 11.05 ± 2% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
9.14 ± 3% +1.4 10.54 ± 2% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
21.22 ± 2% +1.6 22.86 ± 2% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
28.50 ± 2% +1.9 30.42 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
29.95 ± 2% +2.0 31.92 ± 2% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
6.37 ± 2% -0.4 6.00 ± 5% perf-profile.children.cycles-pp.irq_exit_rcu
1.67 ± 7% -0.2 1.48 ± 3% perf-profile.children.cycles-pp.load_balance
0.49 ± 7% -0.2 0.30 ± 9% perf-profile.children.cycles-pp.hrtimer_forward
1.05 ± 6% -0.2 0.86 ± 8% perf-profile.children.cycles-pp.tick_nohz_irq_exit
1.30 ± 6% -0.2 1.12 ± 3% perf-profile.children.cycles-pp.find_busiest_group
1.23 ± 6% -0.2 1.05 ± 3% perf-profile.children.cycles-pp.update_sd_lb_stats
1.64 ± 4% -0.2 1.47 ± 4% perf-profile.children.cycles-pp.rebalance_domains
0.44 ± 9% -0.2 0.29 ± 5% perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.42 ± 8% -0.1 0.30 ± 7% perf-profile.children.cycles-pp.get_cpu_device
0.76 ± 4% -0.1 0.65 ± 4% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.30 ± 2% -0.1 0.22 ± 7% perf-profile.children.cycles-pp.cpumask_next_and
0.16 ± 37% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.__dso__load_kallsyms
0.16 ± 6% -0.1 0.10 ± 6% perf-profile.children.cycles-pp.arch_cpu_idle_exit
0.08 ± 11% -0.1 0.03 ±100% perf-profile.children.cycles-pp.cpuidle_get_cpu_driver
0.53 ± 2% -0.0 0.49 ± 3% perf-profile.children.cycles-pp.idle_cpu
0.19 ± 6% -0.0 0.15 ± 14% perf-profile.children.cycles-pp.tick_sched_do_timer
0.13 ± 8% +0.0 0.16 ± 6% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.15 ± 6% +0.1 0.22 ± 7% perf-profile.children.cycles-pp.cpuidle_not_available
0.75 ± 4% +0.1 0.84 ± 2% perf-profile.children.cycles-pp.__x86_retpoline_rax
0.40 ± 4% +0.1 0.51 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
1.85 ± 6% +0.2 2.03 ± 5% perf-profile.children.cycles-pp.get_next_timer_interrupt
3.26 ± 4% +0.2 3.46 ± 4% perf-profile.children.cycles-pp.irq_enter_rcu
11.91 ± 3% +1.1 13.05 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
10.11 ± 2% +1.3 11.44 ± 2% perf-profile.children.cycles-pp.tick_sched_handle
9.59 ± 2% +1.4 10.95 perf-profile.children.cycles-pp.update_process_times
4.07 ± 4% +1.4 5.43 ± 3% perf-profile.children.cycles-pp.scheduler_tick
21.92 +1.5 23.46 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
29.28 ± 2% +1.8 31.06 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
30.69 +1.8 32.52 ± 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.48 ± 6% -0.2 0.29 ± 8% perf-profile.self.cycles-pp.hrtimer_forward
0.27 ± 23% -0.2 0.12 ± 15% perf-profile.self.cycles-pp.rcu_core
0.43 ± 8% -0.2 0.28 ± 5% perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.77 ± 4% -0.1 0.65 ± 6% perf-profile.self.cycles-pp.tick_nohz_next_event
0.41 ± 7% -0.1 0.30 ± 8% perf-profile.self.cycles-pp.get_cpu_device
0.12 ± 8% -0.1 0.05 ± 45% perf-profile.self.cycles-pp.arch_cpu_idle_exit
0.47 ± 3% -0.0 0.42 ± 4% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.15 ± 11% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.tick_sched_do_timer
0.52 -0.0 0.49 ± 4% perf-profile.self.cycles-pp.idle_cpu
0.25 ± 8% -0.0 0.22 ± 6% perf-profile.self.cycles-pp.can_stop_idle_tick
0.27 ± 10% +0.1 0.32 ± 4% perf-profile.self.cycles-pp.tick_sched_timer
0.14 ± 6% +0.1 0.21 ± 7% perf-profile.self.cycles-pp.cpuidle_not_available
0.79 ± 8% +0.1 0.86 ± 3% perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.33 ± 5% +0.1 0.44 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.94 ± 20% +0.4 1.35 ± 6% perf-profile.self.cycles-pp.update_process_times
0.82 ± 8% +1.3 2.14 ± 8% perf-profile.self.cycles-pp.scheduler_tick
aim9.signal_test.ops_per_sec
116000 +------------------------------------------------------------------+
| +.. |
114000 |-+ .+.. : +. |
| +.+.. .+. : +.. +.+.. +..+. +.+.. |
|.. +.+..+ + .. + +..+..+. .. +.+..|
112000 |-+ + + + |
| |
110000 |-+O O O |
| O |
108000 |-+ |
| O O O O O |
| O O O O |
106000 |-+ O O O O O |
| O O O O O O O |
104000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
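One detail worth noting in the perf-stat block above: instructions-per-iTLB-miss is a derived ratio (instructions / iTLB-load-misses), so its +7.5% rise follows almost entirely from the -7.4% drop in misses, with the instruction count nearly unchanged. A small consistency sketch, backing the implied instruction counts out of the figures reported above:

```python
# instructions-per-iTLB-miss = instructions / iTLB-load-misses.
# Back out the implied instruction counts from the reported ratio and miss
# counts, then compare the relative changes of both quantities.
misses_base, ipm_base = 56334421, 51.76  # v5.12-rc3
misses_new,  ipm_new  = 52164265, 55.62  # 83b62687a0

instr_base = ipm_base * misses_base
instr_new  = ipm_new * misses_new

print(f"instructions change: {(instr_new / instr_base - 1) * 100:+.1f}%")  # → -0.5%
print(f"ratio change:        {(ipm_new / ipm_base - 1) * 100:+.1f}%")      # → +7.5%
```

So the commit is not making the kernel execute meaningfully fewer instructions here; it is shifting where the iTLB misses land.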
***************************************************************************************************
lkp-csl-2sp9: 88 threads Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/mmap2/will-it-scale/0x5003006
commit:
v5.12-rc3
83b62687a0 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
v5.12-rc3 83b62687a05205847d627f29126
---------------- ---------------------------
%stddev %change %stddev
\ | \
25370616 -2.9% 24627565 will-it-scale.44.processes
576604 -2.9% 559716 will-it-scale.per_process_ops
25370616 -2.9% 24627565 will-it-scale.workload
0.01 ± 11% -21.4% 0.01 ± 6% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
24625 ± 22% -50.9% 12097 ± 78% softirqs.CPU59.SCHED
172.67 ± 25% -50.6% 85.33 ± 80% interrupts.CPU15.RES:Rescheduling_interrupts
586.33 ± 6% +25.7% 736.83 ± 11% interrupts.CPU43.CAL:Function_call_interrupts
4746 ± 47% +66.1% 7883 interrupts.CPU51.NMI:Non-maskable_interrupts
4746 ± 47% +66.1% 7883 interrupts.CPU51.PMI:Performance_monitoring_interrupts
3789 ± 75% +81.1% 6863 ± 22% interrupts.CPU58.NMI:Non-maskable_interrupts
3789 ± 75% +81.1% 6863 ± 22% interrupts.CPU58.PMI:Performance_monitoring_interrupts
4074 ± 42% +60.7% 6548 ± 27% interrupts.CPU66.NMI:Non-maskable_interrupts
4074 ± 42% +60.7% 6548 ± 27% interrupts.CPU66.PMI:Performance_monitoring_interrupts
6.539e+10 -2.9% 6.349e+10 perf-stat.i.branch-instructions
1.985e+08 -2.4% 1.938e+08 perf-stat.i.branch-misses
0.46 +3.0% 0.47 perf-stat.i.cpi
6.817e+10 -2.9% 6.616e+10 perf-stat.i.dTLB-loads
3.096e+10 -2.9% 3.006e+10 perf-stat.i.dTLB-stores
1.43e+08 -2.4% 1.396e+08 perf-stat.i.iTLB-load-misses
1725289 +13.4% 1956714 perf-stat.i.iTLB-loads
2.707e+11 -2.9% 2.628e+11 perf-stat.i.instructions
2.20 -2.9% 2.14 perf-stat.i.ipc
1869 -2.9% 1814 perf-stat.i.metric.M/sec
0.45 +3.0% 0.47 perf-stat.overall.cpi
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
2.20 -2.9% 2.14 perf-stat.overall.ipc
6.517e+10 -2.9% 6.327e+10 perf-stat.ps.branch-instructions
1.979e+08 -2.4% 1.932e+08 perf-stat.ps.branch-misses
6.793e+10 -2.9% 6.593e+10 perf-stat.ps.dTLB-loads
3.085e+10 -2.9% 2.996e+10 perf-stat.ps.dTLB-stores
1.425e+08 -2.4% 1.391e+08 perf-stat.ps.iTLB-load-misses
1719407 +13.4% 1950026 perf-stat.ps.iTLB-loads
2.698e+11 -2.9% 2.619e+11 perf-stat.ps.instructions
8.158e+13 -3.0% 7.911e+13 perf-stat.total.instructions
29.12 ± 8% -4.0 25.12 ± 12% perf-profile.calltrace.cycles-pp.__mmap
26.40 ± 7% -3.4 22.98 ± 11% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
25.94 ± 7% -3.4 22.56 ± 11% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
25.66 ± 7% -3.3 22.32 ± 11% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
24.35 ± 7% -3.2 21.15 ± 11% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
21.78 ± 7% -2.8 18.98 ± 11% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.80 ± 7% -2.2 15.55 ± 12% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
8.29 ± 7% -1.1 7.17 ± 11% perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
5.35 ± 8% -0.7 4.67 ± 12% perf-profile.calltrace.cycles-pp.__cond_resched.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
2.95 ± 8% -0.5 2.50 ± 11% perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
1.52 ± 19% -0.4 1.09 ± 33% perf-profile.calltrace.cycles-pp.__entry_text_start.__munmap
3.01 ± 7% -0.4 2.62 ± 11% perf-profile.calltrace.cycles-pp.d_path.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
2.68 ± 8% -0.4 2.33 ± 12% perf-profile.calltrace.cycles-pp.rcu_all_qs.__cond_resched.unmap_page_range.unmap_vmas.unmap_region
2.37 ± 7% -0.3 2.04 ± 11% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
0.60 ± 6% -0.3 0.28 ±100% perf-profile.calltrace.cycles-pp.prepend_name.prepend_path.d_path.perf_event_mmap.mmap_region
2.35 ± 8% -0.3 2.04 ± 13% perf-profile.calltrace.cycles-pp.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
1.95 ± 7% -0.3 1.64 ± 12% perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
1.33 ± 7% -0.3 1.07 ± 11% perf-profile.calltrace.cycles-pp.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.66 ± 7% -0.2 1.45 ± 11% perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.shmem_get_unmapped_area.get_unmapped_area.do_mmap.vm_mmap_pgoff
1.30 ± 7% -0.2 1.11 ± 11% perf-profile.calltrace.cycles-pp.prepend_path.d_path.perf_event_mmap.mmap_region.do_mmap
0.94 ± 7% -0.1 0.79 ± 12% perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.69 ± 8% -0.1 0.58 ± 12% perf-profile.calltrace.cycles-pp.kfree.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
29.21 ± 7% -3.8 25.42 ± 11% perf-profile.children.cycles-pp.__mmap
25.70 ± 7% -3.4 22.34 ± 11% perf-profile.children.cycles-pp.ksys_mmap_pgoff
24.41 ± 7% -3.2 21.20 ± 11% perf-profile.children.cycles-pp.vm_mmap_pgoff
21.82 ± 7% -2.8 19.01 ± 11% perf-profile.children.cycles-pp.do_mmap
17.88 ± 7% -2.3 15.63 ± 12% perf-profile.children.cycles-pp.mmap_region
8.36 ± 7% -1.1 7.23 ± 11% perf-profile.children.cycles-pp.perf_event_mmap
6.02 ± 7% -0.8 5.25 ± 12% perf-profile.children.cycles-pp.__cond_resched
2.97 ± 7% -0.5 2.52 ± 11% perf-profile.children.cycles-pp.get_unmapped_area
3.05 ± 7% -0.4 2.65 ± 12% perf-profile.children.cycles-pp.d_path
2.99 ± 8% -0.4 2.61 ± 11% perf-profile.children.cycles-pp.rcu_all_qs
2.39 ± 8% -0.3 2.05 ± 11% perf-profile.children.cycles-pp.zap_pte_range
2.01 ± 7% -0.3 1.70 ± 12% perf-profile.children.cycles-pp.perf_iterate_sb
1.33 ± 7% -0.3 1.08 ± 11% perf-profile.children.cycles-pp.security_mmap_file
1.68 ± 7% -0.2 1.47 ± 10% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
1.33 ± 7% -0.2 1.13 ± 11% perf-profile.children.cycles-pp.prepend_path
0.40 ± 8% -0.2 0.23 ± 10% perf-profile.children.cycles-pp.security_mmap_addr
0.33 ± 8% -0.2 0.16 ± 9% perf-profile.children.cycles-pp.cap_mmap_addr
0.97 ± 7% -0.1 0.83 ± 12% perf-profile.children.cycles-pp.find_vma
0.65 ± 6% -0.1 0.53 ± 11% perf-profile.children.cycles-pp.common_file_perm
0.69 ± 8% -0.1 0.58 ± 12% perf-profile.children.cycles-pp.kfree
0.54 ± 6% -0.1 0.44 ± 12% perf-profile.children.cycles-pp.perf_event_mmap_output
0.63 ± 6% -0.1 0.53 ± 11% perf-profile.children.cycles-pp.prepend_name
0.17 ± 11% -0.1 0.10 ± 14% perf-profile.children.cycles-pp.blocking_notifier_call_chain
0.34 ± 8% -0.1 0.28 ± 12% perf-profile.children.cycles-pp.__vm_enough_memory
0.07 ± 9% -0.0 0.03 ±100% perf-profile.children.cycles-pp.common_mmap
0.07 ± 10% -0.0 0.04 ± 71% perf-profile.children.cycles-pp.munmap@plt
0.17 ± 10% +0.1 0.26 ± 9% perf-profile.children.cycles-pp.testcase
7.17 ± 7% -0.8 6.33 ± 11% perf-profile.self.cycles-pp.unmap_page_range
3.00 ± 7% -0.4 2.60 ± 12% perf-profile.self.cycles-pp.__cond_resched
1.71 ± 7% -0.2 1.48 ± 11% perf-profile.self.cycles-pp.zap_pte_range
1.41 ± 7% -0.2 1.21 ± 12% perf-profile.self.cycles-pp.perf_iterate_sb
0.87 ± 7% -0.2 0.70 ± 12% perf-profile.self.cycles-pp.__mmap
0.32 ± 8% -0.2 0.15 ± 8% perf-profile.self.cycles-pp.cap_mmap_addr
0.97 ± 8% -0.1 0.82 ± 12% perf-profile.self.cycles-pp.__entry_text_start
0.83 ± 6% -0.1 0.73 ± 11% perf-profile.self.cycles-pp.vm_unmapped_area
0.52 ± 6% -0.1 0.41 ± 12% perf-profile.self.cycles-pp.common_file_perm
0.65 ± 7% -0.1 0.54 ± 11% perf-profile.self.cycles-pp.find_vma
0.62 ± 6% -0.1 0.52 ± 11% perf-profile.self.cycles-pp.prepend_name
0.48 ± 8% -0.1 0.39 ± 12% perf-profile.self.cycles-pp.perf_event_mmap_output
0.66 ± 7% -0.1 0.58 ± 12% perf-profile.self.cycles-pp.down_write
0.17 ± 10% -0.1 0.10 ± 16% perf-profile.self.cycles-pp.blocking_notifier_call_chain
0.41 ± 7% -0.1 0.35 ± 13% perf-profile.self.cycles-pp.__vm_munmap
0.15 ± 7% -0.1 0.10 ± 12% perf-profile.self.cycles-pp.remove_vma
0.22 ± 8% -0.0 0.18 ± 13% perf-profile.self.cycles-pp.unmap_vmas
0.15 ± 11% +0.1 0.22 ± 9% perf-profile.self.cycles-pp.testcase
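A quick sanity check on the perf-stat block above: CPI (cycles per instruction) is the reciprocal of IPC, so for small changes the -2.9% IPC drop and +3.0% CPI rise are two views of the same slowdown (1/(1-x) ≈ 1+x). A sketch with the rounded overall values from the table (the rounded inputs give -2.7%/+2.8%; the unrounded counters behind them give the reported -2.9%/+3.0%):

```python
# CPI is the reciprocal of IPC, so a small IPC drop appears as a nearly
# equal-magnitude CPI rise. Values are perf-stat.overall.ipc from the table.
ipc_base, ipc_new = 2.20, 2.14  # v5.12-rc3 vs 83b62687a0 (rounded)

ipc_change = ipc_new / ipc_base - 1
cpi_change = (1 / ipc_new) / (1 / ipc_base) - 1  # equals ipc_base/ipc_new - 1

print(f"IPC change: {ipc_change * 100:+.1f}%")  # → -2.7%
print(f"CPI change: {cpi_change * 100:+.1f}%")  # → +2.8%
```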
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/mmap1/will-it-scale/0x2006a0a
commit:
v5.12-rc3
83b62687a0 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
v5.12-rc3 83b62687a05205847d627f29126
---------------- ---------------------------
%stddev %change %stddev
\ | \
24498098 -3.0% 23771515 will-it-scale.104.processes
235557 -3.0% 228571 will-it-scale.per_process_ops
24498098 -3.0% 23771515 will-it-scale.workload
353.67 ± 3% -14.4% 302.67 ± 6% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
79.00 +1.3% 80.00 vmstat.cpu.sy
19.00 -5.3% 18.00 vmstat.cpu.us
319.83 ± 5% +64.7% 526.83 ± 48% interrupts.CPU101.RES:Rescheduling_interrupts
334.33 ± 6% +21.1% 404.83 ± 18% interrupts.CPU83.RES:Rescheduling_interrupts
611.83 ± 13% +141.2% 1476 ± 53% interrupts.CPU91.CAL:Function_call_interrupts
5.627e+10 -2.9% 5.461e+10 perf-stat.i.branch-instructions
0.46 -0.0 0.45 perf-stat.i.branch-miss-rate%
2.501e+08 -4.9% 2.377e+08 perf-stat.i.branch-misses
1.19 +3.1% 1.23 perf-stat.i.cpi
48893509 -2.9% 47451418 perf-stat.i.dTLB-load-misses
5.744e+10 -2.9% 5.575e+10 perf-stat.i.dTLB-loads
2.603e+10 -2.9% 2.526e+10 perf-stat.i.dTLB-stores
49735573 -4.5% 47501047 perf-stat.i.iTLB-load-misses
2.327e+11 -2.9% 2.259e+11 perf-stat.i.instructions
0.84 -3.0% 0.81 perf-stat.i.ipc
1343 -2.9% 1304 perf-stat.i.metric.M/sec
21939 ± 4% -7.4% 20320 ± 2% perf-stat.i.node-store-misses
0.44 -0.0 0.44 perf-stat.overall.branch-miss-rate%
1.19 +3.1% 1.23 perf-stat.overall.cpi
4679 +1.6% 4755 perf-stat.overall.instructions-per-iTLB-miss
0.84 -3.0% 0.81 perf-stat.overall.ipc
5.608e+10 -2.9% 5.443e+10 perf-stat.ps.branch-instructions
2.493e+08 -4.9% 2.37e+08 perf-stat.ps.branch-misses
48739146 -2.9% 47302145 perf-stat.ps.dTLB-load-misses
5.725e+10 -2.9% 5.557e+10 perf-stat.ps.dTLB-loads
2.594e+10 -2.9% 2.518e+10 perf-stat.ps.dTLB-stores
49564030 -4.5% 47334654 perf-stat.ps.iTLB-load-misses
2.319e+11 -2.9% 2.251e+11 perf-stat.ps.instructions
21867 ± 4% -7.4% 20253 ± 2% perf-stat.ps.node-store-misses
7.016e+13 -3.1% 6.8e+13 perf-stat.total.instructions
36.13 -1.3 34.87 perf-profile.calltrace.cycles-pp.__mmap
1.39 -0.8 0.64 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__munmap
29.02 -0.8 28.27 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
1.36 -0.7 0.66 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__mmap
23.06 -0.6 22.44 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
22.17 -0.6 21.60 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
22.49 -0.6 21.93 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
15.37 -0.5 14.87 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
19.29 -0.4 18.84 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.54 -0.3 5.23 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.53 -0.3 0.25 ±100% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
5.92 -0.3 5.65 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__munmap
1.79 -0.3 1.52 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.85 -0.2 2.63 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__mmap
3.56 -0.2 3.35 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
2.75 -0.2 2.54 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__munmap
1.36 -0.2 1.16 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
2.67 ± 2% -0.2 2.52 ± 2% perf-profile.calltrace.cycles-pp.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
4.67 -0.1 4.53 perf-profile.calltrace.cycles-pp.vm_area_alloc.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.57 ± 2% -0.1 0.44 ± 44% perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
2.10 ± 3% -0.1 1.96 ± 3% perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
5.68 -0.1 5.56 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__mmap
2.11 -0.1 2.00 perf-profile.calltrace.cycles-pp.rcu_all_qs.__cond_resched.unmap_page_range.unmap_vmas.unmap_region
2.50 -0.1 2.40 perf-profile.calltrace.cycles-pp.__entry_text_start.__munmap
4.41 -0.1 4.31 perf-profile.calltrace.cycles-pp.__cond_resched.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
2.46 -0.1 2.37 perf-profile.calltrace.cycles-pp.__entry_text_start.__mmap
0.73 -0.1 0.65 perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.1 0.98 perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.64 ± 2% -0.1 0.58 ± 3% perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.17 -0.0 1.14 perf-profile.calltrace.cycles-pp.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.67 +0.0 0.70 perf-profile.calltrace.cycles-pp.cap_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff
0.54 +0.0 0.58 ± 4% perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.05 +0.1 1.10 perf-profile.calltrace.cycles-pp.vm_unmapped_area.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
1.95 +0.1 2.05 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.60 +0.2 2.76 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
0.86 +0.2 1.10 ± 2% perf-profile.calltrace.cycles-pp.__vma_rb_erase.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
7.82 +0.5 8.31 perf-profile.calltrace.cycles-pp.free_pgd_range.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
16.59 +0.6 17.16 perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
26.78 +1.2 27.98 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
63.40 +1.3 64.68 perf-profile.calltrace.cycles-pp.__munmap
30.03 +1.4 31.39 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
56.74 +1.8 58.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
40.19 +1.9 42.04 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
50.54 +2.1 52.62 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
50.27 +2.1 52.35 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
49.72 +2.1 51.82 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
48.25 +2.1 50.35 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.53 -1.3 35.27 perf-profile.children.cycles-pp.__mmap
22.23 -0.6 21.66 perf-profile.children.cycles-pp.vm_mmap_pgoff
22.52 -0.6 21.95 perf-profile.children.cycles-pp.ksys_mmap_pgoff
15.47 -0.5 14.95 perf-profile.children.cycles-pp.mmap_region
6.34 -0.5 5.88 perf-profile.children.cycles-pp.syscall_return_via_sysret
6.08 -0.4 5.64 perf-profile.children.cycles-pp.__entry_text_start
19.34 -0.4 18.90 perf-profile.children.cycles-pp.do_mmap
11.65 -0.4 11.26 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.66 -0.4 1.30 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
5.62 -0.3 5.32 perf-profile.children.cycles-pp.perf_event_mmap
1.86 -0.3 1.59 perf-profile.children.cycles-pp.vma_link
3.60 -0.2 3.38 perf-profile.children.cycles-pp.perf_iterate_sb
1.38 -0.2 1.16 perf-profile.children.cycles-pp.__vma_link_rb
2.72 ± 2% -0.2 2.56 ± 2% perf-profile.children.cycles-pp.remove_vma
4.67 -0.1 4.54 perf-profile.children.cycles-pp.vm_area_alloc
4.71 -0.1 4.58 perf-profile.children.cycles-pp.__cond_resched
2.14 ± 3% -0.1 2.00 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
3.88 -0.1 3.76 perf-profile.children.cycles-pp.kmem_cache_alloc
1.44 -0.1 1.34 perf-profile.children.cycles-pp.down_write_killable
2.43 -0.1 2.33 perf-profile.children.cycles-pp.rcu_all_qs
0.52 ± 2% -0.1 0.43 ± 2% perf-profile.children.cycles-pp.cap_mmap_addr
1.08 -0.1 1.00 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.65 ± 2% -0.1 0.59 ± 3% perf-profile.children.cycles-pp.security_mmap_addr
0.65 ± 4% -0.1 0.59 ± 5% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.60 -0.1 0.54 ± 3% perf-profile.children.cycles-pp.free_pgtables
0.29 -0.0 0.24 ± 5% perf-profile.children.cycles-pp.unlink_anon_vmas
0.15 ± 5% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.__rb_insert_augmented
0.15 ± 2% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
1.20 -0.0 1.17 perf-profile.children.cycles-pp.security_mmap_file
0.68 -0.0 0.65 perf-profile.children.cycles-pp.__might_sleep
0.33 -0.0 0.30 ? 2% perf-profile.children.cycles-pp.__x64_sys_mmap
0.19 ± 3% -0.0 0.16 ± 2% perf-profile.children.cycles-pp.blocking_notifier_call_chain
0.55 -0.0 0.52 perf-profile.children.cycles-pp.tlb_finish_mmu
0.15 ± 6% -0.0 0.13 ± 6% perf-profile.children.cycles-pp.kfree
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.vm_area_free
0.23 ± 2% -0.0 0.21 ± 3% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.17 ± 4% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.09 -0.0 0.08 ± 4% perf-profile.children.cycles-pp.unlink_file_vma
0.17 ± 2% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.unmap_single_vma
0.13 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.rcu_nocb_flush_deferred_wakeup
0.20 +0.0 0.22 ± 2% perf-profile.children.cycles-pp.userfaultfd_unmap_complete
0.19 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.ima_file_mmap
0.21 ± 2% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.cap_capable
0.29 ± 3% +0.0 0.32 ± 2% perf-profile.children.cycles-pp.vmacache_update
0.76 +0.0 0.78 perf-profile.children.cycles-pp.cap_vm_enough_memory
0.50 ± 2% +0.0 0.53 ± 3% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.45 ± 3% +0.0 0.49 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.23 ± 7% +0.0 0.27 perf-profile.children.cycles-pp.may_expand_vm
0.55 +0.0 0.59 ? 3% perf-profile.children.cycles-pp.lru_add_drain
0.38 +0.0 0.43 perf-profile.children.cycles-pp.sync_mm_rss
0.42 +0.0 0.47 ? 2% perf-profile.children.cycles-pp.tlb_gather_mmu
0.19 ? 2% +0.0 0.24 perf-profile.children.cycles-pp.vma_merge
1.06 +0.1 1.11 perf-profile.children.cycles-pp.vm_unmapped_area
0.11 ? 4% +0.1 0.17 ? 3% perf-profile.children.cycles-pp.strlen
0.31 +0.1 0.38 ? 2% perf-profile.children.cycles-pp.__vm_enough_memory
0.02 ?141% +0.1 0.09 perf-profile.children.cycles-pp.arch_vma_name
2.03 +0.1 2.12 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
2.65 +0.2 2.80 perf-profile.children.cycles-pp.zap_pte_range
0.87 +0.2 1.10 ? 2% perf-profile.children.cycles-pp.__vma_rb_erase
15.23 +0.4 15.61 perf-profile.children.cycles-pp.___might_sleep
7.83 +0.5 8.32 perf-profile.children.cycles-pp.free_pgd_range
85.88 +1.0 86.90 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
63.87 +1.3 65.15 perf-profile.children.cycles-pp.__munmap
29.17 +1.3 30.50 perf-profile.children.cycles-pp.unmap_page_range
30.05 +1.4 31.42 perf-profile.children.cycles-pp.unmap_vmas
73.70 +1.5 75.15 perf-profile.children.cycles-pp.do_syscall_64
40.24 +1.9 42.10 perf-profile.children.cycles-pp.unmap_region
50.32 +2.1 52.42 perf-profile.children.cycles-pp.__x64_sys_munmap
49.75 +2.1 51.87 perf-profile.children.cycles-pp.__vm_munmap
48.38 +2.1 50.49 perf-profile.children.cycles-pp.__do_munmap
6.32 -0.5 5.87 perf-profile.self.cycles-pp.syscall_return_via_sysret
10.95 -0.4 10.52 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
5.28 -0.4 4.87 perf-profile.self.cycles-pp.__entry_text_start
1.34 -0.2 1.13 perf-profile.self.cycles-pp.__vma_link_rb
1.83 ? 2% -0.2 1.62 perf-profile.self.cycles-pp.perf_iterate_sb
1.72 -0.1 1.58 perf-profile.self.cycles-pp.perf_event_mmap
1.67 ? 2% -0.1 1.55 perf-profile.self.cycles-pp.kmem_cache_alloc
0.54 -0.1 0.44 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.25 -0.1 0.17 ? 3% perf-profile.self.cycles-pp.security_vm_enough_memory_mm
0.41 ? 3% -0.1 0.33 ? 3% perf-profile.self.cycles-pp.cap_mmap_addr
1.76 -0.1 1.69 perf-profile.self.cycles-pp.rcu_all_qs
0.28 ? 2% -0.1 0.23 ? 4% perf-profile.self.cycles-pp.unlink_anon_vmas
0.53 -0.0 0.49 ? 2% perf-profile.self.cycles-pp.get_unmapped_area
2.30 -0.0 2.26 perf-profile.self.cycles-pp.__cond_resched
0.53 -0.0 0.50 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.29 -0.0 0.26 ? 3% perf-profile.self.cycles-pp.__x64_sys_mmap
0.19 ? 2% -0.0 0.16 ? 4% perf-profile.self.cycles-pp.blocking_notifier_call_chain
0.15 ? 3% -0.0 0.12 ? 4% perf-profile.self.cycles-pp.testcase
0.42 ? 3% -0.0 0.39 perf-profile.self.cycles-pp.vm_mmap_pgoff
0.55 -0.0 0.53 perf-profile.self.cycles-pp.down_write_killable
0.65 -0.0 0.63 perf-profile.self.cycles-pp.__mmap
0.63 -0.0 0.60 perf-profile.self.cycles-pp.__might_sleep
0.35 ? 2% -0.0 0.33 perf-profile.self.cycles-pp.tlb_finish_mmu
0.51 -0.0 0.49 perf-profile.self.cycles-pp.security_mmap_file
0.09 ? 5% -0.0 0.07 ? 5% perf-profile.self.cycles-pp.vm_area_free
0.13 ? 6% -0.0 0.11 ? 3% perf-profile.self.cycles-pp.__rb_insert_augmented
0.14 ? 6% -0.0 0.12 ? 6% perf-profile.self.cycles-pp.kfree
0.17 ? 2% -0.0 0.16 ? 2% perf-profile.self.cycles-pp.do_syscall_64
0.09 ? 4% -0.0 0.08 perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.16 ? 2% +0.0 0.18 ? 2% perf-profile.self.cycles-pp.unmap_single_vma
0.27 ? 3% +0.0 0.29 perf-profile.self.cycles-pp.vmacache_update
0.12 ? 4% +0.0 0.15 ? 3% perf-profile.self.cycles-pp.security_mmap_addr
0.19 ? 2% +0.0 0.22 ? 3% perf-profile.self.cycles-pp.cap_capable
0.81 +0.0 0.84 perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.12 ? 3% +0.0 0.15 ? 3% perf-profile.self.cycles-pp.__vm_enough_memory
0.22 ? 8% +0.0 0.26 perf-profile.self.cycles-pp.may_expand_vm
0.11 ? 3% +0.0 0.15 ? 6% perf-profile.self.cycles-pp.lru_add_drain
0.38 +0.0 0.42 perf-profile.self.cycles-pp.unmap_vmas
0.10 ? 4% +0.0 0.15 ? 3% perf-profile.self.cycles-pp.strlen
0.41 +0.0 0.46 ? 2% perf-profile.self.cycles-pp.tlb_gather_mmu
0.37 +0.0 0.42 perf-profile.self.cycles-pp.sync_mm_rss
0.18 ? 2% +0.0 0.23 ? 2% perf-profile.self.cycles-pp.vma_merge
1.02 +0.1 1.08 perf-profile.self.cycles-pp.vm_unmapped_area
0.00 +0.1 0.07 ? 5% perf-profile.self.cycles-pp.arch_vma_name
1.21 +0.1 1.29 perf-profile.self.cycles-pp.mmap_region
0.69 +0.1 0.78 perf-profile.self.cycles-pp.do_mmap
1.88 +0.1 1.99 perf-profile.self.cycles-pp.zap_pte_range
1.60 +0.1 1.75 perf-profile.self.cycles-pp.__do_munmap
0.85 +0.2 1.09 ? 2% perf-profile.self.cycles-pp.__vma_rb_erase
12.71 +0.3 12.98 perf-profile.self.cycles-pp.___might_sleep
7.77 +0.5 8.26 perf-profile.self.cycles-pp.free_pgd_range
10.90 +0.9 11.83 perf-profile.self.cycles-pp.unmap_page_range
***************************************************************************************************
lkp-csl-2sp4: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/30%/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2sp4/pipe/unixbench/0x4003006
commit:
v5.12-rc3
83b62687a0 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
v5.12-rc3 83b62687a05205847d627f29126
---------------- ---------------------------
%stddev %change %stddev
\ | \
27.05 +0.9% 27.30 boot-time.boot
11578 ± 36% +155.9% 29627 ± 64% numa-vmstat.node0.nr_anon_pages
6126 ± 33% -41.0% 3611 ± 12% proc-vmstat.numa_hint_faults
34497 ± 4% +11.2% 38375 ± 6% softirqs.CPU74.SCHED
32284 ± 5% +18.1% 38115 ± 4% softirqs.CPU86.SCHED
46301 ± 36% +156.1% 118593 ± 64% numa-meminfo.node0.AnonPages
65078 ± 25% +129.8% 149537 ± 44% numa-meminfo.node0.AnonPages.max
1105262 ± 5% +10.2% 1218242 ± 7% numa-meminfo.node0.MemUsed
1.41 ± 15% -15.7% 1.19 ± 7% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.01 ± 14% -72.0% 0.00 ± 76% perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
2.82 ± 15% -15.7% 2.38 ± 7% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.01 ± 14% -72.0% 0.00 ± 76% perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
1.808e+08 +1.0% 1.827e+08 perf-stat.i.iTLB-load-misses
27514 +15.0% 31648 ± 4% perf-stat.i.node-loads
1.805e+08 +1.0% 1.824e+08 perf-stat.ps.iTLB-load-misses
27463 +14.9% 31548 ± 4% perf-stat.ps.node-loads
0.70 ± 6% +0.8 1.51 ±105% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.82 ± 3% +1.3 2.16 ±126% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.68 ± 7% +0.6 1.23 ± 83% perf-profile.children.cycles-pp.hrtimer_interrupt
0.70 ± 7% +0.6 1.25 ± 83% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.98 ± 6% +0.8 1.80 ± 86% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.12 ± 5% +1.1 2.23 ± 95% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1592 ± 44% -50.3% 791.50 ± 51% interrupts.CPU11.CAL:Function_call_interrupts
463.00 ± 86% +403.2% 2329 ± 46% interrupts.CPU15.NMI:Non-maskable_interrupts
463.00 ± 86% +403.2% 2329 ± 46% interrupts.CPU15.PMI:Performance_monitoring_interrupts
609.33 ± 3% -12.8% 531.33 ± 2% interrupts.CPU20.CAL:Function_call_interrupts
1027 ±127% +299.5% 4105 ± 26% interrupts.CPU22.NMI:Non-maskable_interrupts
1027 ±127% +299.5% 4105 ± 26% interrupts.CPU22.PMI:Performance_monitoring_interrupts
445.67 ± 69% +357.1% 2037 ± 70% interrupts.CPU28.NMI:Non-maskable_interrupts
445.67 ± 69% +357.1% 2037 ± 70% interrupts.CPU28.PMI:Performance_monitoring_interrupts
914.00 ± 17% -28.5% 653.50 ± 16% interrupts.CPU33.CAL:Function_call_interrupts
4418 ± 6% -70.1% 1319 ±118% interrupts.CPU39.NMI:Non-maskable_interrupts
4418 ± 6% -70.1% 1319 ±118% interrupts.CPU39.PMI:Performance_monitoring_interrupts
3256 ± 71% -82.2% 580.83 ± 7% interrupts.CPU4.CAL:Function_call_interrupts
662.33 ± 15% +12.5% 745.00 ± 16% interrupts.CPU41.CAL:Function_call_interrupts
47.33 ± 95% -83.5% 7.83 ± 85% interrupts.CPU48.TLB:TLB_shootdowns
505.00 ±110% +332.9% 2186 ± 87% interrupts.CPU56.NMI:Non-maskable_interrupts
505.00 ±110% +332.9% 2186 ± 87% interrupts.CPU56.PMI:Performance_monitoring_interrupts
4940 ± 28% -61.3% 1914 ± 85% interrupts.CPU61.NMI:Non-maskable_interrupts
4940 ± 28% -61.3% 1914 ± 85% interrupts.CPU61.PMI:Performance_monitoring_interrupts
485.00 ± 13% +19.9% 581.33 ± 6% interrupts.CPU67.CAL:Function_call_interrupts
534.00 +49.1% 796.33 ± 58% interrupts.CPU69.CAL:Function_call_interrupts
2001 ± 64% -61.2% 776.83 ± 43% interrupts.CPU7.CAL:Function_call_interrupts
154.00 ± 11% +2162.1% 3483 ± 72% interrupts.CPU76.NMI:Non-maskable_interrupts
154.00 ± 11% +2162.1% 3483 ± 72% interrupts.CPU76.PMI:Performance_monitoring_interrupts
218.33 ± 90% +972.1% 2340 ± 90% interrupts.CPU8.NMI:Non-maskable_interrupts
218.33 ± 90% +972.1% 2340 ± 90% interrupts.CPU8.PMI:Performance_monitoring_interrupts
584.67 ± 6% +30.8% 764.83 ± 20% interrupts.CPU88.CAL:Function_call_interrupts
***************************************************************************************************
lkp-knl-f1: 256 threads Intel(R) Genuine Intel(R) CPU 0000 @ 1.30GHz with 112G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/lkp-knl-f1/dgram_pipe/aim9/300s/0xffff0190
commit:
v5.12-rc3
83b62687a0 ("workqueue/tracing: Copy workqueue name to buffer in trace event")
v5.12-rc3 83b62687a05205847d627f29126
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:5 -40% :6 stderr.Events_disabled
2:5 -40% :6 stderr.Events_enabled
2:5 -40% :6 stderr.[perf_record:Captured_and_wrote#MB/tmp/lkp/perf-sched.data(#samples)]
2:5 -40% :6 stderr.[perf_record:Woken_up#times_to_write_data]
2:5 -40% :6 stderr.has_stderr
1:5 4% 1:6 perf-profile.children.cycles-pp.error_return
2:5 10% 2:6 perf-profile.children.cycles-pp.error_entry
0:5 3% 0:6 perf-profile.self.cycles-pp.error_return
2:5 10% 2:6 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
16564 ±119% +436.8% 88920 ± 53% sched_debug.cfs_rq:/.spread0.avg
0.18 ± 60% -54.6% 0.08 ± 8% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.13 ± 51% -54.2% 0.06 ± 11% perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
108.00 ± 24% +28.1% 138.33 ± 11% proc-vmstat.nr_active_file
108.00 ± 24% +28.1% 138.33 ± 11% proc-vmstat.nr_zone_active_file
1451 ± 3% +23.1% 1786 ± 17% slabinfo.dmaengine-unmap-16.active_objs
1451 ± 3% +24.2% 1802 ± 18% slabinfo.dmaengine-unmap-16.num_objs
20416 ± 31% +33.1% 27166 softirqs.CPU210.SCHED
22528 ± 23% +18.1% 26606 ± 3% softirqs.CPU27.SCHED
12.98 +0.5 13.52 perf-stat.i.cache-miss-rate%
1.047e+08 -1.8% 1.028e+08 perf-stat.i.cache-references
2.31 ± 2% -0.1 2.16 ± 2% perf-stat.i.iTLB-load-miss-rate%
59534460 -5.0% 56534759 perf-stat.i.iTLB-load-misses
43.43 ± 2% +6.7% 46.33 ± 2% perf-stat.i.instructions-per-iTLB-miss
12.99 +0.5 13.52 perf-stat.overall.cache-miss-rate%
2.24 ± 2% -0.1 2.10 ± 2% perf-stat.overall.iTLB-load-miss-rate%
43.55 ± 2% +6.7% 46.46 ± 2% perf-stat.overall.instructions-per-iTLB-miss
1.044e+08 -1.8% 1.025e+08 perf-stat.ps.cache-references
59329569 -5.0% 56341300 perf-stat.ps.iTLB-load-misses
345.20 ± 4% -31.7% 235.67 ± 31% interrupts.CPU11.NMI:Non-maskable_interrupts
345.20 ± 4% -31.7% 235.67 ± 31% interrupts.CPU11.PMI:Performance_monitoring_interrupts
35.00 ± 58% +197.6% 104.17 ± 71% interrupts.CPU12.RES:Rescheduling_interrupts
248.60 ± 30% +54.1% 383.17 ± 33% interrupts.CPU126.NMI:Non-maskable_interrupts
248.60 ± 30% +54.1% 383.17 ± 33% interrupts.CPU126.PMI:Performance_monitoring_interrupts
15.00 ± 66% +216.7% 47.50 ± 66% interrupts.CPU13.RES:Rescheduling_interrupts
1208 ± 3% +11.5% 1347 ± 12% interrupts.CPU169.CAL:Function_call_interrupts
289.80 ± 27% -29.4% 204.67 ± 2% interrupts.CPU204.NMI:Non-maskable_interrupts
289.80 ± 27% -29.4% 204.67 ± 2% interrupts.CPU204.PMI:Performance_monitoring_interrupts
57.20 ± 16% +55.3% 88.83 ± 22% interrupts.CPU22.RES:Rescheduling_interrupts
43.00 ± 63% +722.1% 353.50 ± 11% interrupts.CPU255.TLB:TLB_shootdowns
44.60 ± 51% +75.6% 78.33 ± 17% interrupts.CPU6.RES:Rescheduling_interrupts
12.20 ± 27% +214.2% 38.33 ± 78% interrupts.CPU76.RES:Rescheduling_interrupts
1700 ± 4% -16.4% 1422 ± 10% interrupts.CPU8.CAL:Function_call_interrupts
0.71 ± 5% -0.1 0.63 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_governor_latency_req.menu_select.do_idle.cpu_startup_entry.start_secondary
1.52 ± 4% +0.1 1.66 ± 3% perf-profile.calltrace.cycles-pp.update_rq_clock.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.14 ± 7% +0.3 1.43 ± 12% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.66 ± 3% +0.4 2.05 ± 4% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.62 ± 9% +0.4 1.02 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.29 ± 4% +0.4 1.71 ± 13% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
2.38 ± 4% +0.4 2.81 ± 9% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
11.17 ± 3% +1.6 12.74 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
3.60 ± 2% +1.6 5.23 ± 6% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
8.96 ± 5% +1.8 10.74 ± 3% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
9.44 ± 5% +1.8 11.23 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
21.13 ± 2% +2.1 23.28 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
28.25 ± 3% +2.7 30.95 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
29.70 ± 3% +2.7 32.43 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.48 ± 8% -0.2 0.29 ± 5% perf-profile.children.cycles-pp.hrtimer_forward
0.56 ± 2% -0.1 0.44 ± 5% perf-profile.children.cycles-pp.rcu_core
1.52 ± 3% -0.1 1.42 ± 4% perf-profile.children.cycles-pp.load_balance
0.40 ± 4% -0.1 0.31 ± 4% perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.73 ± 5% -0.1 0.66 ± 3% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.39 ± 7% -0.1 0.32 ± 3% perf-profile.children.cycles-pp.get_cpu_device
0.08 ± 11% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.cpuidle_get_cpu_driver
0.15 ± 9% -0.0 0.11 ± 14% perf-profile.children.cycles-pp.arch_cpu_idle_exit
0.05 ± 7% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.sched_clock_tick
0.05 ± 52% +0.0 0.07 ± 14% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.26 ± 5% +0.0 0.30 ± 5% perf-profile.children.cycles-pp.trigger_load_balance
0.17 ± 7% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.cpuidle_not_available
1.01 ± 4% +0.1 1.07 ± 4% perf-profile.children.cycles-pp.enqueue_hrtimer
0.78 ± 6% +0.1 0.86 ± 2% perf-profile.children.cycles-pp.__x86_retpoline_rax
0.39 ± 6% +0.1 0.52 ± 3% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
1.66 ± 3% +0.1 1.81 ± 2% perf-profile.children.cycles-pp.update_rq_clock
1.42 ± 5% +0.4 1.81 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
1.68 ± 3% +0.4 2.08 ± 4% perf-profile.children.cycles-pp.get_next_timer_interrupt
2.44 ± 4% +0.5 2.91 ± 9% perf-profile.children.cycles-pp.clockevents_program_event
4.39 ± 4% +0.6 4.96 ± 7% perf-profile.children.cycles-pp.ktime_get
11.55 ± 3% +1.5 13.06 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
3.86 +1.6 5.48 ± 6% perf-profile.children.cycles-pp.scheduler_tick
9.31 ± 5% +1.7 11.03 ± 3% perf-profile.children.cycles-pp.update_process_times
9.78 ± 5% +1.7 11.51 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
21.67 ± 2% +2.1 23.75 perf-profile.children.cycles-pp.__hrtimer_run_queues
28.82 ± 2% +2.6 31.45 perf-profile.children.cycles-pp.hrtimer_interrupt
30.23 ± 2% +2.7 32.89 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.47 ± 7% -0.2 0.28 ± 5% perf-profile.self.cycles-pp.hrtimer_forward
0.79 ± 8% -0.2 0.61 ± 6% perf-profile.self.cycles-pp.tick_nohz_next_event
0.24 ± 4% -0.2 0.09 ± 12% perf-profile.self.cycles-pp.rcu_core
0.39 ± 4% -0.1 0.30 ± 5% perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.39 ± 6% -0.1 0.32 ± 3% perf-profile.self.cycles-pp.get_cpu_device
0.09 ± 8% -0.1 0.03 ± 99% perf-profile.self.cycles-pp.cpumask_next_and
0.11 ± 11% -0.0 0.07 ± 18% perf-profile.self.cycles-pp.arch_cpu_idle_exit
0.15 ± 7% +0.0 0.17 ± 9% perf-profile.self.cycles-pp.sched_idle_set_state
0.05 ± 52% +0.0 0.07 ± 14% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.02 ±122% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.sched_clock_tick
0.15 ± 7% +0.1 0.20 ± 4% perf-profile.self.cycles-pp.cpuidle_not_available
0.31 ± 8% +0.1 0.45 ± 3% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
1.38 ± 5% +0.4 1.78 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
3.21 ± 3% +0.4 3.66 ± 10% perf-profile.self.cycles-pp.ktime_get
0.81 ± 21% +0.5 1.35 ± 6% perf-profile.self.cycles-pp.update_process_times
0.84 ± 6% +1.5 2.32 ± 14% perf-profile.self.cycles-pp.scheduler_tick
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang