Greetings,
FYI, we noticed a 24.4% improvement in stress-ng.tmpfs.ops_per_sec due to commit:
commit: 07ca760673088f262da57ff42c15558688565aa2 ("mm/munlock: maintain page->mlock_count while unevictable")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with following parameters:
nr_threads: 100%
testtime: 60s
class: memory
test: tmpfs
cpufreq_governor: performance
ucode: 0xd000331
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
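For reference, the %change column in the comparison tables below is the plain relative delta between the two commits' means; a minimal sketch of the arithmetic, using the values from the stress-ng.tmpfs.ops_per_sec row:

```python
def pct_change(base: float, patched: float) -> float:
    """Relative change, in percent, from the parent commit's mean
    to the patched commit's mean, as reported in the tables."""
    return (patched - base) / base * 100.0

# stress-ng.tmpfs.ops_per_sec: 2955 (b109b87050) -> 3677 (07ca760673)
print(f"{pct_change(2955, 3677):+.1f}%")  # the reported +24.4%
```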
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
memory/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/tmpfs/stress-ng/60s/0xd000331
commit:
b109b87050 ("mm/munlock: replace clear_page_mlock() by final clearance")
07ca760673 ("mm/munlock: maintain page->mlock_count while unevictable")
b109b87050df5438 07ca760673088f262da57ff42c1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
39.44 ± 5% -18.7% 32.08 ± 3% stress-ng.time.elapsed_time
39.44 ± 5% -18.7% 32.08 ± 3% stress-ng.time.elapsed_time.max
10529 ± 6% -22.7% 8134 ± 5% stress-ng.time.involuntary_context_switches
9988 ± 2% -3.4% 9646 stress-ng.time.percent_of_cpu_this_job_got
2663 ± 8% -33.3% 1776 ± 5% stress-ng.time.system_time
1276 +3.2% 1317 stress-ng.time.user_time
2955 ± 5% +24.4% 3677 ± 2% stress-ng.tmpfs.ops_per_sec
630625 ± 6% -13.9% 543204 ± 11% numa-numastat.node1.numa_hit
12458183 ± 9% -29.8% 8745870 ± 3% turbostat.IRQ
22.83 ± 6% +25.5% 28.67 ± 3% vmstat.cpu.us
3825 ± 2% +15.7% 4424 vmstat.system.cs
291800 ± 4% -15.1% 247782 vmstat.system.in
160426 +11.2% 178462 ± 2% meminfo.Active
160426 +11.2% 178462 ± 2% meminfo.Active(anon)
235963 ± 5% -16.1% 197931 ± 2% meminfo.Mapped
117810 ± 6% -24.1% 89437 ± 3% meminfo.Mlocked
23.66 ± 5% +3.1 26.73 ± 3% mpstat.cpu.all.idle%
0.02 ± 27% +0.0 0.03 ± 18% mpstat.cpu.all.soft%
50.99 ± 4% -9.5 41.53 ± 3% mpstat.cpu.all.sys%
24.59 ± 5% +6.3 30.87 ± 3% mpstat.cpu.all.usr%
78820 ± 3% +14.4% 90181 ± 3% numa-meminfo.node0.Active
78820 ± 3% +14.4% 90181 ± 3% numa-meminfo.node0.Active(anon)
59902 ± 10% -23.5% 45810 ± 4% numa-meminfo.node0.Mlocked
152030 ± 18% -25.1% 113836 ± 12% numa-meminfo.node1.Inactive
152030 ± 18% -25.1% 113836 ± 12% numa-meminfo.node1.Inactive(anon)
57489 ± 10% -26.6% 42196 ± 7% numa-meminfo.node1.Mlocked
233645 ± 5% -14.0% 201029 ± 3% numa-meminfo.node1.Shmem
19568 ± 2% +14.8% 22461 ± 3% numa-vmstat.node0.nr_active_anon
14599 ± 9% -23.2% 11217 ± 7% numa-vmstat.node0.nr_mlock
19562 ± 2% +14.8% 22462 ± 3% numa-vmstat.node0.nr_zone_active_anon
37654 ± 18% -24.1% 28589 ± 10% numa-vmstat.node1.nr_inactive_anon
14616 ± 10% -27.5% 10599 ± 10% numa-vmstat.node1.nr_mlock
58341 ± 5% -14.1% 50124 ± 3% numa-vmstat.node1.nr_shmem
37656 ± 18% -24.1% 28586 ± 10% numa-vmstat.node1.nr_zone_inactive_anon
39985 +10.5% 44174 ± 2% proc-vmstat.nr_active_anon
725611 -1.3% 716100 proc-vmstat.nr_file_pages
121814 ± 2% -6.2% 114239 proc-vmstat.nr_inactive_anon
59502 ± 5% -16.7% 49539 proc-vmstat.nr_mapped
29485 ± 6% -24.6% 22222 ± 3% proc-vmstat.nr_mlock
115710 ± 3% -8.2% 106199 proc-vmstat.nr_shmem
639255 -1.1% 632112 proc-vmstat.nr_unevictable
39985 +10.5% 44174 ± 2% proc-vmstat.nr_zone_active_anon
121814 ± 2% -6.2% 114239 proc-vmstat.nr_zone_inactive_anon
639256 -1.1% 632112 proc-vmstat.nr_zone_unevictable
1233200 -7.1% 1145498 proc-vmstat.numa_hit
1117527 -7.9% 1029792 proc-vmstat.numa_local
1233229 -7.1% 1145564 proc-vmstat.pgalloc_normal
1000156 -4.7% 952725 proc-vmstat.pgfree
17338 -7.4% 16058 ± 3% proc-vmstat.pgreuse
7.718e+10 ± 4% +20.3% 9.289e+10 ± 2% perf-stat.i.branch-instructions
2.104e+08 ± 5% +19.5% 2.515e+08 ± 3% perf-stat.i.branch-misses
1.232e+08 ± 2% +13.2% 1.395e+08 ± 2% perf-stat.i.cache-misses
5.521e+08 ± 6% +13.8% 6.284e+08 ± 3% perf-stat.i.cache-references
3388 ± 2% +15.3% 3905 perf-stat.i.context-switches
1.04 ± 4% -19.6% 0.83 ± 2% perf-stat.i.cpi
3.32e+11 ± 2% -4.3% 3.176e+11 perf-stat.i.cpu-cycles
281.54 ± 4% +11.5% 313.86 ± 3% perf-stat.i.cpu-migrations
2740 -8.8% 2497 ± 6% perf-stat.i.cycles-between-cache-misses
3.883e+10 ± 4% +17.8% 4.575e+10 ± 2% perf-stat.i.dTLB-loads
37861529 ± 5% +22.5% 46399080 ± 2% perf-stat.i.dTLB-store-misses
2.93e+10 ± 5% +22.1% 3.577e+10 ± 2% perf-stat.i.dTLB-stores
3.177e+11 ± 4% +20.4% 3.824e+11 ± 2% perf-stat.i.instructions
1.01 ± 5% +24.4% 1.25 ± 2% perf-stat.i.ipc
1385 ± 6% +28.2% 1776 ± 4% perf-stat.i.major-faults
2.59 ± 2% -4.3% 2.48 perf-stat.i.metric.GHz
670.12 +17.6% 788.18 perf-stat.i.metric.K/sec
1138 ± 4% +16.7% 1329 ± 2% perf-stat.i.metric.M/sec
4303675 ± 5% +22.3% 5264697 ± 2% perf-stat.i.minor-faults
4223176 ± 9% +21.5% 5132933 ± 5% perf-stat.i.node-loads
12619285 ± 9% +26.6% 15970682 ± 5% perf-stat.i.node-stores
4305061 ± 5% +22.3% 5266473 ± 2% perf-stat.i.page-faults
1.74 ± 2% -5.4% 1.64 perf-stat.overall.MPKI
1.05 ± 5% -20.6% 0.83 ± 3% perf-stat.overall.cpi
2694 ± 2% -15.5% 2276 perf-stat.overall.cycles-between-cache-misses
0.96 ± 5% +25.7% 1.20 ± 3% perf-stat.overall.ipc
7.499e+10 ± 4% +20.1% 9.003e+10 ± 2% perf-stat.ps.branch-instructions
2.044e+08 ± 5% +19.2% 2.437e+08 ± 3% perf-stat.ps.branch-misses
1.197e+08 ± 2% +13.0% 1.352e+08 ± 2% perf-stat.ps.cache-misses
5.364e+08 ± 6% +13.5% 6.09e+08 ± 3% perf-stat.ps.cache-references
3296 ± 2% +14.8% 3784 perf-stat.ps.context-switches
3.223e+11 ± 2% -4.5% 3.078e+11 perf-stat.ps.cpu-cycles
273.70 ± 4% +11.1% 304.20 ± 3% perf-stat.ps.cpu-migrations
3.773e+10 ± 4% +17.5% 4.434e+10 ± 2% perf-stat.ps.dTLB-loads
36783176 ± 5% +22.3% 44970404 ± 2% perf-stat.ps.dTLB-store-misses
2.847e+10 ± 5% +21.8% 3.467e+10 ± 2% perf-stat.ps.dTLB-stores
3.087e+11 ± 4% +20.1% 3.706e+11 ± 2% perf-stat.ps.instructions
1347 ± 6% +27.7% 1721 ± 4% perf-stat.ps.major-faults
4181461 ± 5% +22.0% 5102628 ± 2% perf-stat.ps.minor-faults
4097306 ± 9% +21.4% 4973600 ± 5% perf-stat.ps.node-loads
12251815 ± 9% +26.3% 15476164 ± 5% perf-stat.ps.node-stores
4182809 ± 5% +22.0% 5104350 ± 2% perf-stat.ps.page-faults
1.243e+13 -1.7% 1.222e+13 perf-stat.total.instructions
38.79 ± 13% -33.6 5.18 ±101% perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
39.23 ± 13% -33.6 5.65 ± 96% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
39.10 ± 13% -33.6 5.54 ± 97% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
40.08 ± 12% -33.5 6.58 ± 88% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
42.54 ± 10% -33.2 9.33 ± 75% perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
42.84 ± 10% -33.2 9.65 ± 73% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap.stress_oomable_child
42.75 ± 10% -33.2 9.57 ± 74% perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
42.77 ± 10% -33.2 9.60 ± 74% perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap.stress_oomable_child
42.96 ± 10% -33.1 9.81 ± 73% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap.stress_oomable_child
43.23 ± 10% -33.1 10.12 ± 72% perf-profile.calltrace.cycles-pp.__munmap.stress_oomable_child
79.55 -30.9 48.66 ± 51% perf-profile.calltrace.cycles-pp.stress_oomable_child
22.84 ± 14% -21.7 1.14 ±162% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
22.81 ± 14% -21.7 1.12 ±163% perf-profile.calltrace.cycles-pp.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range.unmap_page_range
23.00 ± 14% -21.7 1.33 ±142% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region
13.86 ± 15% -13.9 0.00 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range
13.83 ± 15% -13.8 0.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add.lru_add_drain_cpu.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu
13.62 ± 15% -13.6 0.00 perf-profile.calltrace.cycles-pp.isolate_lru_page.munlock_page.zap_pte_range.unmap_page_range.unmap_vmas
13.62 ± 15% -13.6 0.00 perf-profile.calltrace.cycles-pp.isolate_lru_page.mlock_page.do_set_pte.filemap_map_pages.do_fault
15.07 ± 14% -13.5 1.59 ±168% perf-profile.calltrace.cycles-pp.handle_mm_fault.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
15.07 ± 14% -13.5 1.59 ±168% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__get_user_pages.populate_vma_page_range.__mm_populate
15.06 ± 14% -13.5 1.59 ±167% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages.populate_vma_page_range
15.06 ± 14% -13.5 1.59 ±167% perf-profile.calltrace.cycles-pp.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages
15.18 ± 14% -13.5 1.71 ±155% perf-profile.calltrace.cycles-pp.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.18 ± 14% -13.5 1.71 ±155% perf-profile.calltrace.cycles-pp.populate_vma_page_range.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
15.01 ± 14% -13.5 1.54 ±169% perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
15.18 ± 14% -13.5 1.70 ±155% perf-profile.calltrace.cycles-pp.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
13.43 ± 15% -13.4 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.isolate_lru_page.munlock_page.zap_pte_range.unmap_page_range
13.40 ± 15% -13.4 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.isolate_lru_page.mlock_page.do_set_pte.filemap_map_pages
13.38 ± 15% -13.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.isolate_lru_page.munlock_page.zap_pte_range
13.38 ± 15% -13.4 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.isolate_lru_page.mlock_page.do_set_pte
13.35 ± 15% -13.3 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__pagevec_lru_add.lru_add_drain_cpu.lru_add_drain.free_pages_and_swap_cache
13.30 ± 15% -13.3 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.lru_add_drain_cpu.lru_add_drain
13.28 ± 15% -13.3 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.isolate_lru_page.mlock_page
13.26 ± 15% -13.3 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.isolate_lru_page.munlock_page
13.17 ± 16% -13.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.lru_add_drain_cpu
17.87 ± 9% -13.0 4.91 ± 76% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
17.97 ± 9% -12.9 5.02 ± 74% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
18.02 ± 9% -12.9 5.09 ± 74% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
18.01 ± 9% -12.9 5.08 ± 74% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
18.15 ± 9% -12.9 5.25 ± 73% perf-profile.calltrace.cycles-pp.__mmap
13.83 ± 15% -12.2 1.60 ±169% perf-profile.calltrace.cycles-pp.munlock_page.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region
13.65 ± 15% -12.1 1.51 ±169% perf-profile.calltrace.cycles-pp.mlock_page.do_set_pte.filemap_map_pages.do_fault.__handle_mm_fault
8.93 ± 14% -7.8 1.09 ±164% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu.zap_pte_range
8.53 ± 15% -7.6 0.93 ±170% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache.tlb_flush_mmu
8.47 ± 15% -7.6 0.91 ±171% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.lru_add_drain.free_pages_and_swap_cache
8.38 ± 15% -7.5 0.88 ±172% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.lru_add_drain
0.00 +2.9 2.95 ± 92% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.10 ±223% +3.9 3.98 ± 97% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.13 ±223% +5.9 6.02 ± 87% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.14 ±223% +6.0 6.18 ± 88% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.20 ±223% +9.5 9.66 ± 85% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.22 ±223% +12.0 12.18 ± 84% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.72 ±114% +21.5 22.26 ± 59% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.34 ± 87% +33.8 35.10 ± 68% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
1.39 ± 87% +34.5 35.90 ± 67% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.53 ± 89% +38.5 39.98 ± 70% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.53 ± 89% +38.5 40.04 ± 70% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
1.53 ± 89% +38.5 40.04 ± 70% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
1.54 ± 89% +38.8 40.32 ± 70% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
49.06 ± 15% -45.4 3.62 ±171% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
63.53 ± 9% -42.6 20.90 ± 55% perf-profile.children.cycles-pp.do_syscall_64
63.70 ± 9% -42.6 21.12 ± 55% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
38.83 ± 13% -33.6 5.24 ±100% perf-profile.children.cycles-pp.zap_pte_range
39.24 ± 13% -33.6 5.69 ± 95% perf-profile.children.cycles-pp.unmap_vmas
39.12 ± 13% -33.5 5.58 ± 96% perf-profile.children.cycles-pp.unmap_page_range
40.10 ± 12% -33.5 6.60 ± 88% perf-profile.children.cycles-pp.unmap_region
42.57 ± 10% -33.2 9.39 ± 74% perf-profile.children.cycles-pp.__do_munmap
42.76 ± 10% -33.2 9.60 ± 74% perf-profile.children.cycles-pp.__vm_munmap
42.77 ± 10% -33.2 9.61 ± 74% perf-profile.children.cycles-pp.__x64_sys_munmap
43.32 ± 10% -33.1 10.21 ± 72% perf-profile.children.cycles-pp.__munmap
79.55 -30.9 48.66 ± 51% perf-profile.children.cycles-pp.stress_oomable_child
27.28 ± 15% -27.3 0.00 perf-profile.children.cycles-pp.isolate_lru_page
26.84 ± 15% -24.0 2.82 ±169% perf-profile.children.cycles-pp.folio_lruvec_lock_irq
26.77 ± 15% -23.9 2.87 ±165% perf-profile.children.cycles-pp._raw_spin_lock_irq
22.82 ± 15% -21.9 0.96 ±166% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
22.90 ± 14% -21.6 1.30 ±136% perf-profile.children.cycles-pp.lru_add_drain
22.85 ± 14% -21.6 1.30 ±136% perf-profile.children.cycles-pp.free_pages_and_swap_cache
23.04 ± 14% -21.5 1.53 ±118% perf-profile.children.cycles-pp.tlb_flush_mmu
22.76 ± 15% -21.5 1.25 ±118% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
15.03 ± 14% -15.0 0.00 perf-profile.children.cycles-pp.__pagevec_lru_add
13.92 ± 14% -13.8 0.08 ± 55% perf-profile.children.cycles-pp.lru_add_drain_cpu
15.37 ± 14% -13.5 1.90 ±141% perf-profile.children.cycles-pp.do_set_pte
15.18 ± 14% -13.4 1.79 ±145% perf-profile.children.cycles-pp.__mm_populate
15.18 ± 14% -13.4 1.79 ±145% perf-profile.children.cycles-pp.populate_vma_page_range
15.18 ± 14% -13.4 1.80 ±143% perf-profile.children.cycles-pp.__get_user_pages
16.02 ± 13% -13.3 2.68 ±108% perf-profile.children.cycles-pp.do_fault
15.06 ± 14% -13.3 1.76 ±145% perf-profile.children.cycles-pp.filemap_map_pages
16.20 ± 12% -13.2 2.98 ± 97% perf-profile.children.cycles-pp.__handle_mm_fault
16.42 ± 12% -13.2 3.22 ± 92% perf-profile.children.cycles-pp.handle_mm_fault
17.88 ± 9% -12.9 4.96 ± 74% perf-profile.children.cycles-pp.vm_mmap_pgoff
17.97 ± 9% -12.9 5.06 ± 74% perf-profile.children.cycles-pp.ksys_mmap_pgoff
18.20 ± 9% -12.9 5.32 ± 72% perf-profile.children.cycles-pp.__mmap
13.83 ± 15% -12.1 1.75 ±150% perf-profile.children.cycles-pp.munlock_page
13.65 ± 15% -12.1 1.60 ±157% perf-profile.children.cycles-pp.mlock_page
8.94 ± 14% -7.8 1.18 ±148% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.23 ± 4% -0.1 0.10 ± 75% perf-profile.children.cycles-pp.__list_del_entry_valid
0.06 ± 16% +0.0 0.10 ± 20% perf-profile.children.cycles-pp.fput_many
0.01 ±223% +0.1 0.10 ± 52% perf-profile.children.cycles-pp.__fput
0.01 ±223% +0.1 0.10 ± 49% perf-profile.children.cycles-pp.task_work_run
0.02 ±141% +0.1 0.12 ± 37% perf-profile.children.cycles-pp._find_next_bit
0.10 ± 20% +0.1 0.22 ± 40% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.15 ± 18% +0.1 0.28 ± 35% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.00 +0.1 0.14 ± 57% perf-profile.children.cycles-pp.rcu_core
0.00 +0.1 0.14 ± 92% perf-profile.children.cycles-pp.exit_mmap
0.00 +0.1 0.14 ± 91% perf-profile.children.cycles-pp.mmput
0.00 +0.2 0.15 ± 82% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.00 +0.2 0.19 ±122% perf-profile.children.cycles-pp.__schedule
0.00 +0.2 0.19 ± 48% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.2 0.20 ±100% perf-profile.children.cycles-pp.do_group_exit
0.00 +0.2 0.20 ±100% perf-profile.children.cycles-pp.do_exit
0.00 +0.2 0.22 ± 62% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.22 ± 53% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.01 ±223% +0.3 0.28 ± 85% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.3 0.28 ± 63% perf-profile.children.cycles-pp.start_kernel
0.01 ±223% +0.3 0.29 ± 84% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.3 0.28 ± 66% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.3 0.31 ± 83% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.3 0.32 ± 81% perf-profile.children.cycles-pp.run_rebalance_domains
0.12 ± 22% +0.3 0.44 ± 57% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.3 0.33 ± 77% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.01 ±223% +0.3 0.34 ± 94% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.3 0.34 ± 48% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.01 ±223% +0.3 0.35 ± 91% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.3 0.34 ±123% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.3 0.35 ±107% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.4 0.36 ± 87% perf-profile.children.cycles-pp.rcu_idle_exit
0.01 ±223% +0.4 0.38 ±103% perf-profile.children.cycles-pp.timerqueue_del
0.01 ±223% +0.4 0.43 ± 92% perf-profile.children.cycles-pp.__remove_hrtimer
0.01 ±223% +0.5 0.47 ±104% perf-profile.children.cycles-pp.irqtime_account_irq
0.01 ±223% +0.5 0.50 ± 80% perf-profile.children.cycles-pp.load_balance
0.01 ±223% +0.5 0.53 ± 95% perf-profile.children.cycles-pp.native_sched_clock
0.01 ±223% +0.6 0.62 ± 89% perf-profile.children.cycles-pp.lapic_next_deadline
0.01 ±223% +0.6 0.63 ± 93% perf-profile.children.cycles-pp.sched_clock_cpu
0.01 ±223% +0.6 0.65 ± 71% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.01 ±223% +0.6 0.66 ± 66% perf-profile.children.cycles-pp.rebalance_domains
0.01 ±223% +0.7 0.66 ±100% perf-profile.children.cycles-pp.read_tsc
0.47 ± 18% +0.7 1.19 ± 11% perf-profile.children.cycles-pp.native_irq_return_iret
0.02 ±223% +0.8 0.86 ± 62% perf-profile.children.cycles-pp.tick_nohz_next_event
0.02 ±223% +0.9 0.94 ± 99% perf-profile.children.cycles-pp.tick_irq_enter
0.02 ±223% +1.0 0.99 ±101% perf-profile.children.cycles-pp.irq_enter_rcu
0.13 ± 43% +1.0 1.11 ± 90% perf-profile.children.cycles-pp.scheduler_tick
0.08 ± 36% +1.1 1.13 ± 65% perf-profile.children.cycles-pp.clockevents_program_event
0.02 ±223% +1.3 1.30 ± 82% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.04 ±124% +1.3 1.34 ± 66% perf-profile.children.cycles-pp.__softirqentry_text_start
0.08 ± 32% +1.4 1.50 ± 54% perf-profile.children.cycles-pp.ktime_get
0.24 ± 55% +1.6 1.84 ± 44% perf-profile.children.cycles-pp.kthread
0.08 ± 63% +1.6 1.68 ± 69% perf-profile.children.cycles-pp.irq_exit_rcu
0.24 ± 55% +1.6 1.84 ± 44% perf-profile.children.cycles-pp.ret_from_fork
0.18 ± 61% +1.8 2.02 ± 95% perf-profile.children.cycles-pp.update_process_times
0.20 ± 64% +2.0 2.18 ± 97% perf-profile.children.cycles-pp.tick_sched_handle
0.21 ± 64% +2.2 2.43 ± 93% perf-profile.children.cycles-pp.tick_sched_timer
0.09 ±129% +2.9 2.99 ± 92% perf-profile.children.cycles-pp.menu_select
0.30 ± 66% +3.8 4.12 ± 93% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.43 ± 60% +5.8 6.20 ± 84% perf-profile.children.cycles-pp.hrtimer_interrupt
0.44 ± 62% +5.9 6.36 ± 85% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.60 ± 64% +9.3 9.91 ± 83% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.71 ± 60% +10.7 11.44 ± 81% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.88 ± 82% +21.6 22.48 ± 59% perf-profile.children.cycles-pp.intel_idle
1.40 ± 87% +34.7 36.14 ± 67% perf-profile.children.cycles-pp.cpuidle_enter_state
1.40 ± 87% +34.8 36.18 ± 67% perf-profile.children.cycles-pp.cpuidle_enter
1.53 ± 89% +38.5 40.04 ± 70% perf-profile.children.cycles-pp.start_secondary
1.54 ± 89% +38.8 40.32 ± 70% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
1.54 ± 89% +38.8 40.32 ± 70% perf-profile.children.cycles-pp.cpu_startup_entry
1.54 ± 89% +38.8 40.32 ± 70% perf-profile.children.cycles-pp.do_idle
49.06 ± 15% -45.4 3.62 ±171% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.23 ± 4% -0.1 0.10 ± 75% perf-profile.self.cycles-pp.__list_del_entry_valid
0.27 ± 8% -0.1 0.14 ± 37% perf-profile.self.cycles-pp.release_pages
0.01 ±223% +0.1 0.11 ± 28% perf-profile.self.cycles-pp._find_next_bit
0.07 ± 20% +0.1 0.18 ± 39% perf-profile.self.cycles-pp.error_entry
0.00 +0.2 0.18 ± 67% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.2 0.19 ± 48% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.2 0.21 ± 71% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.00 +0.2 0.21 ± 76% perf-profile.self.cycles-pp.tick_nohz_next_event
0.00 +0.2 0.22 ± 53% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.2 0.23 ±121% perf-profile.self.cycles-pp.update_process_times
0.00 +0.2 0.24 ±100% perf-profile.self.cycles-pp.update_sd_lb_stats
0.01 ±223% +0.3 0.27 ± 82% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.3 0.28 ± 67% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.11 ± 24% +0.3 0.41 ± 57% perf-profile.self.cycles-pp._raw_spin_lock
0.01 ±223% +0.4 0.42 ±118% perf-profile.self.cycles-pp.do_idle
0.01 ±223% +0.5 0.51 ± 95% perf-profile.self.cycles-pp.native_sched_clock
0.01 ±223% +0.6 0.62 ± 89% perf-profile.self.cycles-pp.lapic_next_deadline
0.01 ±223% +0.6 0.64 ±100% perf-profile.self.cycles-pp.read_tsc
0.47 ± 18% +0.7 1.19 ± 11% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 33% +0.9 0.95 ± 35% perf-profile.self.cycles-pp.ktime_get
0.03 ±223% +1.4 1.39 ± 98% perf-profile.self.cycles-pp.menu_select
0.04 ±171% +2.0 2.02 ± 72% perf-profile.self.cycles-pp.cpuidle_enter_state
0.88 ± 82% +21.6 22.48 ± 59% perf-profile.self.cycles-pp.intel_idle
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation
Thanks,
Oliver Sang