Hello,
kernel test robot noticed a 39.9% improvement of vm-scalability.throughput on:
commit: 3db82b9374ca921b8b820a75e83809d5c4133d8f ("mm/memory: allow pte_offset_map[_lock]() to fail")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
testcase: vm-scalability
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
parameters:
runtime: 300s
size: 8T
test: anon-cow-seq-mt
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
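For context on the change under test: after this series, pte_offset_map_lock() may return NULL when the page table it would map has been freed or replaced concurrently, and callers are expected to handle that case. Below is a minimal sketch of that calling convention (illustrative only, not code taken from the patch; mm, pmd and addr stand for the caller's usual fault-handling context):

	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return 0;	/* page table vanished under us; let the caller retry */
	/* ... inspect or modify *pte while holding the PTE lock ... */
	pte_unmap_unlock(pte, ptl);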
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/debian-11.1-x86_64-20220510.cgz/300s/8T/lkp-csl-2sp3/anon-cow-seq-mt/vm-scalability
commit:
895f5ee464 ("mm/khugepaged: allow pte_offset_map[_lock]() to fail")
3db82b9374 ("mm/memory: allow pte_offset_map[_lock]() to fail")
895f5ee464cc90a5 3db82b9374ca921b8b820a75e83
---------------- ---------------------------
old value ±%stddev %change new value ±%stddev
44083 +39.9% 61676 vm-scalability.median
4232016 +39.9% 5920976 vm-scalability.throughput
2.183e+08 +58.1% 3.451e+08 vm-scalability.time.minor_page_faults
26859 -6.5% 25105 vm-scalability.time.system_time
900.08 +185.0% 2565 vm-scalability.time.user_time
1.271e+09 +39.9% 1.778e+09 vm-scalability.workload
1.36 ± 4% +0.4 1.78 ± 3% mpstat.cpu.all.irq%
3.16 +5.7 8.89 mpstat.cpu.all.usr%
1742 ± 3% -8.9% 1586 ± 2% vmstat.system.cs
811989 +50.8% 1224248 vmstat.system.in
4.639e+08 +54.1% 7.151e+08 turbostat.IRQ
272.15 +2.2% 278.16 turbostat.PkgWatt
38.54 +4.1% 40.11 turbostat.RAMWatt
1.028e+08 +26.1% 1.297e+08 ± 2% numa-numastat.node0.local_node
1.029e+08 +26.1% 1.298e+08 ± 2% numa-numastat.node0.numa_hit
1.032e+08 +22.9% 1.268e+08 numa-numastat.node1.local_node
1.032e+08 +22.9% 1.269e+08 numa-numastat.node1.numa_hit
101.67 ± 7% +40.0% 142.33 ± 11% perf-c2c.DRAM.local
2203 ± 5% +13.1% 2491 ± 7% perf-c2c.DRAM.remote
2821 ± 5% +40.8% 3973 ± 7% perf-c2c.HITM.local
1631 ± 5% +15.5% 1883 ± 5% perf-c2c.HITM.remote
9071420 ±119% +223.0% 29299562 ± 36% numa-meminfo.node0.AnonPages
12743852 ± 77% +151.1% 32001756 ± 29% numa-meminfo.node0.AnonPages.max
9095440 ±119% +222.6% 29344203 ± 35% numa-meminfo.node0.Inactive
9094274 ±119% +222.7% 29343307 ± 35% numa-meminfo.node0.Inactive(anon)
54437513 ± 20% -38.1% 33706726 ± 31% numa-meminfo.node0.MemFree
11366617 ± 99% +182.4% 32097404 ± 32% numa-meminfo.node0.MemUsed
22950 ±204% +366.5% 107057 ± 43% numa-meminfo.node0.PageTables
107196 ± 43% -78.2% 23373 ±200% numa-meminfo.node1.PageTables
2268035 ±119% +222.7% 7318000 ± 36% numa-vmstat.node0.nr_anon_pages
13609253 ± 20% -38.0% 8436220 ± 31% numa-vmstat.node0.nr_free_pages
2273746 ±119% +222.2% 7326279 ± 35% numa-vmstat.node0.nr_inactive_anon
5737 ±204% +366.1% 26741 ± 43% numa-vmstat.node0.nr_page_table_pages
2273746 ±119% +222.2% 7326277 ± 35% numa-vmstat.node0.nr_zone_inactive_anon
1.029e+08 +26.1% 1.298e+08 ± 2% numa-vmstat.node0.numa_hit
1.028e+08 +26.1% 1.297e+08 ± 2% numa-vmstat.node0.numa_local
26811 ± 43% -78.2% 5842 ±200% numa-vmstat.node1.nr_page_table_pages
1.032e+08 +22.9% 1.269e+08 numa-vmstat.node1.numa_hit
1.032e+08 +22.9% 1.268e+08 numa-vmstat.node1.numa_local
2550138 ±221% -99.1% 23707 ± 7% sched_debug.cfs_rq:/.load.max
190971 ± 18% -25.9% 141526 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
6.70 ± 18% +1194.0% 86.64 ±134% sched_debug.cfs_rq:/.removed.load_avg.avg
32.66 ± 9% +1826.5% 629.27 ±135% sched_debug.cfs_rq:/.removed.load_avg.stddev
2.64 ± 29% +67.9% 4.44 ± 14% sched_debug.cfs_rq:/.removed.runnable_avg.avg
13.55 ± 17% +32.4% 17.94 ± 5% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
2.64 ± 29% +67.9% 4.44 ± 14% sched_debug.cfs_rq:/.removed.util_avg.avg
13.55 ± 17% +32.4% 17.94 ± 5% sched_debug.cfs_rq:/.removed.util_avg.stddev
190872 ± 18% -25.9% 141397 ± 10% sched_debug.cfs_rq:/.spread0.stddev
21367 ± 8% +42.1% 30364 ± 53% sched_debug.cpu.avg_idle.min
9282723 +7.2% 9947921 proc-vmstat.nr_anon_pages
2247296 -2.9% 2181210 proc-vmstat.nr_dirty_background_threshold
4500089 -2.9% 4367755 proc-vmstat.nr_dirty_threshold
22659550 -2.9% 21997719 proc-vmstat.nr_free_pages
9304294 +7.1% 9966815 proc-vmstat.nr_inactive_anon
19877 +1.3% 20136 proc-vmstat.nr_kernel_stack
9304294 +7.1% 9966815 proc-vmstat.nr_zone_inactive_anon
2.061e+08 +24.5% 2.566e+08 proc-vmstat.numa_hit
2.06e+08 +24.5% 2.565e+08 proc-vmstat.numa_local
10189086 ± 2% +14.1% 11621781 proc-vmstat.numa_pte_updates
2.144e+08 +23.6% 2.649e+08 proc-vmstat.pgalloc_normal
2.191e+08 +57.9% 3.46e+08 proc-vmstat.pgfault
2.113e+08 ± 2% +25.1% 2.643e+08 proc-vmstat.pgfree
4455 +57.9% 7033 proc-vmstat.thp_split_pmd
3.13 ± 16% +24.9% 3.90 ± 4% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.04 ± 4% -34.8% 0.03 ± 8% perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.03 ± 9% -25.3% 0.02 ± 10% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.04 ± 25% -41.0% 0.02 ± 2% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.04 ± 5% -19.0% 0.03 ± 8% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
18.15 ± 11% -25.3% 13.57 ± 24% perf-sched.sch_delay.max.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.07 ± 11% -30.4% 0.05 ± 17% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.11 ± 4% -29.2% 0.08 ± 11% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
4.58 ±219% -98.4% 0.07 ± 3% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.08 ± 8% -19.4% 0.07 ± 9% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
127.50 ± 3% +10.0% 140.24 ± 4% perf-sched.total_wait_and_delay.average.ms
127.37 ± 3% +10.0% 140.10 ± 4% perf-sched.total_wait_time.average.ms
4.64 ± 4% +38.1% 6.41 ± 7% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
1151 ± 3% -49.0% 587.00 ± 7% perf-sched.wait_and_delay.count.__cond_resched.__alloc_pages.__folio_alloc.vma_alloc_folio.wp_page_copy
79.33 ± 8% +133.2% 185.00 ± 9% perf-sched.wait_and_delay.count.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
1082 ± 4% -27.9% 780.33 ± 7% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
90.44 ± 80% +231.2% 299.53 ± 11% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.83 ± 44% +231.7% 2.74 ± 10% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
4.60 ± 4% +38.7% 6.39 ± 7% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.85 ± 44% +487.1% 5.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork
90.37 ± 80% +231.4% 299.50 ± 11% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
6.11 ± 5% +34.5% 8.22 perf-stat.i.MPKI
9.598e+09 +18.8% 1.14e+10 perf-stat.i.branch-instructions
0.21 ± 11% -0.0 0.19 ± 4% perf-stat.i.branch-miss-rate%
11862691 +1.7% 12069954 perf-stat.i.branch-misses
32.45 -2.1 30.37 perf-stat.i.cache-miss-rate%
68318956 ± 2% +53.7% 1.05e+08 perf-stat.i.cache-misses
2.101e+08 ± 2% +64.5% 3.456e+08 perf-stat.i.cache-references
1654 ± 3% -9.7% 1493 ± 2% perf-stat.i.context-switches
7.76 -13.7% 6.70 perf-stat.i.cpi
123.01 +3.1% 126.76 perf-stat.i.cpu-migrations
4168 ± 2% -34.7% 2724 perf-stat.i.cycles-between-cache-misses
1099264 ± 17% +158.9% 2845760 ± 37% perf-stat.i.dTLB-load-misses
9.027e+09 +15.7% 1.045e+10 perf-stat.i.dTLB-loads
0.14 +0.0 0.16 ± 5% perf-stat.i.dTLB-store-miss-rate%
2231340 +65.1% 3684026 ± 5% perf-stat.i.dTLB-store-misses
1.606e+09 +42.8% 2.292e+09 perf-stat.i.dTLB-stores
34.26 -8.8 25.48 perf-stat.i.iTLB-load-miss-rate%
4704797 +59.0% 7482354 perf-stat.i.iTLB-loads
3.671e+10 +15.9% 4.255e+10 perf-stat.i.instructions
16351 +12.7% 18425 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.14 +14.9% 0.16 perf-stat.i.ipc
304.16 +74.8% 531.52 perf-stat.i.metric.K/sec
212.91 +19.8% 255.03 perf-stat.i.metric.M/sec
725948 +58.0% 1147192 perf-stat.i.minor-faults
6009534 ± 3% +21.2% 7284919 perf-stat.i.node-load-misses
277285 ± 5% +36.9% 379705 ± 3% perf-stat.i.node-loads
13317338 +115.3% 28678842 perf-stat.i.node-store-misses
725948 +58.0% 1147192 perf-stat.i.page-faults
5.72 ± 2% +42.0% 8.12 perf-stat.overall.MPKI
0.12 -0.0 0.11 perf-stat.overall.branch-miss-rate%
32.52 -2.1 30.38 perf-stat.overall.cache-miss-rate%
7.84 -13.6% 6.78 perf-stat.overall.cpi
4217 ± 2% -34.8% 2748 perf-stat.overall.cycles-between-cache-misses
0.01 ± 17% +0.0 0.03 ± 37% perf-stat.overall.dTLB-load-miss-rate%
0.14 +0.0 0.16 ± 5% perf-stat.overall.dTLB-store-miss-rate%
34.59 -9.2 25.35 perf-stat.overall.iTLB-load-miss-rate%
14765 +13.5% 16758 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.13 +15.7% 0.15 perf-stat.overall.ipc
8706 -17.2% 7210 perf-stat.overall.path-length
9.566e+09 +18.8% 1.136e+10 perf-stat.ps.branch-instructions
11819705 +1.7% 12024429 perf-stat.ps.branch-misses
68088826 ± 2% +53.7% 1.046e+08 perf-stat.ps.cache-misses
2.094e+08 ± 2% +64.5% 3.444e+08 perf-stat.ps.cache-references
1648 ± 3% -9.6% 1489 ± 2% perf-stat.ps.context-switches
122.76 +3.1% 126.57 perf-stat.ps.cpu-migrations
1096296 ± 17% +158.8% 2836984 ± 37% perf-stat.ps.dTLB-load-misses
8.998e+09 +15.7% 1.041e+10 perf-stat.ps.dTLB-loads
2223854 +65.1% 3671206 ± 5% perf-stat.ps.dTLB-store-misses
1.6e+09 +42.8% 2.284e+09 perf-stat.ps.dTLB-stores
4688785 +59.0% 7455950 perf-stat.ps.iTLB-loads
3.659e+10 +15.9% 4.24e+10 perf-stat.ps.instructions
723488 +58.0% 1143177 perf-stat.ps.minor-faults
5989152 ± 3% +21.2% 7259410 perf-stat.ps.node-load-misses
277102 ± 5% +36.7% 378929 ± 3% perf-stat.ps.node-loads
13272143 +115.3% 28578623 perf-stat.ps.node-store-misses
723488 +58.0% 1143178 perf-stat.ps.page-faults
1.107e+13 +15.9% 1.282e+13 perf-stat.total.instructions
54.54 -7.2 47.29 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__pte_offset_map_lock.wp_page_copy.__handle_mm_fault
54.79 -7.2 47.61 perf-profile.calltrace.cycles-pp._raw_spin_lock.__pte_offset_map_lock.wp_page_copy.__handle_mm_fault.handle_mm_fault
54.80 -7.2 47.62 perf-profile.calltrace.cycles-pp.__pte_offset_map_lock.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
95.87 -6.7 89.13 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
55.94 -6.7 49.25 perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
95.93 -6.7 89.25 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
96.36 -6.1 90.26 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
96.36 -6.1 90.29 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
96.83 -5.4 91.46 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
0.60 ± 47% +0.3 0.92 ± 8% perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.native_queued_spin_lock_slowpath._raw_spin_lock.__pte_offset_map_lock.wp_page_copy
0.42 ± 71% +0.3 0.76 ± 9% perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.native_queued_spin_lock_slowpath._raw_spin_lock.__pte_offset_map_lock
0.30 ±100% +0.4 0.70 ± 9% perf-profile.calltrace.cycles-pp.__flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.native_queued_spin_lock_slowpath
0.30 ±100% +0.4 0.71 ± 10% perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.native_queued_spin_lock_slowpath._raw_spin_lock
0.00 +0.7 0.71 ± 6% perf-profile.calltrace.cycles-pp.lock_vma_under_rcu.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
1.32 ± 2% +1.1 2.46 ± 2% perf-profile.calltrace.cycles-pp.do_rw_once
99.21 +2.3 101.48 perf-profile.calltrace.cycles-pp.do_access
94.25 -7.5 86.78 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
94.52 -7.3 87.22 perf-profile.children.cycles-pp._raw_spin_lock
54.86 -7.2 47.69 perf-profile.children.cycles-pp.__pte_offset_map_lock
95.88 -6.7 89.13 perf-profile.children.cycles-pp.__handle_mm_fault
55.94 -6.7 49.25 perf-profile.children.cycles-pp.wp_page_copy
95.93 -6.7 89.26 perf-profile.children.cycles-pp.handle_mm_fault
96.36 -6.1 90.27 perf-profile.children.cycles-pp.do_user_addr_fault
96.37 -6.1 90.29 perf-profile.children.cycles-pp.exc_page_fault
96.83 -5.5 91.30 perf-profile.children.cycles-pp.asm_exc_page_fault
98.54 -1.0 97.52 perf-profile.children.cycles-pp.do_access
0.08 ± 12% +0.0 0.11 ± 15% perf-profile.children.cycles-pp.mas_walk
0.06 ± 11% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.06 ± 7% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.sync_regs
0.00 +0.1 0.05 perf-profile.children.cycles-pp.propagate_protected_usage
0.08 ± 10% +0.1 0.13 ± 5% perf-profile.children.cycles-pp.llist_reverse_order
0.05 ± 8% +0.1 0.10 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
0.00 +0.1 0.05 ± 7% perf-profile.children.cycles-pp.irqtime_account_irq
0.08 ± 4% +0.1 0.13 ± 3% perf-profile.children.cycles-pp.llist_add_batch
0.02 ±141% +0.1 0.07 ± 6% perf-profile.children.cycles-pp.__count_memcg_events
0.06 ± 7% +0.1 0.12 ± 5% perf-profile.children.cycles-pp.__perf_sw_event
0.26 ± 3% +0.1 0.34 ± 3% perf-profile.children.cycles-pp.copy_mc_fragile
0.26 ± 4% +0.1 0.34 ± 4% perf-profile.children.cycles-pp.__wp_page_copy_user
0.19 ± 12% +0.1 0.28 ± 5% perf-profile.children.cycles-pp.ptep_clear_flush
0.19 ± 12% +0.1 0.28 ± 5% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.19 ± 12% +0.1 0.28 ± 5% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
0.19 ± 12% +0.1 0.28 ± 5% perf-profile.children.cycles-pp.smp_call_function_many_cond
0.01 ±223% +0.1 0.10 ± 8% perf-profile.children.cycles-pp.do_huge_pmd_wp_page
0.16 ± 4% +0.1 0.25 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.10 ± 3% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.up_read
0.17 ± 6% +0.2 0.32 ± 4% perf-profile.children.cycles-pp.page_counter_uncharge
0.20 ± 5% +0.2 0.38 ± 3% perf-profile.children.cycles-pp.uncharge_batch
0.19 ± 5% +0.2 0.38 ± 4% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.22 ± 6% +0.2 0.42 ± 4% perf-profile.children.cycles-pp.__mem_cgroup_uncharge
0.23 ± 4% +0.2 0.42 ± 4% perf-profile.children.cycles-pp.__folio_put
0.32 ± 31% +0.3 0.58 ± 11% perf-profile.children.cycles-pp.flush_tlb_func
0.14 ± 7% +0.3 0.42 ± 7% perf-profile.children.cycles-pp.down_read_trylock
0.54 ± 22% +0.4 0.95 ± 10% perf-profile.children.cycles-pp.__sysvec_call_function
0.53 ± 23% +0.4 0.94 ± 9% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.28 ± 5% +0.4 0.71 ± 6% perf-profile.children.cycles-pp.lock_vma_under_rcu
0.58 ± 22% +0.4 1.03 ± 9% perf-profile.children.cycles-pp.sysvec_call_function
0.72 ± 18% +0.5 1.25 ± 8% perf-profile.children.cycles-pp.asm_sysvec_call_function
2.26 ± 2% +4.8 7.04 ± 3% perf-profile.children.cycles-pp.do_rw_once
93.23 -7.9 85.31 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.06 ± 7% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.05 perf-profile.self.cycles-pp.propagate_protected_usage
0.08 ± 10% +0.1 0.13 ± 5% perf-profile.self.cycles-pp.llist_reverse_order
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.smp_call_function_many_cond
0.08 ± 4% +0.1 0.13 ± 3% perf-profile.self.cycles-pp.llist_add_batch
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.__count_memcg_events
0.02 ± 99% +0.1 0.10 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
0.26 ± 3% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.copy_mc_fragile
0.04 ± 71% +0.1 0.11 ± 14% perf-profile.self.cycles-pp.__handle_mm_fault
0.16 ± 5% +0.1 0.25 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.13 ± 12% +0.1 0.24 ± 9% perf-profile.self.cycles-pp.__flush_smp_call_function_queue
0.10 ± 3% +0.1 0.22 ± 4% perf-profile.self.cycles-pp.up_read
0.15 ± 7% +0.1 0.27 ± 5% perf-profile.self.cycles-pp.page_counter_uncharge
0.06 ± 11% +0.1 0.19 ± 6% perf-profile.self.cycles-pp.lock_vma_under_rcu
0.14 ± 4% +0.2 0.29 perf-profile.self.cycles-pp.wp_page_copy
0.28 ± 5% +0.2 0.45 perf-profile.self.cycles-pp._raw_spin_lock
0.18 ± 5% +0.2 0.38 ± 4% perf-profile.self.cycles-pp.irqentry_exit_to_user_mode
0.22 ± 39% +0.2 0.45 ± 15% perf-profile.self.cycles-pp.flush_tlb_func
0.14 ± 7% +0.3 0.42 ± 7% perf-profile.self.cycles-pp.down_read_trylock
0.85 ± 2% +1.1 1.93 ± 2% perf-profile.self.cycles-pp.do_access
2.19 ± 2% +4.7 6.91 ± 3% perf-profile.self.cycles-pp.do_rw_once
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki