Hello,
kernel test robot noticed a -1.4% regression of will-it-scale.per_process_ops on:
commit: 0d940a9b270b9220dcff74d8e9123c9788365751 ("mm/pgtable: allow pte_offset_map[_lock]() to fail")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
testcase: will-it-scale
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:
nr_task: 16
mode: process
test: page_fault3
cpufreq_governor: performance
In addition to that, the commit also has a significant impact on the following tests:
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -1.1% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=16 |
| | test=page_fault3 |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -1.4% regression |
| test machine | 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=16 |
| | test=page_fault3 |
+------------------+------------------------------------------------------------------------------------------------+
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-lkp/[email protected]
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/process/16/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp9/page_fault3/will-it-scale
commit:
46c475bd67 ("mm/pgtable: kmap_local_page() instead of kmap_atomic()")
0d940a9b27 ("mm/pgtable: allow pte_offset_map[_lock]() to fail")
46c475bd676bb050 0d940a9b270b9220dcff74d8e91
---------------- ---------------------------
%stddev %change %stddev
\ | \
63.50 ± 22% +89.3% 120.20 ± 29% perf-c2c.DRAM.local
31.50 ± 9% -49.2% 16.00 ± 82% perf-sched.wait_and_delay.count.__cond_resched.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault
21210417 -1.4% 20918211 will-it-scale.16.processes
1325650 -1.4% 1307388 will-it-scale.per_process_ops
21210417 -1.4% 20918211 will-it-scale.workload
13851644 -1.3% 13678409 proc-vmstat.numa_hit
13784295 -1.2% 13612106 proc-vmstat.numa_local
13921317 -1.2% 13747304 proc-vmstat.pgalloc_normal
6.382e+09 -1.4% 6.295e+09 proc-vmstat.pgfault
13874545 -1.3% 13699760 proc-vmstat.pgfree
1.42 -3.2% 1.37 perf-stat.i.MPKI
1.342e+10 +1.6% 1.364e+10 perf-stat.i.branch-instructions
90676362 -1.4% 89440359 perf-stat.i.cache-references
0.91 -1.8% 0.89 perf-stat.i.cpi
1.594e+10 +1.2% 1.612e+10 perf-stat.i.dTLB-loads
7.71 -0.2 7.50 perf-stat.i.dTLB-store-miss-rate%
7.401e+08 -1.3% 7.302e+08 perf-stat.i.dTLB-store-misses
8.852e+09 +1.7% 9.005e+09 perf-stat.i.dTLB-stores
6.408e+10 +1.9% 6.528e+10 perf-stat.i.instructions
1.10 +1.9% 1.12 perf-stat.i.ipc
610.02 +1.4% 618.52 perf-stat.i.metric.M/sec
21159992 -1.4% 20874280 perf-stat.i.minor-faults
18.07 ± 16% -7.9 10.15 ± 21% perf-stat.i.node-load-miss-rate%
287596 ± 18% +114.8% 617694 ± 31% perf-stat.i.node-loads
21436939 -1.5% 21105390 perf-stat.i.node-stores
21159992 -1.4% 20874280 perf-stat.i.page-faults
1.42 -3.2% 1.37 perf-stat.overall.MPKI
0.91 -1.8% 0.89 perf-stat.overall.cpi
7.72 -0.2 7.50 perf-stat.overall.dTLB-store-miss-rate%
1.10 +1.9% 1.12 perf-stat.overall.ipc
18.57 ± 15% -8.2 10.42 ± 20% perf-stat.overall.node-load-miss-rate%
909803 +3.3% 939804 perf-stat.overall.path-length
1.337e+10 +1.6% 1.359e+10 perf-stat.ps.branch-instructions
90373478 -1.4% 89143201 perf-stat.ps.cache-references
1.589e+10 +1.2% 1.607e+10 perf-stat.ps.dTLB-loads
7.377e+08 -1.3% 7.278e+08 perf-stat.ps.dTLB-store-misses
8.823e+09 +1.7% 8.975e+09 perf-stat.ps.dTLB-stores
6.386e+10 +1.9% 6.506e+10 perf-stat.ps.instructions
21089973 -1.4% 20805155 perf-stat.ps.minor-faults
286631 ± 18% +114.8% 615645 ± 31% perf-stat.ps.node-loads
21365727 -1.5% 21035342 perf-stat.ps.node-stores
21089973 -1.4% 20805155 perf-stat.ps.page-faults
1.93e+13 +1.9% 1.966e+13 perf-stat.total.instructions
2.22 -0.3 1.90 ± 4% perf-profile.calltrace.cycles-pp.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
2.85 -0.3 2.56 ± 2% perf-profile.calltrace.cycles-pp.do_set_pte.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
1.85 -0.3 1.57 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault
1.73 ± 2% -0.3 1.48 ± 3% perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.finish_fault.do_fault.__handle_mm_fault
11.84 -0.2 11.65 perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.testcase
4.94 -0.1 4.83 perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
5.08 -0.1 5.01 perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap
5.09 -0.1 5.02 perf-profile.calltrace.cycles-pp.__munmap
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
5.06 -0.1 5.00 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
5.06 -0.1 5.00 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap
5.06 -0.1 5.00 perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
5.08 -0.1 5.02 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
1.02 ± 2% -0.1 0.96 perf-profile.calltrace.cycles-pp.tlb_batch_pages_flush.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
0.83 ± 2% -0.1 0.78 perf-profile.calltrace.cycles-pp.release_pages.tlb_batch_pages_flush.zap_pte_range.zap_pmd_range.unmap_page_range
0.81 ± 2% -0.1 0.76 ± 3% perf-profile.calltrace.cycles-pp.up_read.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
0.96 -0.0 0.93 perf-profile.calltrace.cycles-pp.error_entry.testcase
1.48 +0.1 1.54 perf-profile.calltrace.cycles-pp.mtree_range_walk.mt_find.find_vma.do_user_addr_fault.exc_page_fault
1.84 ± 2% +0.1 1.94 perf-profile.calltrace.cycles-pp.mt_find.find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.98 ± 2% +0.1 2.11 perf-profile.calltrace.cycles-pp.find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
6.31 +0.2 6.51 perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
5.93 +0.2 6.14 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
5.14 +0.3 5.40 perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault.__handle_mm_fault
2.94 +0.3 3.24 ± 2% perf-profile.calltrace.cycles-pp.filemap_get_entry.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault
1.54 ± 2% +0.3 1.87 ± 4% perf-profile.calltrace.cycles-pp.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault.__do_fault
0.84 ± 4% +0.3 1.17 ± 5% perf-profile.calltrace.cycles-pp.xas_descend.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault
4.42 +0.4 4.82 perf-profile.calltrace.cycles-pp.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
14.24 +0.6 14.82 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.7 0.67 ± 2% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.8 0.76 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault
21.49 +0.9 22.34 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
32.17 +0.9 33.05 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
31.34 +0.9 32.23 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
17.63 +1.0 18.66 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +1.6 1.59 perf-profile.calltrace.cycles-pp.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
4.54 -0.4 4.19 ± 2% perf-profile.children.cycles-pp.__perf_sw_event
3.92 -0.3 3.61 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
2.95 -0.3 2.66 ± 2% perf-profile.children.cycles-pp.do_set_pte
1.78 ± 2% -0.2 1.53 ± 2% perf-profile.children.cycles-pp.page_add_file_rmap
9.52 -0.2 9.30 perf-profile.children.cycles-pp.native_irq_return_iret
11.88 -0.2 11.69 perf-profile.children.cycles-pp.sync_regs
5.08 -0.1 5.01 perf-profile.children.cycles-pp.unmap_region
5.09 -0.1 5.02 perf-profile.children.cycles-pp.do_vmi_munmap
5.09 -0.1 5.02 perf-profile.children.cycles-pp.__munmap
5.09 -0.1 5.02 perf-profile.children.cycles-pp.do_vmi_align_munmap
5.06 -0.1 5.00 perf-profile.children.cycles-pp.unmap_vmas
5.06 -0.1 5.00 perf-profile.children.cycles-pp.unmap_page_range
5.06 -0.1 5.00 perf-profile.children.cycles-pp.zap_pmd_range
5.15 -0.1 5.09 perf-profile.children.cycles-pp.do_syscall_64
5.08 -0.1 5.02 perf-profile.children.cycles-pp.__vm_munmap
5.15 -0.1 5.09 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
5.08 -0.1 5.02 perf-profile.children.cycles-pp.__x64_sys_munmap
5.06 -0.1 5.00 perf-profile.children.cycles-pp.zap_pte_range
1.03 ± 2% -0.1 0.97 perf-profile.children.cycles-pp.tlb_batch_pages_flush
0.84 ± 2% -0.1 0.79 perf-profile.children.cycles-pp.release_pages
0.85 -0.0 0.80 ± 2% perf-profile.children.cycles-pp.up_read
1.22 -0.0 1.18 perf-profile.children.cycles-pp.error_entry
0.18 ± 3% -0.0 0.16 ± 3% perf-profile.children.cycles-pp.noop_dirty_folio
0.12 ± 5% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.vm_normal_page
0.17 ± 6% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.set_page_dirty
0.33 ± 2% +0.0 0.37 ± 13% perf-profile.children.cycles-pp.__mod_node_page_state
0.46 ± 3% +0.0 0.50 perf-profile.children.cycles-pp.access_error
0.02 ±141% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.queued_spin_unlock
0.58 ± 2% +0.1 0.66 ± 5% perf-profile.children.cycles-pp.__mod_lruvec_state
0.57 ± 3% +0.1 0.65 perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
1.88 ± 2% +0.1 2.00 perf-profile.children.cycles-pp.mt_find
2.04 ± 2% +0.1 2.17 perf-profile.children.cycles-pp.find_vma
0.37 ± 2% +0.1 0.51 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
6.42 +0.2 6.62 perf-profile.children.cycles-pp.__do_fault
6.04 +0.2 6.26 perf-profile.children.cycles-pp.shmem_fault
0.50 ± 2% +0.3 0.76 ± 2% perf-profile.children.cycles-pp.handle_pte_fault
5.25 +0.3 5.52 perf-profile.children.cycles-pp.shmem_get_folio_gfp
3.01 +0.3 3.31 ? 2% perf-profile.children.cycles-pp.filemap_get_entry
1.72 ± 2% +0.3 2.05 ± 4% perf-profile.children.cycles-pp.xas_load
0.96 ± 4% +0.3 1.29 ± 5% perf-profile.children.cycles-pp.xas_descend
79.53 +0.3 79.86 perf-profile.children.cycles-pp.asm_exc_page_fault
4.56 +0.4 4.95 perf-profile.children.cycles-pp.finish_fault
14.41 +0.6 15.01 perf-profile.children.cycles-pp.do_fault
0.00 +0.8 0.76 ± 2% perf-profile.children.cycles-pp.__pte_offset_map
21.64 +0.9 22.49 perf-profile.children.cycles-pp.handle_mm_fault
32.32 +0.9 33.21 perf-profile.children.cycles-pp.exc_page_fault
31.80 +0.9 32.69 perf-profile.children.cycles-pp.do_user_addr_fault
17.74 +1.0 18.77 perf-profile.children.cycles-pp.__handle_mm_fault
0.00 +1.7 1.71 perf-profile.children.cycles-pp.__pte_offset_map_lock
36.88 -0.4 36.51 perf-profile.self.cycles-pp.testcase
1.00 ± 3% -0.3 0.72 perf-profile.self.cycles-pp.page_add_file_rmap
3.34 -0.3 3.07 ± 2% perf-profile.self.cycles-pp.___perf_sw_event
0.83 ± 2% -0.3 0.57 ± 2% perf-profile.self.cycles-pp.finish_fault
9.52 -0.2 9.30 perf-profile.self.cycles-pp.native_irq_return_iret
11.84 -0.2 11.66 perf-profile.self.cycles-pp.sync_regs
0.45 ± 2% -0.1 0.35 ± 3% perf-profile.self.cycles-pp.__mod_lruvec_page_state
0.46 ± 2% -0.1 0.40 perf-profile.self.cycles-pp.handle_pte_fault
0.62 -0.1 0.56 ± 5% perf-profile.self.cycles-pp.__perf_sw_event
0.83 ± 2% -0.1 0.78 perf-profile.self.cycles-pp.release_pages
0.72 ± 2% -0.1 0.67 ± 3% perf-profile.self.cycles-pp.do_set_pte
0.78 ± 2% -0.0 0.73 ± 3% perf-profile.self.cycles-pp.up_read
0.90 ± 2% -0.0 0.86 perf-profile.self.cycles-pp.asm_exc_page_fault
1.07 -0.0 1.04 perf-profile.self.cycles-pp.error_entry
0.18 ± 5% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.__mod_lruvec_state
0.40 ± 4% +0.0 0.43 perf-profile.self.cycles-pp.access_error
0.31 ± 2% +0.0 0.35 ± 13% perf-profile.self.cycles-pp.__mod_node_page_state
0.45 ± 4% +0.0 0.49 ± 2% perf-profile.self.cycles-pp.do_fault
0.37 ± 6% +0.1 0.43 perf-profile.self.cycles-pp.mt_find
0.36 ± 4% +0.1 0.42 ± 2% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.36 ± 3% +0.1 0.47 ± 3% perf-profile.self.cycles-pp.percpu_counter_add_batch
1.28 ± 2% +0.1 1.42 perf-profile.self.cycles-pp.handle_mm_fault
2.74 +0.2 2.91 perf-profile.self.cycles-pp.__handle_mm_fault
0.85 ± 4% +0.3 1.17 ± 5% perf-profile.self.cycles-pp.xas_descend
0.00 +0.6 0.60 ± 2% perf-profile.self.cycles-pp.__pte_offset_map_lock
0.00 +0.7 0.66 ± 2% perf-profile.self.cycles-pp.__pte_offset_map
***************************************************************************************************
lkp-icl-2sp6: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/process/16/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp6/page_fault3/will-it-scale
commit:
46c475bd67 ("mm/pgtable: kmap_local_page() instead of kmap_atomic()")
0d940a9b27 ("mm/pgtable: allow pte_offset_map[_lock]() to fail")
46c475bd676bb050 0d940a9b270b9220dcff74d8e91
---------------- ---------------------------
%stddev %change %stddev
\ | \
19917517 -1.1% 19697102 will-it-scale.16.processes
1244844 -1.1% 1231068 will-it-scale.per_process_ops
19917517 -1.1% 19697102 will-it-scale.workload
13308883 -1.0% 13177388 proc-vmstat.numa_hit
13176349 -1.0% 13044481 proc-vmstat.numa_local
13410479 -1.0% 13278301 proc-vmstat.pgalloc_normal
5.992e+09 -1.1% 5.926e+09 proc-vmstat.pgfault
13364680 -1.0% 13230662 proc-vmstat.pgfree
8.49 ± 43% +1158.1% 106.85 ± 39% sched_debug.cfs_rq:/.removed.load_avg.avg
244.62 ± 38% +5090.7% 12697 ± 43% sched_debug.cfs_rq:/.removed.load_avg.max
41.40 ± 24% +2631.8% 1131 ± 42% sched_debug.cfs_rq:/.removed.load_avg.stddev
125.96 ± 36% +106.9% 260.64 ± 32% sched_debug.cfs_rq:/.removed.runnable_avg.max
18.25 ± 34% +88.6% 34.41 ± 21% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
125.88 ± 36% +107.0% 260.53 ± 32% sched_debug.cfs_rq:/.removed.util_avg.max
18.24 ± 34% +88.6% 34.39 ± 21% sched_debug.cfs_rq:/.removed.util_avg.stddev
2.91 -3.1% 2.82 perf-stat.i.MPKI
1.269e+10 +1.8% 1.292e+10 perf-stat.i.branch-instructions
0.95 -2.1% 0.93 perf-stat.i.cpi
1.508e+10 +1.4% 1.529e+10 perf-stat.i.dTLB-loads
7.69 -0.2 7.47 perf-stat.i.dTLB-store-miss-rate%
6.974e+08 -1.1% 6.897e+08 perf-stat.i.dTLB-store-misses
8.374e+09 +2.0% 8.539e+09 perf-stat.i.dTLB-stores
6.06e+10 +2.1% 6.186e+10 perf-stat.i.instructions
1.05 +2.2% 1.08 perf-stat.i.ipc
289.16 +1.6% 293.84 perf-stat.i.metric.M/sec
19885360 -1.1% 19664922 perf-stat.i.minor-faults
38.99 ± 2% -7.3 31.66 ± 10% perf-stat.i.node-load-miss-rate%
185130 ± 4% +47.7% 273509 ± 18% perf-stat.i.node-loads
20061282 -1.1% 19842078 perf-stat.i.node-stores
19885361 -1.1% 19664922 perf-stat.i.page-faults
2.91 -3.1% 2.82 perf-stat.overall.MPKI
0.95 -2.1% 0.93 perf-stat.overall.cpi
7.69 -0.2 7.47 perf-stat.overall.dTLB-store-miss-rate%
1.05 +2.2% 1.08 perf-stat.overall.ipc
38.90 ± 2% -7.4 31.55 ± 10% perf-stat.overall.node-load-miss-rate%
915055 +3.2% 944642 perf-stat.overall.path-length
1.264e+10 +1.8% 1.288e+10 perf-stat.ps.branch-instructions
1.503e+10 +1.4% 1.524e+10 perf-stat.ps.dTLB-loads
6.95e+08 -1.1% 6.874e+08 perf-stat.ps.dTLB-store-misses
8.345e+09 +2.0% 8.51e+09 perf-stat.ps.dTLB-stores
6.039e+10 +2.1% 6.165e+10 perf-stat.ps.instructions
19818139 -1.1% 19598337 perf-stat.ps.minor-faults
184442 ± 4% +47.8% 272530 ± 18% perf-stat.ps.node-loads
19991813 -1.1% 19773420 perf-stat.ps.node-stores
19818139 -1.1% 19598337 perf-stat.ps.page-faults
1.823e+13 +2.1% 1.861e+13 perf-stat.total.instructions
2.12 ± 3% -0.3 1.80 ± 3% perf-profile.calltrace.cycles-pp.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
2.72 -0.3 2.42 ± 2% perf-profile.calltrace.cycles-pp.do_set_pte.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
1.78 ± 4% -0.3 1.49 ± 3% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault
1.66 -0.2 1.42 ± 2% perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.finish_fault.do_fault.__handle_mm_fault
4.65 -0.1 4.54 perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap
4.77 -0.1 4.70 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
4.76 -0.1 4.70 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap
4.76 -0.1 4.70 perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap
4.79 -0.1 4.73 perf-profile.calltrace.cycles-pp.__munmap
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
4.79 -0.1 4.72 perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
2.11 -0.1 2.06 perf-profile.calltrace.cycles-pp.__perf_sw_event.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
0.76 -0.0 0.72 ± 2% perf-profile.calltrace.cycles-pp.up_read.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
1.77 +0.0 1.81 perf-profile.calltrace.cycles-pp.mt_find.find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.90 +0.1 1.98 perf-profile.calltrace.cycles-pp.find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
1.11 ± 4% +0.1 1.19 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.43 ± 3% +0.1 1.55 ± 3% perf-profile.calltrace.cycles-pp.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault.__do_fault
0.78 ± 4% +0.1 0.93 ± 3% perf-profile.calltrace.cycles-pp.xas_descend.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault
4.22 +0.3 4.54 perf-profile.calltrace.cycles-pp.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
30.00 +0.4 30.41 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
30.78 +0.4 31.21 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
20.56 +0.5 21.01 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
0.00 +0.6 0.62 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
16.88 +0.6 17.51 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +0.7 0.71 perf-profile.calltrace.cycles-pp._raw_spin_lock.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault
0.00 +1.5 1.48 perf-profile.calltrace.cycles-pp.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
4.37 -0.4 3.97 perf-profile.children.cycles-pp.__perf_sw_event
3.78 -0.4 3.42 perf-profile.children.cycles-pp.___perf_sw_event
2.82 -0.3 2.52 ± 2% perf-profile.children.cycles-pp.do_set_pte
1.72 ± 2% -0.3 1.47 ± 2% perf-profile.children.cycles-pp.page_add_file_rmap
9.03 -0.2 8.85 perf-profile.children.cycles-pp.native_irq_return_iret
4.77 -0.1 4.70 perf-profile.children.cycles-pp.unmap_vmas
4.79 -0.1 4.72 perf-profile.children.cycles-pp.unmap_region
4.77 -0.1 4.70 perf-profile.children.cycles-pp.unmap_page_range
4.77 -0.1 4.70 perf-profile.children.cycles-pp.zap_pmd_range
4.76 -0.1 4.70 perf-profile.children.cycles-pp.zap_pte_range
4.79 -0.1 4.73 perf-profile.children.cycles-pp.do_vmi_munmap
4.79 -0.1 4.73 perf-profile.children.cycles-pp.do_vmi_align_munmap
4.79 -0.1 4.73 perf-profile.children.cycles-pp.__munmap
4.79 -0.1 4.72 perf-profile.children.cycles-pp.__x64_sys_munmap
4.79 -0.1 4.72 perf-profile.children.cycles-pp.__vm_munmap
4.86 -0.1 4.80 perf-profile.children.cycles-pp.do_syscall_64
4.86 -0.1 4.80 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.80 -0.0 0.76 ± 3% perf-profile.children.cycles-pp.up_read
0.42 ± 3% -0.0 0.39 ± 2% perf-profile.children.cycles-pp.perf_exclude_event
0.17 ± 3% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.noop_dirty_folio
0.10 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.page_rmapping
0.12 ± 4% +0.0 0.14 ± 5% perf-profile.children.cycles-pp.vm_normal_page
0.16 ± 4% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.set_page_dirty
1.81 +0.1 1.86 perf-profile.children.cycles-pp.mt_find
0.57 ± 2% +0.1 0.63 ± 3% perf-profile.children.cycles-pp.__mod_lruvec_state
0.55 ± 2% +0.1 0.61 ± 3% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
1.96 +0.1 2.03 perf-profile.children.cycles-pp.find_vma
0.35 ± 4% +0.1 0.47 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
1.60 ± 3% +0.1 1.72 ± 2% perf-profile.children.cycles-pp.xas_load
0.89 ± 3% +0.1 1.03 ± 3% perf-profile.children.cycles-pp.xas_descend
0.49 ± 3% +0.2 0.72 perf-profile.children.cycles-pp.handle_pte_fault
4.34 +0.3 4.66 perf-profile.children.cycles-pp.finish_fault
30.93 +0.4 31.35 perf-profile.children.cycles-pp.exc_page_fault
30.43 +0.4 30.86 perf-profile.children.cycles-pp.do_user_addr_fault
20.70 +0.5 21.16 perf-profile.children.cycles-pp.handle_mm_fault
16.99 +0.6 17.62 perf-profile.children.cycles-pp.__handle_mm_fault
0.00 +0.7 0.71 ± 2% perf-profile.children.cycles-pp.__pte_offset_map
0.00 +1.6 1.60 perf-profile.children.cycles-pp.__pte_offset_map_lock
35.47 -0.5 34.98 perf-profile.self.cycles-pp.testcase
3.22 -0.3 2.92 perf-profile.self.cycles-pp.___perf_sw_event
0.96 ± 2% -0.3 0.69 ± 3% perf-profile.self.cycles-pp.page_add_file_rmap
0.78 -0.2 0.53 ± 2% perf-profile.self.cycles-pp.finish_fault
9.03 -0.2 8.85 perf-profile.self.cycles-pp.native_irq_return_iret
0.43 ± 2% -0.1 0.33 ± 3% perf-profile.self.cycles-pp.__mod_lruvec_page_state
0.46 ± 3% -0.1 0.38 ± 3% perf-profile.self.cycles-pp.handle_pte_fault
1.88 -0.1 1.82 ± 2% perf-profile.self.cycles-pp.shmem_get_folio_gfp
0.69 ± 2% -0.1 0.63 ± 2% perf-profile.self.cycles-pp.do_set_pte
0.59 ± 3% -0.1 0.54 ± 3% perf-profile.self.cycles-pp.__perf_sw_event
0.37 ± 3% -0.0 0.34 ± 3% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.73 -0.0 0.69 ± 3% perf-profile.self.cycles-pp.up_read
0.29 ± 4% -0.0 0.26 ± 2% perf-profile.self.cycles-pp.perf_exclude_event
0.21 ± 5% -0.0 0.19 ± 3% perf-profile.self.cycles-pp.perf_swevent_event
0.07 ± 10% -0.0 0.05 ± 7% perf-profile.self.cycles-pp.page_rmapping
0.09 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.set_page_dirty
0.18 ± 4% +0.0 0.20 ± 3% perf-profile.self.cycles-pp.__mod_lruvec_state
0.35 +0.1 0.40 ± 2% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.35 ± 4% +0.1 0.41 ± 4% perf-profile.self.cycles-pp.mt_find
0.34 ± 3% +0.1 0.43 ± 3% perf-profile.self.cycles-pp.percpu_counter_add_batch
1.21 ± 2% +0.1 1.35 ± 3% perf-profile.self.cycles-pp.handle_mm_fault
0.78 ± 4% +0.1 0.93 ± 3% perf-profile.self.cycles-pp.xas_descend
2.64 ± 2% +0.2 2.80 ± 2% perf-profile.self.cycles-pp.__handle_mm_fault
0.00 +0.6 0.56 ± 2% perf-profile.self.cycles-pp.__pte_offset_map_lock
0.00 +0.6 0.61 ± 3% perf-profile.self.cycles-pp.__pte_offset_map
***************************************************************************************************
lkp-spr-r02: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/process/16/debian-11.1-x86_64-20220510.cgz/lkp-spr-r02/page_fault3/will-it-scale
commit:
46c475bd67 ("mm/pgtable: kmap_local_page() instead of kmap_atomic()")
0d940a9b27 ("mm/pgtable: allow pte_offset_map[_lock]() to fail")
46c475bd676bb050 0d940a9b270b9220dcff74d8e91
---------------- ---------------------------
%stddev %change %stddev
\ | \
16865407 -1.4% 16637533 will-it-scale.16.processes
1054087 -1.4% 1039845 will-it-scale.per_process_ops
16865407 -1.4% 16637533 will-it-scale.workload
11730835 -1.2% 11592835 proc-vmstat.numa_hit
11498079 -1.2% 11356534 proc-vmstat.numa_local
11858670 -1.1% 11727921 proc-vmstat.pgalloc_normal
5.076e+09 -1.4% 5.007e+09 proc-vmstat.pgfault
11809018 -1.1% 11680282 proc-vmstat.pgfree
11259585 ± 69% -76.0% 2698150 ±207% sched_debug.cfs_rq:/.load.max
16328 ± 9% -9.8% 14721 ± 9% sched_debug.cfs_rq:/.load_avg.max
135.57 ± 53% -82.0% 24.41 ±126% sched_debug.cfs_rq:/.removed.load_avg.avg
1293 ± 47% -73.5% 342.60 ±132% sched_debug.cfs_rq:/.removed.load_avg.stddev
3.37 ± 29% -59.9% 1.35 ± 55% sched_debug.cfs_rq:/.removed.runnable_avg.avg
3.37 ± 29% -59.9% 1.35 ± 55% sched_debug.cfs_rq:/.removed.util_avg.avg
1.47 -3.5% 1.42 perf-stat.i.MPKI
1.088e+10 +1.5% 1.105e+10 perf-stat.i.branch-instructions
17490795 -1.3% 17271931 perf-stat.i.cache-misses
76483659 -1.9% 75024552 perf-stat.i.cache-references
0.95 -1.8% 0.93 perf-stat.i.cpi
2838 +1.3% 2876 perf-stat.i.cycles-between-cache-misses
1.296e+10 +1.1% 1.31e+10 perf-stat.i.dTLB-loads
11.99 -0.3 11.66 perf-stat.i.dTLB-store-miss-rate%
9.791e+08 -1.5% 9.643e+08 perf-stat.i.dTLB-store-misses
7.181e+09 +1.7% 7.3e+09 perf-stat.i.dTLB-stores
5.201e+10 +1.8% 5.293e+10 perf-stat.i.instructions
1.05 +1.8% 1.07 perf-stat.i.ipc
492.16 -1.7% 483.81 perf-stat.i.metric.K/sec
142.82 +1.3% 144.67 perf-stat.i.metric.M/sec
16831483 -1.4% 16596776 perf-stat.i.minor-faults
76.07 -6.8 69.29 perf-stat.i.node-load-miss-rate%
57413 ± 5% +37.8% 79120 ± 3% perf-stat.i.node-loads
16831483 -1.4% 16596777 perf-stat.i.page-faults
1.47 -3.6% 1.42 perf-stat.overall.MPKI
0.95 -1.8% 0.93 perf-stat.overall.cpi
2819 +1.2% 2853 perf-stat.overall.cycles-between-cache-misses
12.00 -0.3 11.67 perf-stat.overall.dTLB-store-miss-rate%
1.05 +1.8% 1.07 perf-stat.overall.ipc
75.62 -5.9 69.71 ± 2% perf-stat.overall.node-load-miss-rate%
928491 +3.1% 957621 perf-stat.overall.path-length
1.084e+10 +1.5% 1.101e+10 perf-stat.ps.branch-instructions
17429386 -1.3% 17211044 perf-stat.ps.cache-misses
76218954 -1.9% 74763857 perf-stat.ps.cache-references
1.291e+10 +1.1% 1.305e+10 perf-stat.ps.dTLB-loads
9.758e+08 -1.5% 9.611e+08 perf-stat.ps.dTLB-store-misses
7.156e+09 +1.7% 7.276e+09 perf-stat.ps.dTLB-stores
5.183e+10 +1.8% 5.275e+10 perf-stat.ps.instructions
16775304 -1.4% 16541627 perf-stat.ps.minor-faults
57246 ± 5% +37.7% 78833 ± 3% perf-stat.ps.node-loads
16775304 -1.4% 16541627 perf-stat.ps.page-faults
1.566e+13 +1.7% 1.593e+13 perf-stat.total.instructions
2.61 -0.5 2.15 perf-profile.calltrace.cycles-pp.do_set_pte.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
1.60 -0.4 1.23 perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.finish_fault.do_fault.__handle_mm_fault
11.46 -0.2 11.27 perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.testcase
0.70 ± 2% -0.1 0.60 perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.page_add_file_rmap.do_set_pte.finish_fault.do_fault
4.42 -0.1 4.36 perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
4.58 -0.1 4.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
4.58 -0.1 4.52 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
4.57 -0.1 4.52 perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap
4.55 -0.1 4.50 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap
4.55 -0.1 4.50 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vmi_munmap
4.55 -0.1 4.50 perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap
4.58 -0.1 4.52 perf-profile.calltrace.cycles-pp.__munmap
4.57 -0.1 4.52 perf-profile.calltrace.cycles-pp.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.57 -0.1 4.52 perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vmi_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
4.57 -0.1 4.52 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
4.57 -0.1 4.52 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
1.06 -0.0 1.02 perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault
1.39 -0.0 1.36 perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
1.28 -0.0 1.25 perf-profile.calltrace.cycles-pp.__perf_sw_event.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4.06 +0.2 4.30 perf-profile.calltrace.cycles-pp.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.38 +0.3 4.71 perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault.__handle_mm_fault
4.91 +0.3 5.24 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
64.56 +0.3 64.90 perf-profile.calltrace.cycles-pp.testcase
5.28 +0.3 5.62 perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.10 ± 2% +0.4 1.46 perf-profile.calltrace.cycles-pp.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault.__do_fault
2.52 +0.4 2.88 perf-profile.calltrace.cycles-pp.filemap_get_entry.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault
12.53 +0.5 13.04 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
0.00 +0.7 0.67 perf-profile.calltrace.cycles-pp._raw_spin_lock.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault
0.09 ±223% +0.8 0.84 ± 2% perf-profile.calltrace.cycles-pp.xas_descend.xas_load.filemap_get_entry.shmem_get_folio_gfp.shmem_fault
26.56 +0.9 27.48 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
25.91 +0.9 26.83 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
14.61 +1.1 15.75 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
17.06 +1.1 18.20 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
0.00 +1.5 1.46 perf-profile.calltrace.cycles-pp.__pte_offset_map_lock.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
2.72 -0.5 2.24 perf-profile.children.cycles-pp.do_set_pte
1.66 -0.4 1.27 perf-profile.children.cycles-pp.page_add_file_rmap
11.50 -0.2 11.30 perf-profile.children.cycles-pp.sync_regs
10.28 -0.2 10.13 perf-profile.children.cycles-pp.native_irq_return_iret
0.50 ± 7% -0.1 0.39 ± 10% perf-profile.children.cycles-pp.up_read
1.44 -0.1 1.34 perf-profile.children.cycles-pp.__mod_lruvec_page_state
2.64 -0.1 2.57 perf-profile.children.cycles-pp.___perf_sw_event
4.67 -0.1 4.60 perf-profile.children.cycles-pp.do_syscall_64
4.67 -0.1 4.61 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
4.57 -0.1 4.52 perf-profile.children.cycles-pp.unmap_region
4.55 -0.1 4.50 perf-profile.children.cycles-pp.unmap_page_range
4.55 -0.1 4.50 perf-profile.children.cycles-pp.zap_pmd_range
4.58 -0.1 4.52 perf-profile.children.cycles-pp.do_vmi_munmap
4.58 -0.1 4.52 perf-profile.children.cycles-pp.do_vmi_align_munmap
4.58 -0.1 4.52 perf-profile.children.cycles-pp.__vm_munmap
4.58 -0.1 4.52 perf-profile.children.cycles-pp.__x64_sys_munmap
4.55 -0.1 4.50 perf-profile.children.cycles-pp.unmap_vmas
4.55 -0.1 4.50 perf-profile.children.cycles-pp.zap_pte_range
4.58 -0.1 4.52 perf-profile.children.cycles-pp.__munmap
0.31 ± 3% -0.0 0.29 perf-profile.children.cycles-pp.folio_mapping
0.22 ± 2% +0.0 0.24 ± 3% perf-profile.children.cycles-pp.access_error
0.28 ± 3% +0.2 0.47 ± 3% perf-profile.children.cycles-pp.percpu_counter_add_batch
4.18 +0.2 4.39 perf-profile.children.cycles-pp.finish_fault
0.37 ± 3% +0.3 0.70 perf-profile.children.cycles-pp.handle_pte_fault
4.48 +0.3 4.81 perf-profile.children.cycles-pp.shmem_get_folio_gfp
5.07 +0.3 5.40 perf-profile.children.cycles-pp.shmem_fault
1.28 ± 2% +0.3 1.62 perf-profile.children.cycles-pp.xas_load
5.38 +0.3 5.72 perf-profile.children.cycles-pp.__do_fault
0.60 ± 2% +0.3 0.94 ± 2% perf-profile.children.cycles-pp.xas_descend
2.60 +0.4 2.95 perf-profile.children.cycles-pp.filemap_get_entry
12.68 +0.5 13.20 perf-profile.children.cycles-pp.do_fault
0.00 +0.7 0.74 perf-profile.children.cycles-pp.__pte_offset_map
26.66 +0.9 27.57 perf-profile.children.cycles-pp.exc_page_fault
26.22 +0.9 27.14 perf-profile.children.cycles-pp.do_user_addr_fault
17.18 +1.2 18.33 perf-profile.children.cycles-pp.handle_mm_fault
14.69 +1.2 15.84 perf-profile.children.cycles-pp.__handle_mm_fault
0.00 +1.6 1.59 perf-profile.children.cycles-pp.__pte_offset_map_lock
37.82 -0.6 37.20 perf-profile.self.cycles-pp.testcase
0.91 -0.3 0.59 perf-profile.self.cycles-pp.page_add_file_rmap
11.46 -0.2 11.27 perf-profile.self.cycles-pp.sync_regs
0.72 ± 3% -0.2 0.54 ± 3% perf-profile.self.cycles-pp.finish_fault
0.74 -0.2 0.56 perf-profile.self.cycles-pp.do_set_pte
10.28 -0.2 10.13 perf-profile.self.cycles-pp.native_irq_return_iret
0.48 ± 8% -0.1 0.37 ± 11% perf-profile.self.cycles-pp.up_read
0.40 -0.1 0.32 ± 2% perf-profile.self.cycles-pp.__mod_lruvec_page_state
2.10 -0.1 2.04 perf-profile.self.cycles-pp.___perf_sw_event
3.02 -0.1 2.96 perf-profile.self.cycles-pp.mtree_range_walk
0.20 ± 2% +0.0 0.22 ± 3% perf-profile.self.cycles-pp.access_error
0.35 ± 2% +0.0 0.37 ± 2% perf-profile.self.cycles-pp.handle_pte_fault
0.08 ± 11% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.p4d_offset
0.26 ± 5% +0.2 0.44 ± 4% perf-profile.self.cycles-pp.percpu_counter_add_batch
1.57 +0.3 1.87 ± 2% perf-profile.self.cycles-pp.__handle_mm_fault
0.49 ± 2% +0.4 0.84 ± 2% perf-profile.self.cycles-pp.xas_descend
0.00 +0.6 0.58 ± 2% perf-profile.self.cycles-pp.__pte_offset_map_lock
0.00 +0.6 0.60 perf-profile.self.cycles-pp.__pte_offset_map
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki