Date: 2022-04-19 18:23:21
From: kernel test robot

Subject: [mm/readahead] 793917d997: fio.read_iops -18.8% regression



Greetings,

FYI, we noticed a -18.8% regression of fio.read_iops due to commit:


commit: 793917d997df2e432f3e9ac126e4482d68256d01 ("mm/readahead: Add large folio readahead")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: fio-basic
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with following parameters:

disk: 2pmem
fs: xfs
runtime: 200s
nr_task: 50%
time_based: tb
rw: read
bs: 2M
ioengine: sync
test_size: 200G
cpufreq_governor: performance
ucode: 0x500320a

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
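
For reference, the parameters above map onto roughly the following standalone fio invocation (a sketch only: the target directory /mnt/pmem and numjobs=48, i.e. 50% of the 96 hardware threads, are assumptions here; the exact values live in the attached job.yaml):

# hypothetical equivalent of the fio-basic job; point --directory at your pmem mount
fio --name=seqread --directory=/mnt/pmem --ioengine=sync --rw=read --bs=2M \
    --size=200G --time_based --runtime=200s --numjobs=48 --group_reporting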

In addition to that, the commit also has a significant impact on the following tests:

+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 241.0% improvement                        |
| test machine     | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance                                                        |
|                  | runtime=300s                                                                        |
|                  | test=mmap-pread-seq                                                                 |
|                  | ucode=0x500320a                                                                     |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 64.8% improvement                         |
| test machine     | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance                                                        |
|                  | runtime=300s                                                                        |
|                  | test=mmap-pread-seq-mt                                                              |
|                  | ucode=0x500320a                                                                     |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 24.1% improvement                         |
| test machine     | 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory     |
| test parameters  | cpufreq_governor=performance                                                        |
|                  | runtime=300s                                                                        |
|                  | test=migrate                                                                        |
|                  | ucode=0x42e                                                                         |
+------------------+-------------------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 45.0% improvement                         |
| test machine     | 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance                                                        |
|                  | runtime=300s                                                                        |
|                  | test=lru-file-mmap-read                                                             |
|                  | ucode=0x500320a                                                                     |
+------------------+-------------------------------------------------------------------------------------+


If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file

# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
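
To compare the two kernels directly, a bisect-style workflow looks roughly like this (a sketch; the clone URL is the standard one for the cited torvalds tree, and the build/install/boot steps are left to your usual setup):

git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
git checkout 793917d997   # mm/readahead: Add large folio readahead (commit under test)
# build, install and boot this kernel, run the job above, then repeat with the parent
git checkout 18788cfa23   # mm: Support arbitrary THP sizes (baseline)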

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-11/performance/2pmem/xfs/sync/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/200s/read/lkp-csl-2sp7/200G/fio-basic/tb/0x500320a

commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")

18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.57 ±204% -0.6 0.00 ±223% fio.latency_1000us%
16.62 ± 22% +24.7 41.29 ± 3% fio.latency_20ms%
1.64 ±103% -1.6 0.08 ±112% fio.latency_2ms%
24.33 ± 30% -23.0 1.35 ± 17% fio.latency_4ms%
0.10 ± 45% +0.2 0.33 ± 17% fio.latency_50ms%
13189 ± 7% -18.8% 10710 fio.read_bw_MBps
11643562 ± 6% +33.0% 15488341 fio.read_clat_90%_us
14265002 ± 5% +13.3% 16165546 fio.read_clat_95%_us
6493088 ± 5% +37.0% 8897953 fio.read_clat_mean_us
3583181 ± 6% +16.5% 4173165 ± 2% fio.read_clat_stddev
6594 ± 7% -18.8% 5355 fio.read_iops
5.417e+09 ± 6% -19.0% 4.388e+09 fio.time.file_system_inputs
36028 ± 2% -35.5% 23253 fio.time.involuntary_context_switches
31.69 ± 9% +42.8% 45.26 ± 4% fio.time.user_time
21307 -5.2% 20198 fio.time.voluntary_context_switches
1322167 ± 6% -19.0% 1071118 fio.workload
451.73 -2.9% 438.55 pmeter.Average_Active_Power
13277785 ± 7% -18.6% 10808063 vmstat.io.bi
2158 -14.8% 1839 vmstat.system.cs
1.72 ± 5% +0.5 2.20 mpstat.cpu.all.irq%
0.22 ± 13% +0.1 0.33 ± 3% mpstat.cpu.all.soft%
0.27 ± 12% +0.2 0.46 ± 3% mpstat.cpu.all.usr%
341045 -67.6% 110363 meminfo.KReclaimable
64800 ± 8% +21.6% 78827 ± 7% meminfo.Mapped
341045 -67.6% 110363 meminfo.SReclaimable
555633 -40.9% 328399 meminfo.Slab
0.06 ± 6% -67.6% 0.02 turbostat.IPC
13881 ± 56% -60.3% 5510 ± 62% turbostat.POLL
66.00 -3.0% 64.00 turbostat.PkgTmp
276.27 -2.7% 268.67 turbostat.PkgWatt
3.121e+08 ± 16% -99.4% 1987161 ± 5% numa-numastat.node0.local_node
1.059e+08 ± 12% -98.6% 1510071 ± 4% numa-numastat.node0.numa_foreign
3.113e+08 ± 16% -99.3% 2032533 ± 4% numa-numastat.node0.numa_hit
2.627e+08 ± 3% -99.0% 2626722 ± 3% numa-numastat.node1.local_node
2.621e+08 ± 3% -99.0% 2668324 ± 2% numa-numastat.node1.numa_hit
1.059e+08 ± 12% -98.6% 1510159 ± 4% numa-numastat.node1.numa_miss
1.061e+08 ± 12% -98.5% 1551712 ± 2% numa-numastat.node1.other_node
163410 ± 3% -64.4% 58180 ± 38% numa-meminfo.node0.KReclaimable
163410 ± 3% -64.4% 58180 ± 38% numa-meminfo.node0.SReclaimable
272215 ± 6% -34.0% 179749 ± 13% numa-meminfo.node0.Slab
64115515 ± 4% +11.0% 71186782 numa-meminfo.node1.FilePages
177499 ± 3% -70.6% 52181 ± 44% numa-meminfo.node1.KReclaimable
65243028 ± 4% +10.8% 72267036 numa-meminfo.node1.MemUsed
177499 ± 3% -70.6% 52181 ± 44% numa-meminfo.node1.SReclaimable
283269 ± 6% -47.5% 148640 ± 18% numa-meminfo.node1.Slab
931356 ± 2% +17.3% 1092480 sched_debug.cpu.avg_idle.avg
1444016 ± 8% +19.3% 1722423 ± 3% sched_debug.cpu.avg_idle.max
339528 ± 20% +70.6% 579295 ± 16% sched_debug.cpu.avg_idle.min
187998 ± 9% +47.3% 276979 ± 4% sched_debug.cpu.avg_idle.stddev
8.03 ± 42% +204.5% 24.45 ± 25% sched_debug.cpu.clock.stddev
516781 +11.8% 577621 sched_debug.cpu.max_idle_balance_cost.avg
705916 ± 7% +21.6% 858128 ± 3% sched_debug.cpu.max_idle_balance_cost.max
38577 ± 31% +150.1% 96487 ± 9% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 17% +122.4% 0.00 ± 49% sched_debug.cpu.next_balance.stddev
40844 ± 3% -64.4% 14543 ± 38% numa-vmstat.node0.nr_slab_reclaimable
1.059e+08 ± 12% -98.6% 1510071 ± 4% numa-vmstat.node0.numa_foreign
3.113e+08 ± 16% -99.3% 2032392 ± 4% numa-vmstat.node0.numa_hit
3.121e+08 ± 16% -99.4% 1987020 ± 5% numa-vmstat.node0.numa_local
3202 ±101% -98.8% 38.17 ± 60% numa-vmstat.node0.workingset_nodes
16031245 ± 4% +10.9% 17773973 numa-vmstat.node1.nr_file_pages
44374 ± 3% -70.6% 13045 ± 44% numa-vmstat.node1.nr_slab_reclaimable
2.62e+08 ± 3% -99.0% 2668367 ± 2% numa-vmstat.node1.numa_hit
2.626e+08 ± 3% -99.0% 2626765 ± 3% numa-vmstat.node1.numa_local
1.059e+08 ± 12% -98.6% 1510159 ± 4% numa-vmstat.node1.numa_miss
1.061e+08 ± 12% -98.5% 1551712 ± 2% numa-vmstat.node1.numa_other
299.83 ± 34% -95.6% 13.33 ± 74% numa-vmstat.node1.workingset_nodes
318998 ±113% -99.4% 1970 ±141% proc-vmstat.compact_daemon_free_scanned
193995 ± 28% +430.3% 1028832 ± 27% proc-vmstat.compact_daemon_migrate_scanned
42975962 ± 42% -95.3% 2030090 ± 41% proc-vmstat.compact_free_scanned
7148833 ± 16% -98.6% 97725 ± 18% proc-vmstat.compact_isolated
26398894 ± 11% -30.4% 18367268 ± 31% proc-vmstat.compact_migrate_scanned
9977 ± 2% -4.6% 9519 ± 3% proc-vmstat.nr_active_anon
26107465 +4.9% 27388404 proc-vmstat.nr_file_pages
50836830 -2.5% 49578796 proc-vmstat.nr_free_pages
25502602 +5.0% 26782320 proc-vmstat.nr_inactive_file
16509 ± 8% +20.7% 19932 ± 6% proc-vmstat.nr_mapped
85277 -67.6% 27589 proc-vmstat.nr_slab_reclaimable
53644 +1.6% 54503 proc-vmstat.nr_slab_unreclaimable
9977 ± 2% -4.6% 9519 ± 3% proc-vmstat.nr_zone_active_anon
25502515 +5.0% 26782297 proc-vmstat.nr_zone_inactive_file
1.059e+08 ± 12% -98.6% 1510071 ± 4% proc-vmstat.numa_foreign
5.734e+08 ± 10% -99.2% 4702875 ± 2% proc-vmstat.numa_hit
5.748e+08 ± 10% -99.2% 4615902 ± 2% proc-vmstat.numa_local
1.059e+08 ± 12% -98.6% 1510159 ± 4% proc-vmstat.numa_miss
1.062e+08 ± 12% -98.5% 1597130 ± 3% proc-vmstat.numa_other
6.695e+08 ± 6% -18.7% 5.444e+08 proc-vmstat.pgalloc_normal
6.592e+08 ± 6% -21.1% 5.201e+08 ± 2% proc-vmstat.pgfree
3598703 ± 16% -97.8% 80835 ± 11% proc-vmstat.pgmigrate_success
2.708e+09 ± 6% -19.0% 2.194e+09 proc-vmstat.pgpgin
13829 ± 86% -100.0% 1.00 ±100% proc-vmstat.pgrotated
29010332 ± 58% -85.8% 4115913 ± 55% proc-vmstat.pgscan_file
29010332 ± 58% -85.8% 4114473 ± 55% proc-vmstat.pgscan_kswapd
28974103 ± 58% -85.8% 4114957 ± 55% proc-vmstat.pgsteal_file
28974103 ± 58% -85.8% 4113517 ± 55% proc-vmstat.pgsteal_kswapd
3588 ± 96% -98.6% 51.50 ± 58% proc-vmstat.workingset_nodes
28.72 ± 3% +109.8% 60.27 perf-stat.i.MPKI
5.903e+09 ± 4% -68.8% 1.841e+09 perf-stat.i.branch-instructions
0.24 ± 3% +0.1 0.36 perf-stat.i.branch-miss-rate%
14069166 ± 5% -54.5% 6402039 perf-stat.i.branch-misses
85.42 +5.1 90.52 perf-stat.i.cache-miss-rate%
7.328e+08 ± 7% -20.1% 5.852e+08 perf-stat.i.cache-misses
8.581e+08 ± 7% -24.9% 6.444e+08 perf-stat.i.cache-references
2034 -16.3% 1703 perf-stat.i.context-switches
5.10 ± 5% +188.5% 14.72 perf-stat.i.cpi
230.67 ± 7% +28.4% 296.26 perf-stat.i.cycles-between-cache-misses
0.02 ± 14% -0.0 0.00 ± 12% perf-stat.i.dTLB-load-miss-rate%
1298538 ± 17% -93.1% 90201 ± 11% perf-stat.i.dTLB-load-misses
6.904e+09 ± 4% -71.4% 1.976e+09 perf-stat.i.dTLB-loads
0.02 ± 12% -0.0 0.00 ± 20% perf-stat.i.dTLB-store-miss-rate%
1002153 ± 17% -95.1% 49149 ± 20% perf-stat.i.dTLB-store-misses
4.381e+09 ± 6% -61.1% 1.703e+09 perf-stat.i.dTLB-stores
55.38 ± 2% -11.7 43.67 perf-stat.i.iTLB-load-miss-rate%
2015561 ± 7% -42.1% 1166138 perf-stat.i.iTLB-load-misses
3.049e+10 ± 4% -65.3% 1.057e+10 perf-stat.i.instructions
15085 ± 3% -40.6% 8965 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.22 ± 4% -65.3% 0.08 perf-stat.i.ipc
14.43 +5.4% 15.21 perf-stat.i.major-faults
1115 ± 15% +37.6% 1534 perf-stat.i.metric.K/sec
190.01 ± 5% -65.5% 65.55 perf-stat.i.metric.M/sec
27.84 ± 14% +10.8 38.67 ± 2% perf-stat.i.node-load-miss-rate%
33175146 ± 4% +14.4% 37941465 ± 2% perf-stat.i.node-load-misses
25.76 ± 20% +12.6 38.33 ± 2% perf-stat.i.node-store-miss-rate%
35188134 ± 10% +35.5% 47664398 ± 2% perf-stat.i.node-store-misses
28.13 ± 3% +116.5% 60.90 perf-stat.overall.MPKI
0.24 ± 2% +0.1 0.35 perf-stat.overall.branch-miss-rate%
85.41 +5.4 90.82 perf-stat.overall.cache-miss-rate%
4.52 ± 4% +192.8% 13.24 perf-stat.overall.cpi
188.69 ± 8% +26.9% 239.44 perf-stat.overall.cycles-between-cache-misses
0.02 ± 14% -0.0 0.00 ± 11% perf-stat.overall.dTLB-load-miss-rate%
0.02 ± 13% -0.0 0.00 ± 21% perf-stat.overall.dTLB-store-miss-rate%
56.41 ± 2% -12.7 43.76 perf-stat.overall.iTLB-load-miss-rate%
15146 ± 3% -40.7% 8984 perf-stat.overall.instructions-per-iTLB-miss
0.22 ± 4% -65.9% 0.08 perf-stat.overall.ipc
23.53 ± 13% +7.7 31.24 ± 2% perf-stat.overall.node-load-miss-rate%
22.91 ± 19% +9.8 32.76 ± 2% perf-stat.overall.node-store-miss-rate%
4618032 ± 3% -57.6% 1956685 perf-stat.overall.path-length
5.845e+09 ± 4% -69.0% 1.811e+09 perf-stat.ps.branch-instructions
13926188 ± 5% -54.7% 6307104 perf-stat.ps.branch-misses
7.264e+08 ± 7% -20.8% 5.75e+08 perf-stat.ps.cache-misses
8.503e+08 ± 7% -25.5% 6.331e+08 perf-stat.ps.cache-references
2016 -16.4% 1685 perf-stat.ps.context-switches
1283289 ± 17% -93.1% 88877 ± 11% perf-stat.ps.dTLB-load-misses
6.836e+09 ± 4% -71.6% 1.944e+09 perf-stat.ps.dTLB-loads
989408 ± 17% -95.1% 48472 ± 20% perf-stat.ps.dTLB-store-misses
4.338e+09 ± 6% -61.4% 1.674e+09 perf-stat.ps.dTLB-stores
1997784 ± 7% -42.1% 1157224 perf-stat.ps.iTLB-load-misses
3.019e+10 ± 4% -65.6% 1.04e+10 perf-stat.ps.instructions
33161221 ± 4% +15.1% 38152076 ± 2% perf-stat.ps.node-load-misses
35138617 ± 10% +35.8% 47712507 ± 2% perf-stat.ps.node-store-misses
6.093e+12 ± 4% -65.6% 2.096e+12 perf-stat.total.instructions
40.24 ± 5% -40.2 0.00 perf-profile.calltrace.cycles-pp.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
26.04 ± 4% -26.0 0.00 perf-profile.calltrace.cycles-pp.read_pages.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read
26.03 ± 4% -26.0 0.00 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_get_pages.filemap_read
24.65 ± 5% -24.6 0.00 perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_get_pages
24.65 ± 5% -24.6 0.00 perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages.page_cache_ra_unbounded
8.82 ± 25% -8.8 0.00 perf-profile.calltrace.cycles-pp.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages.filemap_read.xfs_file_buffered_read
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe.posix_fadvise64
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.78 ± 37% -7.8 0.00 perf-profile.calltrace.cycles-pp.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64.do_syscall_64
6.88 ± 34% -6.9 0.00 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages.filemap_read
6.82 ± 34% -6.8 0.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_get_pages
6.09 ± 41% -6.1 0.00 perf-profile.calltrace.cycles-pp.__pagevec_release.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64.__x64_sys_fadvise64
6.08 ± 41% -6.1 0.00 perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.invalidate_mapping_pagevec.generic_fadvise.ksys_fadvise64_64
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_get_pages
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_get_pages.filemap_read
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.folio_alloc.page_cache_ra_order.filemap_get_pages.filemap_read.xfs_file_buffered_read
0.00 +1.0 0.98 perf-profile.calltrace.cycles-pp.page_cache_ra_order.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
64.14 ± 4% +7.3 71.44 perf-profile.calltrace.cycles-pp.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read.vfs_read
64.15 ± 4% +7.3 71.45 perf-profile.calltrace.cycles-pp.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
64.16 ± 4% +7.3 71.47 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
64.16 ± 4% +7.3 71.47 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.17 ± 4% +7.3 71.50 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.19 ± 4% +7.3 71.52 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.18 ± 4% +7.3 71.51 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
64.19 ± 4% +7.3 71.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
64.20 ± 4% +7.3 71.54 perf-profile.calltrace.cycles-pp.read
24.55 ± 3% +10.8 35.40 perf-profile.calltrace.cycles-pp.copy_mc_fragile.pmem_do_read.pmem_submit_bio.__submit_bio.__submit_bio_noacct
24.01 ± 5% +11.8 35.76 perf-profile.calltrace.cycles-pp.pmem_submit_bio.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages
23.92 ± 5% +11.8 35.70 perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_submit_bio.__submit_bio.__submit_bio_noacct.iomap_readahead
19.40 ± 4% +13.5 32.92 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.filemap_read.xfs_file_buffered_read
19.64 ± 4% +13.7 33.39 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
19.77 ± 4% +14.0 33.75 perf-profile.calltrace.cycles-pp.copy_page_to_iter.filemap_read.xfs_file_buffered_read.xfs_file_read_iter.new_sync_read
0.00 +36.0 35.98 perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_readahead.read_pages.filemap_get_pages
0.00 +36.0 35.99 perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_readahead.read_pages.filemap_get_pages.filemap_read
0.00 +36.3 36.34 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.filemap_get_pages.filemap_read.xfs_file_buffered_read
0.00 +36.4 36.38 perf-profile.calltrace.cycles-pp.read_pages.filemap_get_pages.filemap_read.xfs_file_buffered_read.xfs_file_read_iter
40.24 ± 5% -40.2 0.00 perf-profile.children.cycles-pp.page_cache_ra_unbounded
13.34 ± 17% -13.2 0.15 ± 13% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
9.26 ± 30% -9.3 0.00 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
9.34 ± 30% -9.1 0.22 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
8.83 ± 25% -8.5 0.37 ± 4% perf-profile.children.cycles-pp.filemap_add_folio
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.posix_fadvise64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.__x64_sys_fadvise64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.ksys_fadvise64_64
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.generic_fadvise
7.78 ± 37% -7.4 0.39 ± 7% perf-profile.children.cycles-pp.invalidate_mapping_pagevec
6.90 ± 34% -6.8 0.08 ± 8% perf-profile.children.cycles-pp.folio_add_lru
6.85 ± 34% -6.8 0.08 ± 8% perf-profile.children.cycles-pp.__pagevec_lru_add
6.20 ± 40% -6.0 0.18 ± 9% perf-profile.children.cycles-pp.release_pages
6.09 ± 41% -5.9 0.17 ± 10% perf-profile.children.cycles-pp.__pagevec_release
5.05 ± 14% -4.5 0.60 ± 2% perf-profile.children.cycles-pp.folio_alloc
5.01 ± 14% -4.4 0.60 ± 2% perf-profile.children.cycles-pp.__alloc_pages
4.88 ± 14% -4.3 0.56 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
4.74 ± 15% -4.2 0.51 ± 3% perf-profile.children.cycles-pp.rmqueue
1.93 ± 14% -1.6 0.29 ± 3% perf-profile.children.cycles-pp.__filemap_add_folio
1.07 ± 38% -0.9 0.19 ± 6% perf-profile.children.cycles-pp.iomap_readpage_iter
0.98 ± 24% -0.8 0.17 ± 7% perf-profile.children.cycles-pp.__mem_cgroup_charge
0.81 ± 7% -0.7 0.11 ± 5% perf-profile.children.cycles-pp.filemap_get_read_batch
0.75 ± 18% -0.7 0.08 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.69 ± 10% -0.6 0.04 ± 45% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.75 ± 41% -0.6 0.20 ± 10% perf-profile.children.cycles-pp.remove_mapping
0.74 ± 41% -0.5 0.20 ± 10% perf-profile.children.cycles-pp.__remove_mapping
0.64 ± 33% -0.5 0.12 ± 7% perf-profile.children.cycles-pp.charge_memcg
0.58 ± 20% -0.5 0.06 ± 7% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.58 ± 5% -0.5 0.07 ± 6% perf-profile.children.cycles-pp.iomap_read_end_io
0.58 ± 11% -0.5 0.08 ± 4% perf-profile.children.cycles-pp.xas_load
0.44 ± 48% -0.4 0.09 ± 10% perf-profile.children.cycles-pp.try_charge_memcg
0.32 ± 52% -0.2 0.08 ± 12% perf-profile.children.cycles-pp.page_counter_try_charge
0.23 ± 20% -0.2 0.06 ± 7% perf-profile.children.cycles-pp.__mod_lruvec_state
0.20 ± 19% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.__mod_node_page_state
0.14 ± 6% -0.1 0.06 ± 8% perf-profile.children.cycles-pp.kmem_cache_alloc
0.05 ± 74% +0.0 0.10 ± 11% perf-profile.children.cycles-pp.generic_file_write_iter
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.__x64_sys_execve
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.do_execveat_common
0.01 ±223% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.execve
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.mempool_alloc
0.05 ± 76% +0.1 0.11 ± 13% perf-profile.children.cycles-pp.new_sync_write
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.update_sd_lb_stats
0.05 ± 76% +0.1 0.11 ± 12% perf-profile.children.cycles-pp.vfs_write
0.05 ± 76% +0.1 0.11 ± 11% perf-profile.children.cycles-pp.ksys_write
0.02 ±141% +0.1 0.08 ± 9% perf-profile.children.cycles-pp.exc_page_fault
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.bio_free
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.schedule
0.06 ± 76% +0.1 0.12 ± 11% perf-profile.children.cycles-pp.write
0.06 ± 11% +0.1 0.12 ± 46% perf-profile.children.cycles-pp.__might_resched
0.02 ±141% +0.1 0.08 ± 7% perf-profile.children.cycles-pp.asm_exc_page_fault
0.01 ±223% +0.1 0.07 ± 9% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.bio_alloc_bioset
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__might_fault
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.__handle_mm_fault
0.01 ±223% +0.1 0.08 ± 9% perf-profile.children.cycles-pp.do_user_addr_fault
0.00 +0.1 0.07 ± 8% perf-profile.children.cycles-pp.__kmalloc
0.03 ± 70% +0.1 0.10 ± 10% perf-profile.children.cycles-pp.iomap_iter
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.__free_pages_ok
0.00 +0.1 0.08 ± 9% perf-profile.children.cycles-pp.iomap_page_create
0.00 +0.1 0.08 ± 13% perf-profile.children.cycles-pp.__schedule
0.01 ±223% +0.1 0.09 ± 11% perf-profile.children.cycles-pp.xfs_read_iomap_begin
0.00 +0.2 0.16 ± 11% perf-profile.children.cycles-pp.__cond_resched
0.00 +0.2 0.18 ± 9% perf-profile.children.cycles-pp.folio_mapped
0.00 +1.0 0.98 perf-profile.children.cycles-pp.page_cache_ra_order
64.15 ± 4% +7.3 71.45 perf-profile.children.cycles-pp.xfs_file_buffered_read
64.14 ± 4% +7.3 71.44 perf-profile.children.cycles-pp.filemap_read
64.16 ± 4% +7.3 71.47 perf-profile.children.cycles-pp.xfs_file_read_iter
64.18 ± 4% +7.3 71.50 perf-profile.children.cycles-pp.new_sync_read
64.20 ± 4% +7.3 71.54 perf-profile.children.cycles-pp.vfs_read
64.21 ± 4% +7.4 71.56 perf-profile.children.cycles-pp.ksys_read
64.23 ± 4% +7.4 71.60 perf-profile.children.cycles-pp.read
26.03 ± 4% +10.3 36.34 perf-profile.children.cycles-pp.iomap_readahead
26.04 ± 4% +10.3 36.39 perf-profile.children.cycles-pp.read_pages
25.55 ± 4% +10.4 35.98 perf-profile.children.cycles-pp.__submit_bio
25.55 ± 4% +10.4 35.99 perf-profile.children.cycles-pp.__submit_bio_noacct
24.68 ± 4% +10.9 35.54 perf-profile.children.cycles-pp.copy_mc_fragile
24.89 ± 4% +10.9 35.77 perf-profile.children.cycles-pp.pmem_submit_bio
24.80 ± 4% +10.9 35.70 perf-profile.children.cycles-pp.pmem_do_read
19.64 ± 4% +13.8 33.39 perf-profile.children.cycles-pp.copyout
19.64 ± 4% +13.8 33.42 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
19.78 ± 4% +14.0 33.77 perf-profile.children.cycles-pp.copy_page_to_iter
13.34 ± 17% -13.2 0.15 ± 13% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.74 ± 18% -0.7 0.08 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.52 ± 11% -0.5 0.06 ± 6% perf-profile.self.cycles-pp.xas_load
0.36 ± 3% -0.3 0.10 ± 6% perf-profile.self.cycles-pp.filemap_read
0.25 ± 52% -0.2 0.06 ± 11% perf-profile.self.cycles-pp.page_counter_try_charge
0.20 ± 19% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.__mod_node_page_state
0.17 ± 30% -0.1 0.07 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.11 ± 6% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.pmem_do_read
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.kmem_cache_free
0.00 +0.1 0.13 ± 10% perf-profile.self.cycles-pp.__cond_resched
0.14 ± 6% +0.2 0.31 ± 2% perf-profile.self.cycles-pp.rmqueue
0.00 +0.2 0.18 ± 9% perf-profile.self.cycles-pp.folio_mapped
24.40 ± 4% +10.8 35.18 perf-profile.self.cycles-pp.copy_mc_fragile
19.38 ± 4% +13.7 33.04 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string


***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-11/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/mmap-pread-seq/vm-scalability/0x500320a
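
The vm-scalability cases in this and the following sections can also be run outside of lkp (a sketch; the clone URL and the ./run entry point follow the suite's usual layout and are assumptions here):

git clone https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git
cd vm-scalability
./run case-mmap-pread-seq   # the mmap-pread-seq case referenced above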

commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")

18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
46.87 ± 10% -97.9% 0.96 ± 4% vm-scalability.free_time
191595 ± 2% +241.0% 653394 ± 13% vm-scalability.median
0.19 ± 20% +1.2 1.37 ± 20% vm-scalability.stddev%
36786240 ± 2% +241.0% 1.255e+08 ± 13% vm-scalability.throughput
180.40 -77.2% 41.08 ± 12% vm-scalability.time.elapsed_time
180.40 -77.2% 41.08 ± 12% vm-scalability.time.elapsed_time.max
84631 -66.5% 28382 ± 21% vm-scalability.time.involuntary_context_switches
31087438 ± 5% -46.7% 16568180 ± 28% vm-scalability.time.major_page_faults
1.614e+08 -46.5% 86356129 ± 7% vm-scalability.time.minor_page_faults
18216 -3.2% 17637 vm-scalability.time.percent_of_cpu_this_job_got
28597 -86.4% 3901 ± 19% vm-scalability.time.system_time
4268 -21.8% 3336 ± 2% vm-scalability.time.user_time
46509474 ± 8% -45.0% 25597276 ± 33% vm-scalability.time.voluntary_context_switches
227.97 -60.7% 89.66 ± 3% uptime.boot
1.48e+09 ± 5% -45.4% 8.087e+08 ± 9% cpuidle..time
44196685 ± 8% -40.6% 26236295 ± 32% cpuidle..usage
4036916 ± 40% -82.9% 691729 ± 7% numa-numastat.node2.local_node
4117121 ± 39% -81.4% 766207 ± 7% numa-numastat.node2.numa_hit
1861867 ± 84% -62.8% 692093 ± 6% numa-numastat.node3.local_node
1942475 ± 80% -59.9% 778947 ± 5% numa-numastat.node3.numa_hit
3.67 ± 4% +4.2 7.85 ± 4% mpstat.cpu.all.idle%
0.88 ± 17% +1.0 1.90 ± 35% mpstat.cpu.all.iowait%
0.88 +0.2 1.06 mpstat.cpu.all.irq%
0.14 ± 10% +0.1 0.21 ± 11% mpstat.cpu.all.soft%
82.08 -34.6 47.48 ± 8% mpstat.cpu.all.sys%
12.35 +29.2 41.51 ± 10% mpstat.cpu.all.usr%
4.00 +175.0% 11.00 ± 7% vmstat.cpu.id
81.67 -44.1% 45.67 ± 8% vmstat.cpu.sy
12.00 +227.8% 39.33 ± 10% vmstat.cpu.us
16429426 -22.0% 12818057 vmstat.memory.cache
182.33 -10.4% 163.33 vmstat.procs.r
509931 ± 9% +119.5% 1119546 ± 25% vmstat.system.cs
2966302 ± 8% -89.8% 303438 ± 23% meminfo.Active
20863 ± 33% -74.0% 5428 meminfo.Active(anon)
2945438 ± 8% -89.9% 298009 ± 24% meminfo.Active(file)
180004 ± 2% -49.7% 90484 ± 5% meminfo.AnonHugePages
16278300 -20.6% 12927130 meminfo.Cached
24189299 -18.6% 19701239 meminfo.Memused
5160427 -23.6% 3940730 meminfo.PageTables
153951 ± 2% -26.3% 113466 meminfo.Shmem
24513374 -14.7% 20907068 meminfo.max_used_kB
2755746 ± 8% -72.3% 762359 ± 59% turbostat.C1
4033941 ± 8% -58.5% 1674613 ± 32% turbostat.C1E
2.54 ± 7% +2.8 5.35 ± 4% turbostat.C1E%
0.22 ± 40% +1.3 1.53 ± 95% turbostat.C6%
2.82 ± 4% +139.6% 6.76 ± 17% turbostat.CPU%c1
0.07 ± 11% +304.8% 0.28 ± 3% turbostat.CPU%c6
0.05 +9800.0% 4.95 ±136% turbostat.IPC
79897140 -76.2% 19045918 ± 13% turbostat.IRQ
37324544 ± 8% -36.7% 23632422 ± 33% turbostat.POLL
1.37 ± 9% +1.4 2.78 ± 31% turbostat.POLL%
51.33 +2.6% 52.67 turbostat.PkgTmp
254.83 +22.1% 311.09 ± 2% turbostat.PkgWatt
137924 ± 9% -49.1% 70157 ± 20% numa-meminfo.node0.AnonHugePages
355569 ± 8% +649.5% 2665047 ±108% numa-meminfo.node0.Inactive
1305569 ± 2% -23.2% 1003009 numa-meminfo.node0.PageTables
1287418 ± 2% -21.9% 1005960 numa-meminfo.node1.PageTables
1758487 ± 54% -92.5% 132350 ± 85% numa-meminfo.node2.Active
1757440 ± 54% -92.5% 131820 ± 86% numa-meminfo.node2.Active(file)
27560 ± 45% +38.2% 38100 ± 13% numa-meminfo.node2.AnonPages
8204369 ± 46% -64.0% 2956327 ± 76% numa-meminfo.node2.FilePages
37157 ± 20% -38.1% 22997 ± 40% numa-meminfo.node2.KReclaimable
10086474 ± 38% -54.6% 4583950 ± 48% numa-meminfo.node2.MemUsed
1265170 -21.8% 988848 numa-meminfo.node2.PageTables
37157 ± 20% -38.1% 22997 ± 40% numa-meminfo.node2.SReclaimable
78887 ±141% -100.0% 12.00 ±118% numa-meminfo.node2.Unevictable
14660 ± 37% -99.2% 124.33 ± 24% numa-meminfo.node3.Active(anon)
28679 ±119% -95.6% 1269 ± 5% numa-meminfo.node3.AnonPages
76530 ± 69% -96.5% 2646 ± 18% numa-meminfo.node3.AnonPages.max
51139 ± 71% -89.1% 5573 ± 83% numa-meminfo.node3.Inactive(anon)
1286708 ± 2% -22.9% 991696 numa-meminfo.node3.PageTables
37154 ± 10% -91.5% 3148 ±129% numa-meminfo.node3.Shmem
323291 -23.7% 246709 numa-vmstat.node0.nr_page_table_pages
318784 ± 2% -22.3% 247722 ± 3% numa-vmstat.node1.nr_page_table_pages
427728 ± 53% -92.1% 33763 ±101% numa-vmstat.node2.nr_active_file
6907 ± 45% +37.8% 9516 ± 13% numa-vmstat.node2.nr_anon_pages
313337 -22.3% 243536 ± 2% numa-vmstat.node2.nr_page_table_pages
9253 ± 20% -37.7% 5769 ± 40% numa-vmstat.node2.nr_slab_reclaimable
19721 ±141% -100.0% 3.00 ±118% numa-vmstat.node2.nr_unevictable
427713 ± 53% -92.1% 33773 ±101% numa-vmstat.node2.nr_zone_active_file
19721 ±141% -100.0% 3.00 ±118% numa-vmstat.node2.nr_zone_unevictable
4117235 ± 39% -81.4% 765901 ± 7% numa-vmstat.node2.numa_hit
4037029 ± 40% -82.9% 691422 ± 7% numa-vmstat.node2.numa_local
3813 ± 37% -99.2% 30.67 ± 24% numa-vmstat.node3.nr_active_anon
7118 ±119% -95.5% 319.67 ± 4% numa-vmstat.node3.nr_anon_pages
12533 ± 71% -88.9% 1394 ± 82% numa-vmstat.node3.nr_inactive_anon
318650 ± 2% -23.4% 244204 ± 2% numa-vmstat.node3.nr_page_table_pages
9236 ± 11% -91.5% 786.67 ±129% numa-vmstat.node3.nr_shmem
3813 ± 37% -99.2% 30.67 ± 24% numa-vmstat.node3.nr_zone_active_anon
12533 ± 71% -88.9% 1394 ± 82% numa-vmstat.node3.nr_zone_inactive_anon
1942945 ± 80% -59.9% 779066 ± 5% numa-vmstat.node3.numa_hit
1862337 ± 84% -62.8% 692212 ± 6% numa-vmstat.node3.numa_local
5229 ± 31% -74.1% 1356 proc-vmstat.nr_active_anon
734568 ± 8% -86.7% 98029 ± 20% proc-vmstat.nr_active_file
78115 -2.6% 76120 proc-vmstat.nr_anon_pages
4067457 -19.8% 3262058 proc-vmstat.nr_file_pages
43384997 +2.5% 44462783 proc-vmstat.nr_free_pages
111285 -7.1% 103410 proc-vmstat.nr_inactive_anon
2728035 -5.8% 2569881 proc-vmstat.nr_inactive_file
2741812 -4.6% 2616807 proc-vmstat.nr_mapped
1288800 ± 2% -22.7% 996178 proc-vmstat.nr_page_table_pages
38433 ± 2% -26.2% 28368 proc-vmstat.nr_shmem
41758 -6.1% 39207 proc-vmstat.nr_slab_reclaimable
79625 -2.7% 77448 proc-vmstat.nr_slab_unreclaimable
5229 ± 31% -74.1% 1356 proc-vmstat.nr_zone_active_anon
734568 ± 8% -86.7% 98029 ± 20% proc-vmstat.nr_zone_active_file
111285 -7.1% 103410 proc-vmstat.nr_zone_inactive_anon
2728035 -5.8% 2569881 proc-vmstat.nr_zone_inactive_file
29202 ± 11% -29.6% 20544 ± 20% proc-vmstat.numa_hint_faults
8906395 -63.8% 3221607 ± 2% proc-vmstat.numa_hit
8647246 -65.8% 2961022 ± 2% proc-vmstat.numa_local
33133 ± 4% -79.5% 6790 ± 29% proc-vmstat.numa_pages_migrated
115948 ± 9% -56.1% 50895 ± 29% proc-vmstat.numa_pte_updates
8910054 -1.5% 8776609 proc-vmstat.pgalloc_normal
2.246e+08 -46.6% 1.199e+08 ± 13% proc-vmstat.pgfault
8615822 -1.7% 8473592 proc-vmstat.pgfree
1799 ± 10% -99.8% 4.00 ± 35% proc-vmstat.pgmajfault
33133 ± 4% -79.5% 6790 ± 29% proc-vmstat.pgmigrate_success
2108 -1.3% 2081 proc-vmstat.pgpgout
53952 -63.2% 19870 ± 4% proc-vmstat.pgreuse
68595 ± 40% -99.9% 67.08 ±141% sched_debug.cfs_rq:/.MIN_vruntime.avg
5806823 ± 32% -99.9% 7023 ±141% sched_debug.cfs_rq:/.MIN_vruntime.max
620191 ± 35% -99.9% 655.14 ±141% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.65 ± 15% -89.8% 0.07 ± 9% sched_debug.cfs_rq:/.h_nr_running.avg
1.64 ± 6% -39.0% 1.00 sched_debug.cfs_rq:/.h_nr_running.max
0.22 ± 5% +12.5% 0.25 ± 4% sched_debug.cfs_rq:/.h_nr_running.stddev
5552 ± 12% -69.5% 1695 ± 5% sched_debug.cfs_rq:/.load.avg
13.57 ± 19% +34.1% 18.20 ± 2% sched_debug.cfs_rq:/.load_avg.avg
66.62 ± 42% +62.5% 108.26 ± 2% sched_debug.cfs_rq:/.load_avg.stddev
68595 ± 40% -99.9% 67.08 ±141% sched_debug.cfs_rq:/.max_vruntime.avg
5806823 ± 32% -99.9% 7023 ±141% sched_debug.cfs_rq:/.max_vruntime.max
620191 ± 35% -99.9% 655.14 ±141% sched_debug.cfs_rq:/.max_vruntime.stddev
13104524 ± 19% -99.9% 15496 ± 30% sched_debug.cfs_rq:/.min_vruntime.avg
13419952 ± 18% -99.7% 38089 ± 18% sched_debug.cfs_rq:/.min_vruntime.max
11781278 ± 16% -100.0% 4704 ± 36% sched_debug.cfs_rq:/.min_vruntime.min
239246 ± 22% -97.9% 5113 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
0.64 ± 15% -89.6% 0.07 ± 9% sched_debug.cfs_rq:/.nr_running.avg
0.20 ± 11% +26.5% 0.25 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
3.29 ± 17% +58.9% 5.23 ± 3% sched_debug.cfs_rq:/.removed.load_avg.avg
363.44 ± 29% +175.1% 1000 ± 3% sched_debug.cfs_rq:/.removed.load_avg.max
33.82 ± 22% +113.2% 72.10 ± 3% sched_debug.cfs_rq:/.removed.load_avg.stddev
180.33 ± 24% +79.1% 323.00 ± 36% sched_debug.cfs_rq:/.removed.runnable_avg.max
13.81 ± 30% +68.6% 23.29 ± 36% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
180.33 ± 24% +79.1% 323.00 ± 36% sched_debug.cfs_rq:/.removed.util_avg.max
13.81 ± 30% +68.6% 23.29 ± 36% sched_debug.cfs_rq:/.removed.util_avg.stddev
603.68 ± 16% -78.0% 132.63 sched_debug.cfs_rq:/.runnable_avg.avg
66.03 ± 51% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_avg.min
165.10 ± 13% +65.5% 273.26 ± 2% sched_debug.cfs_rq:/.runnable_avg.stddev
-173513 -93.7% -10949 sched_debug.cfs_rq:/.spread0.avg
145970 ± 52% -92.0% 11642 ± 76% sched_debug.cfs_rq:/.spread0.max
-1500915 -98.6% -21741 sched_debug.cfs_rq:/.spread0.min
240582 ± 22% -97.9% 5113 ± 11% sched_debug.cfs_rq:/.spread0.stddev
597.31 ± 16% -77.8% 132.59 sched_debug.cfs_rq:/.util_avg.avg
65.36 ± 51% -100.0% 0.00 sched_debug.cfs_rq:/.util_avg.min
159.11 ± 15% +71.7% 273.19 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
527.10 ± 16% -97.8% 11.48 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.avg
1208 ± 13% -31.5% 828.00 sched_debug.cfs_rq:/.util_est_enqueued.max
144.31 ± 11% -45.9% 78.03 sched_debug.cfs_rq:/.util_est_enqueued.stddev
339313 ± 25% +142.8% 823971 sched_debug.cpu.avg_idle.avg
935254 ± 7% +21.5% 1136425 ± 8% sched_debug.cpu.avg_idle.max
22494 ± 99% -93.8% 1391 ± 20% sched_debug.cpu.avg_idle.min
146018 ± 28% +77.2% 258706 ± 2% sched_debug.cpu.avg_idle.stddev
126960 ± 13% -62.4% 47674 ± 2% sched_debug.cpu.clock.avg
127006 ± 13% -62.5% 47684 ± 2% sched_debug.cpu.clock.max
126910 ± 13% -62.4% 47665 ± 2% sched_debug.cpu.clock.min
26.99 ± 13% -77.9% 5.98 ± 2% sched_debug.cpu.clock.stddev
126018 ± 13% -62.3% 47544 ± 2% sched_debug.cpu.clock_task.avg
126163 ± 13% -62.2% 47659 ± 2% sched_debug.cpu.clock_task.max
118809 ± 11% -67.8% 38275 ± 2% sched_debug.cpu.clock_task.min
4965 ± 16% -93.4% 327.73 ± 17% sched_debug.cpu.curr->pid.avg
8887 ± 2% -40.1% 5324 sched_debug.cpu.curr->pid.max
0.00 ± 14% -73.5% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.64 ± 15% -89.5% 0.07 ± 16% sched_debug.cpu.nr_running.avg
1.64 ± 6% -39.0% 1.00 sched_debug.cpu.nr_running.max
262669 ± 6% -99.1% 2383 ± 2% sched_debug.cpu.nr_switches.avg
339518 ± 7% -94.9% 17226 ± 23% sched_debug.cpu.nr_switches.max
133856 ± 14% -99.5% 608.33 ± 23% sched_debug.cpu.nr_switches.min
73154 ± 15% -96.4% 2659 ± 10% sched_debug.cpu.nr_switches.stddev
2.33e+09 +12.3% 2.617e+09 ± 5% sched_debug.cpu.nr_uninterruptible.avg
126908 ± 13% -62.4% 47668 ± 2% sched_debug.cpu_clk
125894 ± 13% -62.9% 46652 ± 2% sched_debug.ktime
129346 ± 10% -62.8% 48145 ± 2% sched_debug.sched_clk
12.39 ± 8% -57.1% 5.31 perf-stat.i.MPKI
3.154e+10 ± 2% +307.2% 1.284e+11 ± 11% perf-stat.i.branch-instructions
0.16 ± 4% -0.1 0.05 ± 14% perf-stat.i.branch-miss-rate%
39015811 ± 4% +46.0% 56949389 ± 11% perf-stat.i.branch-misses
27.47 -15.6 11.85 ± 9% perf-stat.i.cache-miss-rate%
1.869e+08 +27.5% 2.382e+08 ± 2% perf-stat.i.cache-misses
6.927e+08 +197.5% 2.06e+09 ± 7% perf-stat.i.cache-references
532481 ± 9% +129.5% 1221898 ± 26% perf-stat.i.context-switches
11.00 ± 10% -86.7% 1.46 ± 11% perf-stat.i.cpi
5.805e+11 -1.8% 5.703e+11 perf-stat.i.cpu-cycles
1227 ± 11% +159.7% 3188 ± 26% perf-stat.i.cpu-migrations
3324 -27.0% 2426 ± 2% perf-stat.i.cycles-between-cache-misses
0.02 ± 2% -0.0 0.01 ± 7% perf-stat.i.dTLB-load-miss-rate%
6130936 +109.1% 12821616 ± 2% perf-stat.i.dTLB-load-misses
2.569e+10 ± 2% +299.8% 1.027e+11 ± 11% perf-stat.i.dTLB-loads
0.02 ± 3% -0.0 0.02 ± 7% perf-stat.i.dTLB-store-miss-rate%
1267107 ± 3% +135.4% 2983372 ± 3% perf-stat.i.dTLB-store-misses
4.635e+09 ± 2% +270.6% 1.718e+10 ± 10% perf-stat.i.dTLB-stores
6815222 ± 3% +53.9% 10485254 ± 16% perf-stat.i.iTLB-load-misses
9.9e+10 ± 2% +299.0% 3.95e+11 ± 10% perf-stat.i.instructions
13347 +197.8% 39746 ± 28% perf-stat.i.instructions-per-iTLB-miss
0.17 ± 2% +303.5% 0.70 ± 11% perf-stat.i.ipc
177593 ± 6% +123.8% 397415 ± 20% perf-stat.i.major-faults
3.02 -1.7% 2.97 perf-stat.i.metric.GHz
325.59 ± 2% +300.2% 1303 ± 11% perf-stat.i.metric.M/sec
926874 ± 2% +129.5% 2126907 ± 3% perf-stat.i.minor-faults
95.92 -1.3 94.58 perf-stat.i.node-load-miss-rate%
1082968 ± 3% +42.9% 1547522 ± 4% perf-stat.i.node-loads
98.43 -1.8 96.63 perf-stat.i.node-store-miss-rate%
135314 ± 21% +108.5% 282144 ± 23% perf-stat.i.node-stores
1104467 ± 3% +128.6% 2524323 ± 3% perf-stat.i.page-faults
7.13 -26.5% 5.24 ± 3% perf-stat.overall.MPKI
0.12 ± 3% -0.1 0.05 ± 20% perf-stat.overall.branch-miss-rate%
27.42 -15.8 11.66 ± 9% perf-stat.overall.cache-miss-rate%
6.00 -75.6% 1.46 ± 11% perf-stat.overall.cpi
3065 -22.0% 2389 ± 2% perf-stat.overall.cycles-between-cache-misses
0.02 -0.0 0.01 ± 8% perf-stat.overall.dTLB-load-miss-rate%
0.03 -0.0 0.02 ± 7% perf-stat.overall.dTLB-store-miss-rate%
14472 +172.7% 39462 ± 28% perf-stat.overall.instructions-per-iTLB-miss
0.17 +316.0% 0.69 ± 12% perf-stat.overall.ipc
98.95 -1.7 97.30 perf-stat.overall.node-store-miss-rate%
3605 -8.3% 3305 perf-stat.overall.path-length
3.058e+10 +308.8% 1.25e+11 ± 10% perf-stat.ps.branch-instructions
37933954 ± 4% +46.3% 55497860 ± 11% perf-stat.ps.branch-misses
1.88e+08 +23.7% 2.325e+08 ± 2% perf-stat.ps.cache-misses
6.856e+08 +192.8% 2.008e+09 ± 7% perf-stat.ps.cache-references
514435 ± 8% +131.2% 1189347 ± 26% perf-stat.ps.context-switches
190639 -1.9% 187097 perf-stat.ps.cpu-clock
5.764e+11 -3.7% 5.553e+11 perf-stat.ps.cpu-cycles
1188 ± 11% +162.8% 3124 ± 26% perf-stat.ps.cpu-migrations
6014890 +107.4% 12475043 ± 2% perf-stat.ps.dTLB-load-misses
2.494e+10 +300.9% 9.998e+10 ± 10% perf-stat.ps.dTLB-loads
1226138 +136.8% 2902969 ± 2% perf-stat.ps.dTLB-store-misses
4.503e+09 +271.6% 1.673e+10 ± 9% perf-stat.ps.dTLB-stores
6644280 ± 2% +53.6% 10202828 ± 16% perf-stat.ps.iTLB-load-misses
9.61e+10 +300.1% 3.845e+11 ± 10% perf-stat.ps.instructions
171562 ± 6% +125.4% 386762 ± 21% perf-stat.ps.major-faults
895259 +131.2% 2069711 ± 3% perf-stat.ps.minor-faults
1077758 ± 2% +39.9% 1508290 ± 4% perf-stat.ps.node-loads
12361604 ± 3% -15.9% 10395011 ± 15% perf-stat.ps.node-store-misses
131225 ± 22% +109.1% 274420 ± 23% perf-stat.ps.node-stores
1066822 ± 2% +130.3% 2456474 ± 2% perf-stat.ps.page-faults
190639 -1.9% 187097 perf-stat.ps.task-clock
1.742e+13 -8.3% 1.597e+13 perf-stat.total.instructions
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
99.45 -99.4 0.00 perf-profile.calltrace.cycles-pp.munmap
99.44 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
99.44 -99.4 0.00 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
99.44 -99.2 0.20 ±141% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
99.42 -99.2 0.20 ±141% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
80.45 -80.4 0.00 perf-profile.calltrace.cycles-pp.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
50.41 ± 2% -50.4 0.00 perf-profile.calltrace.cycles-pp.workingset_activation.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range
49.47 ± 2% -49.5 0.00 perf-profile.calltrace.cycles-pp.workingset_age_nonresident.workingset_activation.folio_mark_accessed.zap_pte_range.zap_pmd_range
25.91 ± 3% -25.9 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range.zap_pmd_range.unmap_page_range
24.60 ± 3% -24.6 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range.zap_pmd_range
24.59 ± 3% -24.6 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed.zap_pte_range
24.53 ± 3% -24.5 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.pagevec_lru_move_fn.folio_mark_accessed
8.21 ± 4% -8.2 0.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
6.14 ± 4% -6.1 0.00 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range
5.40 ± 4% -5.4 0.00 perf-profile.calltrace.cycles-pp.page_remove_rmap.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.__close
0.00 +0.7 0.69 ± 13% perf-profile.calltrace.cycles-pp.task_state.proc_pid_status.proc_single_show.seq_read_iter.seq_read
0.00 +0.9 0.89 ± 25% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.9 0.89 ± 25% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp.io_serial_out.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp._free_event.perf_event_release_kernel.perf_release.__fput.task_work_run
0.00 +0.9 0.92 ± 33% perf-profile.calltrace.cycles-pp.fork
0.00 +1.0 1.01 ± 15% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +1.0 1.01 ± 15% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +1.1 1.08 ± 39% perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +1.1 1.09 ± 46% perf-profile.calltrace.cycles-pp.__close
0.00 +1.2 1.17 ± 34% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
0.00 +1.2 1.21 ± 35% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.00 +1.2 1.24 ± 69% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.sys_imageblit.drm_fbdev_fb_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.con_scroll.lf.vt_console_print.call_console_drivers.console_unlock
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.vt_console_print.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.lf.vt_console_print.call_console_drivers.console_unlock.vprintk_emit
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_scroll.con_scroll.lf.vt_console_print.call_console_drivers
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_redraw.fbcon_scroll.con_scroll.lf.vt_console_print
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll.lf
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll
0.00 +1.3 1.25 ± 4% perf-profile.calltrace.cycles-pp.drm_fbdev_fb_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll
0.00 +1.3 1.27 ± 44% perf-profile.calltrace.cycles-pp.seq_read_iter.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read
0.00 +1.3 1.27 ± 44% perf-profile.calltrace.cycles-pp.proc_reg_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
0.00 +1.3 1.28 ± 20% perf-profile.calltrace.cycles-pp.search_binary_handler.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve
0.00 +1.3 1.28 ± 20% perf-profile.calltrace.cycles-pp.exec_binprm.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64
0.00 +1.4 1.38 ± 76% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.00 +1.5 1.50 ± 14% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +1.5 1.50 ± 14% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +1.6 1.55 ± 41% perf-profile.calltrace.cycles-pp.bprm_execve.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.6 1.64 ± 45% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.68 ± 19% perf-profile.calltrace.cycles-pp.fnmatch
0.00 +1.7 1.68 ± 46% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.00 +1.7 1.69 ± 49% perf-profile.calltrace.cycles-pp.execve
0.00 +1.7 1.71 ± 65% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +1.7 1.71 ± 65% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +1.9 1.88 ± 26% perf-profile.calltrace.cycles-pp.proc_pid_status.proc_single_show.seq_read_iter.seq_read.vfs_read
0.00 +1.9 1.88 ± 26% perf-profile.calltrace.cycles-pp.proc_single_show.seq_read_iter.seq_read.vfs_read.ksys_read
0.00 +2.1 2.10 ± 29% perf-profile.calltrace.cycles-pp.seq_read_iter.seq_read.vfs_read.ksys_read.do_syscall_64
0.00 +2.1 2.10 ± 29% perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.2 2.23 ± 46% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_event_release_kernel.perf_release.__fput
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.update_sg_lb_stats.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
0.00 +2.3 2.30 ± 57% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
0.00 +2.3 2.33 ±103% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.4 2.43 ± 40% perf-profile.calltrace.cycles-pp.event_function_call.perf_event_release_kernel.perf_release.__fput.task_work_run
0.00 +2.6 2.60 ± 76% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +2.6 2.60 ± 76% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.7 2.67 ± 16% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.00 +2.7 2.67 ± 16% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.00 +2.7 2.69 ± 59% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +2.7 2.69 ± 59% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe._dl_catch_error
0.00 +2.9 2.93 ± 57% perf-profile.calltrace.cycles-pp._dl_catch_error
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.generic_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +3.7 3.66 ± 49% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
0.00 +3.7 3.67 ± 56% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +3.7 3.67 ± 56% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
0.00 +3.7 3.68 ± 27% perf-profile.calltrace.cycles-pp.perf_release.__fput.task_work_run.do_exit.do_group_exit
0.00 +3.7 3.68 ± 27% perf-profile.calltrace.cycles-pp.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +3.8 3.78 ± 17% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +4.2 4.22 ± 34% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart
0.00 +4.2 4.22 ± 34% perf-profile.calltrace.cycles-pp.__fput.task_work_run.do_exit.do_group_exit.get_signal
0.00 +4.6 4.58 ± 34% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop.exit_to_user_mode_prepare
0.00 +4.6 4.58 ± 34% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart.exit_to_user_mode_loop
0.00 +6.7 6.66 ± 45% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +7.7 7.74 ± 43% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.call_console_drivers
0.00 +7.7 7.74 ± 43% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock
0.00 +8.0 8.04 ± 75% perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_helper_damage_blit.drm_fb_helper_damage_work.process_one_work.worker_thread
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +8.1 8.15 ± 76% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_blit.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
0.00 +8.6 8.62 ± 68% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +8.7 8.66 ± 41% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.call_console_drivers.console_unlock.vprintk_emit
0.00 +8.9 8.87 ± 67% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +8.9 8.87 ± 67% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +9.5 9.47 ± 39% perf-profile.calltrace.cycles-pp.serial8250_console_write.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit
0.00 +10.7 10.72 ± 35% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write
0.00 +10.7 10.72 ± 35% perf-profile.calltrace.cycles-pp.call_console_drivers.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write
0.00 +16.3 16.26 ± 19% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write
0.00 +20.0 20.03 ± 19% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +20.1 20.14 ± 20% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.00 +20.3 20.27 ± 19% perf-profile.calltrace.cycles-pp.write
0.00 +34.4 34.37 ± 18% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.00 +34.8 34.84 ± 18% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.00 +39.3 39.35 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +39.3 39.35 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
0.00 +39.9 39.90 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.4 41.39 ± 16% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.4 41.39 ± 16% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.00 +41.5 41.53 ± 16% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
99.45 -99.4 0.00 perf-profile.children.cycles-pp.munmap
99.46 -99.3 0.14 ±141% perf-profile.children.cycles-pp.__vm_munmap
99.45 -99.3 0.14 ±141% perf-profile.children.cycles-pp.__x64_sys_munmap
99.47 -99.3 0.20 ±141% perf-profile.children.cycles-pp.unmap_region
99.47 -98.9 0.61 ± 82% perf-profile.children.cycles-pp.__do_munmap
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.zap_pte_range
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.unmap_page_range
99.45 -98.4 1.08 ± 39% perf-profile.children.cycles-pp.zap_pmd_range
99.45 -98.2 1.28 ± 20% perf-profile.children.cycles-pp.unmap_vmas
80.46 -80.5 0.00 perf-profile.children.cycles-pp.folio_mark_accessed
99.61 -62.0 37.61 ± 9% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.61 -62.0 37.61 ± 9% perf-profile.children.cycles-pp.do_syscall_64
50.42 ± 2% -50.4 0.00 perf-profile.children.cycles-pp.workingset_activation
49.67 ± 2% -49.7 0.00 perf-profile.children.cycles-pp.workingset_age_nonresident
26.03 ± 3% -26.0 0.00 perf-profile.children.cycles-pp.pagevec_lru_move_fn
24.79 ± 3% -24.8 0.00 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
24.71 ± 3% -24.7 0.00 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
24.81 ± 3% -24.7 0.11 ±141% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
8.21 ± 4% -8.2 0.00 perf-profile.children.cycles-pp.tlb_flush_mmu
6.58 ± 3% -6.5 0.11 ±141% perf-profile.children.cycles-pp.release_pages
5.42 ± 4% -4.9 0.52 ± 99% perf-profile.children.cycles-pp.page_remove_rmap
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__irq_exit_rcu
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.setlocale
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__alloc_pages
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.do_read_fault
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.finish_fault
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.kfree
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.4 0.45 ± 25% perf-profile.children.cycles-pp.single_release
0.10 ± 4% +0.5 0.58 ± 34% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.tick_nohz_next_event
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.dup_mm
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.dup_mmap
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.filename_lookup
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.menu_select
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.path_lookupat
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +0.6 0.56 ± 19% perf-profile.children.cycles-pp.user_path_at_empty
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.page_counter_uncharge
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.do_fault
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.memcg_slab_post_alloc_hook
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.mod_objcg_state
0.00 +0.6 0.58 ± 34% perf-profile.children.cycles-pp.obj_cgroup_uncharge_pages
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.__do_sys_clone
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.kernel_clone
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.copy_process
0.00 +0.7 0.67 ± 36% perf-profile.children.cycles-pp.sw_perf_event_destroy
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.dput
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.__do_sys_newstat
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.task_state
0.00 +0.7 0.69 ± 13% perf-profile.children.cycles-pp.vfs_statx
0.00 +0.7 0.72 ± 52% perf-profile.children.cycles-pp.__cond_resched
0.00 +0.7 0.72 ± 52% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.13 +0.8 0.89 ± 25% perf-profile.children.cycles-pp.update_process_times
0.13 +0.8 0.89 ± 25% perf-profile.children.cycles-pp.tick_sched_handle
0.24 +0.8 1.01 ± 15% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.9 0.86 ? 65% perf-profile.children.cycles-pp.kmem_cache_alloc
0.14 +0.9 1.01 ? 15% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.9 0.89 ? 25% perf-profile.children.cycles-pp.__close
0.00 +0.9 0.92 ? 33% perf-profile.children.cycles-pp._free_event
0.00 +0.9 0.92 ? 45% perf-profile.children.cycles-pp.begin_new_exec
0.00 +0.9 0.92 ? 45% perf-profile.children.cycles-pp.exec_mmap
0.00 +1.0 1.03 ? 44% perf-profile.children.cycles-pp.io_serial_out
0.00 +1.1 1.07 ? 53% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.00 +1.1 1.07 ? 53% perf-profile.children.cycles-pp.shmem_write_begin
0.00 +1.1 1.08 ? 39% perf-profile.children.cycles-pp.delay_tsc
0.00 +1.1 1.12 ? 19% perf-profile.children.cycles-pp.fork
0.02 ?141% +1.2 1.17 ? 34% perf-profile.children.cycles-pp.load_elf_binary
0.00 +1.2 1.18 ? 49% perf-profile.children.cycles-pp.vsnprintf
0.00 +1.2 1.18 ? 49% perf-profile.children.cycles-pp.seq_printf
0.00 +1.2 1.21 ? 35% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +1.2 1.21 ? 35% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.00 +1.2 1.21 ? 35% perf-profile.children.cycles-pp.copyin
0.05 +1.2 1.28 ? 20% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.sys_imageblit
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.con_scroll
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.vt_console_print
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.lf
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.fbcon_scroll
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.fbcon_redraw
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.fbcon_putcs
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.bit_putcs
0.00 +1.3 1.25 ? 4% perf-profile.children.cycles-pp.drm_fbdev_fb_imageblit
0.02 ?141% +1.3 1.28 ? 20% perf-profile.children.cycles-pp.search_binary_handler
0.02 ?141% +1.3 1.28 ? 20% perf-profile.children.cycles-pp.exec_binprm
0.00 +1.3 1.27 ? 44% perf-profile.children.cycles-pp.proc_reg_read_iter
0.34 +1.3 1.64 ? 21% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.33 ? 2% +1.3 1.64 ? 21% perf-profile.children.cycles-pp.hrtimer_interrupt
0.02 ?141% +1.5 1.55 ? 41% perf-profile.children.cycles-pp.bprm_execve
0.00 +1.6 1.61 ? 20% perf-profile.children.cycles-pp.handle_mm_fault
0.00 +1.6 1.61 ? 20% perf-profile.children.cycles-pp.__handle_mm_fault
0.02 ?141% +1.6 1.64 ? 45% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.02 ?141% +1.7 1.69 ? 49% perf-profile.children.cycles-pp.__x64_sys_execve
0.02 ?141% +1.7 1.69 ? 49% perf-profile.children.cycles-pp.do_execveat_common
0.02 ?141% +1.7 1.69 ? 49% perf-profile.children.cycles-pp.execve
0.00 +1.7 1.68 ? 19% perf-profile.children.cycles-pp.fnmatch
0.00 +1.7 1.68 ? 46% perf-profile.children.cycles-pp.new_sync_read
0.00 +1.7 1.72 ? 80% perf-profile.children.cycles-pp.poll_idle
0.00 +1.8 1.76 ? 98% perf-profile.children.cycles-pp.__libc_start_main
0.00 +1.8 1.76 ? 98% perf-profile.children.cycles-pp.main
0.00 +1.8 1.76 ? 98% perf-profile.children.cycles-pp.run_builtin
0.00 +1.8 1.76 ? 98% perf-profile.children.cycles-pp.cmd_record
0.00 +1.8 1.76 ? 98% perf-profile.children.cycles-pp.__cmd_record
0.03 ?141% +1.8 1.80 ? 42% perf-profile.children.cycles-pp.mmput
0.03 ?141% +1.8 1.80 ? 42% perf-profile.children.cycles-pp.exit_mmap
0.00 +1.9 1.88 ? 26% perf-profile.children.cycles-pp.proc_pid_status
0.00 +1.9 1.88 ? 26% perf-profile.children.cycles-pp.proc_single_show
0.00 +2.0 2.01 ? 15% perf-profile.children.cycles-pp.exc_page_fault
0.00 +2.0 2.01 ? 15% perf-profile.children.cycles-pp.do_user_addr_fault
0.00 +2.1 2.10 ? 29% perf-profile.children.cycles-pp.seq_read
0.00 +2.2 2.23 ? 46% perf-profile.children.cycles-pp.smp_call_function_single
0.00 +2.4 2.43 ? 40% perf-profile.children.cycles-pp.event_function_call
0.36 +2.4 2.81 ? 22% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.00 +2.6 2.58 ? 65% perf-profile.children.cycles-pp.update_sg_lb_stats
0.00 +2.6 2.58 ? 65% perf-profile.children.cycles-pp.find_busiest_group
0.00 +2.6 2.58 ? 65% perf-profile.children.cycles-pp.update_sd_lb_stats
0.76 ? 5% +2.6 3.39 ? 23% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +2.6 2.65 ? 82% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.00 +2.8 2.77 ? 23% perf-profile.children.cycles-pp.asm_exc_page_fault
0.00 +2.9 2.91 ? 61% perf-profile.children.cycles-pp.load_balance
0.00 +3.0 3.00 ? 46% perf-profile.children.cycles-pp.pick_next_task_fair
0.00 +3.0 3.02 ? 55% perf-profile.children.cycles-pp.newidle_balance
0.02 ?141% +3.3 3.29 ? 52% perf-profile.children.cycles-pp._dl_catch_error
0.02 ?141% +3.6 3.66 ? 49% perf-profile.children.cycles-pp.generic_file_write_iter
0.02 ?141% +3.6 3.66 ? 49% perf-profile.children.cycles-pp.__generic_file_write_iter
0.02 ?141% +3.6 3.66 ? 49% perf-profile.children.cycles-pp.generic_perform_write
0.00 +3.6 3.65 ? 19% perf-profile.children.cycles-pp.seq_read_iter
0.00 +3.7 3.67 ? 45% perf-profile.children.cycles-pp.__schedule
0.00 +3.7 3.68 ? 56% perf-profile.children.cycles-pp.do_filp_open
0.00 +3.7 3.68 ? 56% perf-profile.children.cycles-pp.path_openat
0.00 +3.7 3.68 ? 27% perf-profile.children.cycles-pp.perf_release
0.00 +3.7 3.68 ? 27% perf-profile.children.cycles-pp.perf_event_release_kernel
0.00 +3.8 3.78 ? 17% perf-profile.children.cycles-pp.ksys_read
0.00 +3.8 3.78 ? 17% perf-profile.children.cycles-pp.vfs_read
0.00 +3.9 3.92 ? 17% perf-profile.children.cycles-pp.read
0.00 +3.9 3.92 ? 56% perf-profile.children.cycles-pp.__x64_sys_openat
0.00 +3.9 3.92 ? 56% perf-profile.children.cycles-pp.do_sys_openat2
0.00 +4.6 4.58 ? 34% perf-profile.children.cycles-pp.arch_do_signal_or_restart
0.00 +4.6 4.58 ? 34% perf-profile.children.cycles-pp.get_signal
0.00 +5.5 5.46 ? 15% perf-profile.children.cycles-pp.__fput
0.00 +5.7 5.71 ? 16% perf-profile.children.cycles-pp.task_work_run
0.00 +6.1 6.07 ? 19% perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.02 ?141% +6.2 6.22 ? 13% perf-profile.children.cycles-pp.do_group_exit
0.02 ?141% +6.2 6.22 ? 13% perf-profile.children.cycles-pp.do_exit
0.00 +6.3 6.27 ? 15% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.00 +7.4 7.36 ? 41% perf-profile.children.cycles-pp.io_serial_in
0.00 +7.7 7.74 ? 43% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +8.1 8.15 ? 76% perf-profile.children.cycles-pp.process_one_work
0.00 +8.1 8.15 ? 76% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.00 +8.1 8.15 ? 76% perf-profile.children.cycles-pp.drm_fb_helper_damage_blit
0.00 +8.1 8.15 ? 76% perf-profile.children.cycles-pp.memcpy_toio
0.00 +8.3 8.33 ? 40% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +8.6 8.62 ? 68% perf-profile.children.cycles-pp.worker_thread
0.00 +8.7 8.66 ? 41% perf-profile.children.cycles-pp.uart_console_write
0.02 ?141% +8.9 8.87 ? 67% perf-profile.children.cycles-pp.ret_from_fork
0.02 ?141% +8.9 8.87 ? 67% perf-profile.children.cycles-pp.kthread
0.00 +9.5 9.47 ? 39% perf-profile.children.cycles-pp.serial8250_console_write
0.00 +10.7 10.72 ? 35% perf-profile.children.cycles-pp.console_unlock
0.00 +10.7 10.72 ? 35% perf-profile.children.cycles-pp.call_console_drivers
0.00 +16.3 16.26 ? 19% perf-profile.children.cycles-pp.devkmsg_write.cold
0.00 +16.3 16.26 ? 19% perf-profile.children.cycles-pp.devkmsg_emit
0.00 +16.3 16.26 ? 19% perf-profile.children.cycles-pp.vprintk_emit
0.02 ?141% +20.0 20.03 ? 19% perf-profile.children.cycles-pp.new_sync_write
0.02 ?141% +20.1 20.14 ? 20% perf-profile.children.cycles-pp.vfs_write
0.03 ?141% +20.2 20.27 ? 19% perf-profile.children.cycles-pp.ksys_write
0.02 ?141% +20.2 20.27 ? 19% perf-profile.children.cycles-pp.write
0.32 ? 37% +34.7 34.98 ? 18% perf-profile.children.cycles-pp.intel_idle
0.32 ? 37% +34.7 34.98 ? 18% perf-profile.children.cycles-pp.mwait_idle_with_hints
0.33 ? 36% +39.2 39.48 ? 19% perf-profile.children.cycles-pp.cpuidle_enter
0.33 ? 36% +39.2 39.48 ? 19% perf-profile.children.cycles-pp.cpuidle_enter_state
0.33 ? 35% +39.7 40.04 ? 19% perf-profile.children.cycles-pp.cpuidle_idle_call
0.33 ? 35% +41.2 41.53 ? 16% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.33 ? 35% +41.2 41.53 ? 16% perf-profile.children.cycles-pp.cpu_startup_entry
0.33 ? 35% +41.2 41.53 ? 16% perf-profile.children.cycles-pp.do_idle
49.51 ? 2% -49.5 0.00 perf-profile.self.cycles-pp.workingset_age_nonresident
24.71 ? 3% -24.7 0.00 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
6.52 ? 3% -6.5 0.00 perf-profile.self.cycles-pp.release_pages
5.28 ? 6% -4.9 0.36 ? 76% perf-profile.self.cycles-pp.zap_pte_range
0.00 +0.4 0.45 ? 25% perf-profile.self.cycles-pp.proc_pid_status
0.00 +0.4 0.45 ? 25% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.4 0.45 ? 25% perf-profile.self.cycles-pp.page_counter_uncharge
0.00 +0.6 0.56 ? 19% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.00 +1.0 1.03 ? 44% perf-profile.self.cycles-pp.io_serial_out
0.00 +1.1 1.08 ? 39% perf-profile.self.cycles-pp.delay_tsc
0.00 +1.2 1.21 ? 35% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.03 ? 70% +1.2 1.28 ? 20% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +1.3 1.25 ? 4% perf-profile.self.cycles-pp.sys_imageblit
0.00 +1.7 1.68 ? 19% perf-profile.self.cycles-pp.fnmatch
0.00 +1.7 1.72 ? 80% perf-profile.self.cycles-pp.poll_idle
0.00 +2.1 2.12 ? 41% perf-profile.self.cycles-pp.smp_call_function_single
0.00 +2.4 2.44 ? 61% perf-profile.self.cycles-pp.update_sg_lb_stats
0.00 +7.4 7.36 ? 41% perf-profile.self.cycles-pp.io_serial_in
0.00 +8.1 8.15 ? 76% perf-profile.self.cycles-pp.memcpy_toio
0.32 ? 37% +34.7 34.98 ? 18% perf-profile.self.cycles-pp.mwait_idle_with_hints
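
Reading the three profile tables together: before the commit, almost every
sampled cycle sits in mapping teardown (munmap -> zap_pte_range ->
folio_mark_accessed -> workingset_activation), with roughly a quarter of all
cycles spinning in native_queued_spin_lock_slowpath under
folio_lruvec_lock_irqsave; after the commit those paths disappear from the
profile and the samples land in idle (secondary_startup_64_no_verify, ~41%)
and in writes to /dev/kmsg and the serial console from the harness (~20%).
That shift is what fewer, larger page-cache entries would produce: the LRU
and workingset bookkeeping runs once per folio rather than once per base
page. A back-of-envelope model in C (the folio orders below are
illustrative; real readahead mixes orders):

/* Back-of-envelope model for the profile shift above: LRU and
 * workingset bookkeeping (folio_mark_accessed, workingset_activation,
 * the lruvec lock) is paid once per page-cache entry, so caching a
 * file as large folios divides that cost by the folio size. */
#include <stdio.h>

int main(void)
{
	unsigned long long file_bytes = 200ULL << 30;	/* 200G, as in the fio job */
	unsigned int orders[] = { 0, 2, 4, 9 };		/* 4K .. 2M folios */

	for (unsigned int i = 0; i < sizeof(orders) / sizeof(orders[0]); i++) {
		unsigned long long folio_bytes = 4096ULL << orders[i];

		printf("order %u (%llu KiB folios): %llu cache entries\n",
		       orders[i], folio_bytes >> 10,
		       file_bytes / folio_bytes);
	}
	return 0;
}

Compiled and run, this prints the 512x reduction in cache entries (and hence
in per-entry teardown work) that order-9 folios give over order-0 pages for
the 200G test file.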



***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/mmap-pread-seq-mt/vm-scalability/0x500320a

commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
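
mmap-pread-seq-mt is, going by the suite's naming, a multi-threaded
sequential read of one large mapped file; the authoritative source is in the
vm-scalability repository, but a minimal stand-in for the access pattern
looks like the C sketch below (file path, size split and thread count are
all illustrative, not the suite's). Every sequential touch that crosses into
an unpopulated part of the mapping takes a fault, which is the path the
filemap_map_pages/next_uptodate_page numbers further down measure.

/* Rough stand-in for an mmap-pread-seq style workload: map one large
 * file and stream through it, one reader thread per chunk. All
 * parameters here are illustrative. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS 4

static unsigned char *base;
static size_t chunk;

static void *reader(void *arg)
{
	unsigned char *p = base + (long)arg * chunk;
	long pagesz = sysconf(_SC_PAGESIZE);
	volatile unsigned char sink = 0;

	/* Sequential page touches: each fault pulls in a readahead
	 * window; with large folios one fault can map many pages. */
	for (size_t off = 0; off < chunk; off += pagesz)
		sink += p[off];
	(void)sink;
	return NULL;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	pthread_t tid[NTHREADS];
	unsigned char *map;
	off_t size;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }
	size = lseek(fd, 0, SEEK_END);
	map = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) { perror("mmap"); return 1; }

	base = map;
	chunk = size / NTHREADS;
	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, reader, (void *)i);
	for (long i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	munmap(map, size);
	close(fd);
	return 0;
}

Built with cc -pthread and pointed at a large file, this reproduces the
touch-a-page, fault, map-ahead rhythm that large-folio readahead speeds up.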

18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
47.76 ? 21% +120.9% 105.50 ? 18% vm-scalability.free_time
381902 +67.5% 639832 ? 9% vm-scalability.median
72780899 +64.8% 1.199e+08 ? 6% vm-scalability.throughput
461.08 +35.5% 624.74 vm-scalability.time.elapsed_time
461.08 +35.5% 624.74 vm-scalability.time.elapsed_time.max
140148 ? 7% +46.2% 204907 ? 13% vm-scalability.time.involuntary_context_switches
1.039e+10 +149.1% 2.588e+10 ? 5% vm-scalability.time.maximum_resident_set_size
3.901e+08 +35.8% 5.298e+08 ? 11% vm-scalability.time.minor_page_faults
11965 -27.0% 8731 ? 3% vm-scalability.time.percent_of_cpu_this_job_got
45372 -31.5% 31070 ? 13% vm-scalability.time.system_time
9799 +139.3% 23452 ? 20% vm-scalability.time.user_time
2.327e+10 +67.3% 3.895e+10 ? 3% vm-scalability.workload
3.17e+10 +97.0% 6.247e+10 ? 4% cpuidle..time
1.364e+08 ? 2% +52.6% 2.081e+08 ? 18% cpuidle..usage
511.67 +31.7% 673.93 uptime.boot
39763 +75.9% 69937 ? 3% uptime.idle
1650179 ? 6% +108.4% 3439319 ? 13% numa-numastat.node0.local_node
1714274 ? 7% +104.0% 3497283 ? 12% numa-numastat.node0.numa_hit
80709 ? 10% +1568.8% 1346869 ? 50% numa-numastat.node1.other_node
51833 ? 45% +3011.6% 1612852 ? 72% numa-numastat.node2.other_node
35.43 +16.4 51.83 ? 2% mpstat.cpu.all.idle%
1.15 +0.2 1.32 ? 3% mpstat.cpu.all.irq%
0.14 ? 2% -0.0 0.12 ? 12% mpstat.cpu.all.soft%
51.46 -25.4 26.05 ? 10% mpstat.cpu.all.sys%
11.22 +8.7 19.95 ? 21% mpstat.cpu.all.usr%
35.33 +44.3% 51.00 vmstat.cpu.id
11.00 +72.7% 19.00 ? 22% vmstat.cpu.us
39267912 +186.2% 1.124e+08 vmstat.memory.cache
1.463e+08 -61.9% 55690119 ? 3% vmstat.memory.free
121.33 -25.8% 90.00 ? 3% vmstat.procs.r
2004 -25.7% 1489 ? 2% turbostat.Avg_MHz
65.34 -16.4 48.90 ? 2% turbostat.Busy%
4391194 ? 3% -40.7% 2602910 ? 48% turbostat.C1
65661757 +90.5% 1.251e+08 ? 4% turbostat.C1E
33.68 +15.8 49.47 ? 3% turbostat.C1E%
34.63 +47.3% 51.01 ? 2% turbostat.CPU%c1
0.07 ? 7% +135.0% 0.16 ? 7% turbostat.IPC
2.636e+08 ? 7% +64.2% 4.327e+08 ? 41% turbostat.IRQ
206.94 -1.8% 203.27 turbostat.PkgWatt
18398499 +233.3% 61329324 ? 14% meminfo.Active
434070 ? 10% -41.0% 256027 ? 30% meminfo.Active(anon)
17964429 +240.0% 61073297 ? 14% meminfo.Active(file)
39197349 +185.9% 1.121e+08 meminfo.Cached
2720032 ? 3% -12.0% 2394908 meminfo.Committed_AS
18649788 +160.4% 48571839 ? 21% meminfo.Inactive
592314 ? 10% -23.6% 452265 ? 14% meminfo.Inactive(anon)
18057473 +166.5% 48119574 ? 21% meminfo.Inactive(file)
219940 +73.3% 381070 meminfo.KReclaimable
36119017 +201.2% 1.088e+08 meminfo.Mapped
1.812e+08 -9.5% 1.639e+08 meminfo.MemAvailable
1.461e+08 -61.8% 55768647 ? 3% meminfo.MemFree
51577716 +175.2% 1.419e+08 meminfo.Memused
10114995 +171.9% 27506491 ? 2% meminfo.PageTables
219940 +73.3% 381070 meminfo.SReclaimable
312390 +13.2% 353621 ? 2% meminfo.SUnreclaim
734611 ? 13% -41.7% 427960 ? 8% meminfo.Shmem
532331 +38.0% 734691 meminfo.Slab
55437706 +182.4% 1.566e+08 meminfo.max_used_kB
270259 ?112% +1573.3% 4522215 ? 16% numa-vmstat.node0.nr_active_file
946393 ? 35% +663.5% 7225435 ? 16% numa-vmstat.node0.nr_file_pages
10541034 ? 3% -69.1% 3255834 ? 25% numa-vmstat.node0.nr_free_pages
98102 ? 70% +2211.2% 2267326 ? 32% numa-vmstat.node0.nr_inactive_file
381079 ? 92% +1678.2% 6776300 ? 21% numa-vmstat.node0.nr_mapped
643599 ? 3% +153.2% 1629683 ? 22% numa-vmstat.node0.nr_page_table_pages
20984 ? 12% +51.4% 31778 ? 15% numa-vmstat.node0.nr_slab_reclaimable
270259 ?112% +1573.3% 4522216 ? 16% numa-vmstat.node0.nr_zone_active_file
98101 ? 70% +2211.2% 2267302 ? 32% numa-vmstat.node0.nr_zone_inactive_file
1713996 ? 7% +104.0% 3496544 ? 12% numa-vmstat.node0.numa_hit
1649901 ? 6% +108.4% 3438581 ? 13% numa-vmstat.node0.numa_local
708884 ?122% +466.2% 4013857 ? 7% numa-vmstat.node1.nr_active_file
1335766 ?119% +354.3% 6068220 ? 19% numa-vmstat.node1.nr_file_pages
10293220 ? 15% -58.1% 4317437 ? 20% numa-vmstat.node1.nr_free_pages
622687 ?117% +224.6% 2021489 ? 45% numa-vmstat.node1.nr_inactive_file
1331303 ?119% +348.9% 5976082 ? 20% numa-vmstat.node1.nr_mapped
642435 +189.7% 1860950 ? 16% numa-vmstat.node1.nr_page_table_pages
6047 ? 66% +206.0% 18505 ? 14% numa-vmstat.node1.nr_slab_reclaimable
16156 ? 3% +29.6% 20936 ? 11% numa-vmstat.node1.nr_slab_unreclaimable
708884 ?122% +466.2% 4013857 ? 7% numa-vmstat.node1.nr_zone_active_file
622684 ?117% +224.6% 2021474 ? 45% numa-vmstat.node1.nr_zone_inactive_file
80709 ? 10% +1568.8% 1346867 ? 50% numa-vmstat.node1.numa_other
629284 ? 7% +212.3% 1965297 ? 36% numa-vmstat.node2.nr_page_table_pages
51833 ? 45% +3011.6% 1612851 ? 72% numa-vmstat.node2.numa_other
76205 ? 42% -77.4% 17185 ?135% numa-vmstat.node3.nr_active_anon
2340058 ? 55% +93.0% 4516830 ? 11% numa-vmstat.node3.nr_active_file
27102 ? 87% -88.7% 3069 ? 50% numa-vmstat.node3.nr_anon_pages
4825949 ? 61% +73.4% 8366128 ? 6% numa-vmstat.node3.nr_file_pages
6778041 ? 42% -63.2% 2495049 ? 11% numa-vmstat.node3.nr_free_pages
61462 ? 36% -90.9% 5604 ? 6% numa-vmstat.node3.nr_inactive_anon
9125 ? 24% -25.3% 6816 ? 7% numa-vmstat.node3.nr_kernel_stack
614483 ? 6% +124.8% 1381443 ? 22% numa-vmstat.node3.nr_page_table_pages
110543 ? 34% -82.1% 19753 ?119% numa-vmstat.node3.nr_shmem
16802 ? 18% +54.9% 26023 ? 18% numa-vmstat.node3.nr_slab_reclaimable
76205 ? 42% -77.4% 17185 ?135% numa-vmstat.node3.nr_zone_active_anon
2340060 ? 55% +93.0% 4516833 ? 11% numa-vmstat.node3.nr_zone_active_file
61462 ? 36% -90.9% 5606 ? 6% numa-vmstat.node3.nr_zone_inactive_anon
1395481 ? 71% +251.5% 4905621 ? 17% proc-vmstat.compact_free_scanned
33074 ? 70% +419.4% 171779 ? 22% proc-vmstat.compact_isolated
202053 ? 73% +9922.0% 20249857 ? 36% proc-vmstat.compact_migrate_scanned
108289 ? 9% -40.8% 64062 ? 30% proc-vmstat.nr_active_anon
4508307 +239.1% 15287070 ? 14% proc-vmstat.nr_active_file
4529195 -9.4% 4101452 proc-vmstat.nr_dirty_background_threshold
9069466 -9.4% 8212934 proc-vmstat.nr_dirty_threshold
9806867 +185.6% 28009127 proc-vmstat.nr_file_pages
36527728 -61.8% 13964856 ? 3% proc-vmstat.nr_free_pages
147970 ? 10% -23.8% 112719 ? 14% proc-vmstat.nr_inactive_anon
4504742 +166.5% 12005145 ? 21% proc-vmstat.nr_inactive_file
9.67 ? 70% +1544.8% 159.00 ? 73% proc-vmstat.nr_isolated_file
32470 -2.3% 31717 proc-vmstat.nr_kernel_stack
9038764 +200.8% 27190039 proc-vmstat.nr_mapped
2528396 +171.3% 6860572 proc-vmstat.nr_page_table_pages
183340 ? 13% -41.8% 106760 ? 8% proc-vmstat.nr_shmem
55004 +73.2% 95262 proc-vmstat.nr_slab_reclaimable
78097 +13.2% 88396 ? 2% proc-vmstat.nr_slab_unreclaimable
108289 ? 9% -40.8% 64062 ? 30% proc-vmstat.nr_zone_active_anon
4508309 +239.1% 15287080 ? 14% proc-vmstat.nr_zone_active_file
147971 ? 10% -23.8% 112721 ? 14% proc-vmstat.nr_zone_inactive_anon
4504748 +166.5% 12005031 ? 21% proc-vmstat.nr_zone_inactive_file
1399962 ? 70% +205.1% 4271516 ? 15% proc-vmstat.numa_foreign
187918 ? 10% -26.4% 138295 ? 13% proc-vmstat.numa_hint_faults_local
20021857 ? 5% -27.1% 14586662 ? 7% proc-vmstat.numa_hit
19764621 ? 5% -27.5% 14337667 ? 7% proc-vmstat.numa_local
1400006 ? 70% +205.2% 4273115 ? 15% proc-vmstat.numa_miss
1661389 ? 59% +172.8% 4532571 ? 14% proc-vmstat.numa_other
13859568 +153.3% 35111344 proc-vmstat.pgactivate
0.00 +3.9e+107% 393020 ? 3% proc-vmstat.pgalloc_dma32
21429150 +145.6% 52620266 ? 4% proc-vmstat.pgalloc_normal
289208 ? 74% +3004.1% 8977408 ? 51% proc-vmstat.pgdeactivate
5.274e+08 +35.0% 7.12e+08 ? 17% proc-vmstat.pgfault
21431234 +147.7% 53085222 ? 4% proc-vmstat.pgfree
6523 ? 8% -78.2% 1419 ?122% proc-vmstat.pgmajfault
51107 ? 27% +105.8% 105175 ? 15% proc-vmstat.pgmigrate_success
289208 ? 74% +3004.1% 8977408 ? 51% proc-vmstat.pgrefill
114636 +24.7% 142954 ? 2% proc-vmstat.pgreuse
256146 ? 80% +10403.4% 26904076 ? 47% proc-vmstat.pgscan_file
256146 ? 80% +2517.8% 6705410 ? 27% proc-vmstat.pgscan_kswapd
1706 ?141% +4493.7% 78398 ? 94% proc-vmstat.slabs_scanned
5.68 -9.8% 5.12 ? 7% perf-stat.i.MPKI
2.635e+10 +89.2% 4.987e+10 ? 2% perf-stat.i.branch-instructions
29446064 ? 2% -26.8% 21568998 ? 20% perf-stat.i.branch-misses
41.05 +8.4 49.42 ? 3% perf-stat.i.cache-miss-rate%
4.891e+08 +66.5% 8.145e+08 ? 4% perf-stat.i.cache-references
3.18 -31.9% 2.17 ? 14% perf-stat.i.cpi
3.57e+11 -26.4% 2.627e+11 ? 3% perf-stat.i.cpu-cycles
2046 ? 2% -38.8% 1252 ? 7% perf-stat.i.cycles-between-cache-misses
0.02 ? 4% -0.0 0.01 ? 13% perf-stat.i.dTLB-load-miss-rate%
2.143e+10 +87.7% 4.022e+10 ? 2% perf-stat.i.dTLB-loads
0.02 -0.0 0.01 ? 5% perf-stat.i.dTLB-store-miss-rate%
4.017e+09 +71.6% 6.894e+09 ? 3% perf-stat.i.dTLB-stores
67.19 -6.8 60.42 ? 2% perf-stat.i.iTLB-load-miss-rate%
8.244e+10 +87.3% 1.544e+11 ? 2% perf-stat.i.instructions
11508 +64.7% 18958 ? 20% perf-stat.i.instructions-per-iTLB-miss
0.46 +71.5% 0.78 ? 3% perf-stat.i.ipc
1.86 -26.5% 1.37 ? 3% perf-stat.i.metric.GHz
272.17 +86.8% 508.48 ? 2% perf-stat.i.metric.M/sec
89.61 -4.2 85.41 ? 2% perf-stat.i.node-load-miss-rate%
93.39 -1.1 92.31 perf-stat.i.node-store-miss-rate%
5.94 -11.1% 5.27 ? 2% perf-stat.overall.MPKI
0.11 -0.1 0.04 ? 18% perf-stat.overall.branch-miss-rate%
24.75 -8.3 16.49 ? 3% perf-stat.overall.cache-miss-rate%
4.34 -58.3% 1.81 ? 7% perf-stat.overall.cpi
2957 -29.4% 2088 ? 8% perf-stat.overall.cycles-between-cache-misses
0.03 ? 4% -0.0 0.02 ? 18% perf-stat.overall.dTLB-load-miss-rate%
0.03 -0.0 0.02 ? 6% perf-stat.overall.dTLB-store-miss-rate%
79.19 -4.8 74.43 ? 5% perf-stat.overall.iTLB-load-miss-rate%
14468 +118.3% 31586 ? 23% perf-stat.overall.instructions-per-iTLB-miss
0.23 +140.9% 0.55 ? 7% perf-stat.overall.ipc
1753 +44.2% 2527 ? 3% perf-stat.overall.path-length
2.825e+10 +80.0% 5.086e+10 ? 4% perf-stat.ps.branch-instructions
31369620 -29.5% 22107780 ? 22% perf-stat.ps.branch-misses
5.237e+08 +58.7% 8.311e+08 ? 7% perf-stat.ps.cache-references
3.833e+11 -25.9% 2.842e+11 ? 2% perf-stat.ps.cpu-cycles
2.292e+10 +78.8% 4.1e+10 ? 5% perf-stat.ps.dTLB-loads
4.264e+09 +64.4% 7.011e+09 ? 6% perf-stat.ps.dTLB-stores
8.823e+10 +78.4% 1.574e+11 ? 5% perf-stat.ps.instructions
18972299 -23.4% 14538789 ? 27% perf-stat.ps.node-load-misses
1093618 ? 5% -7.4% 1012942 ? 3% perf-stat.ps.node-loads
7576747 ? 5% -20.8% 5997202 ? 26% perf-stat.ps.node-store-misses
4.081e+13 +141.5% 9.854e+13 ? 6% perf-stat.total.instructions
1078861 ?112% +1578.0% 18103185 ? 16% numa-meminfo.node0.Active
1073730 ?112% +1574.7% 17982022 ? 17% numa-meminfo.node0.Active(file)
3776230 ? 34% +665.8% 28918288 ? 16% numa-meminfo.node0.FilePages
510515 ? 29% +1738.2% 9384093 ? 29% numa-meminfo.node0.Inactive
390425 ? 70% +2253.6% 9188935 ? 31% numa-meminfo.node0.Inactive(file)
83922 ? 12% +51.5% 127156 ? 15% numa-meminfo.node0.KReclaimable
1513602 ? 92% +1691.7% 27118547 ? 21% numa-meminfo.node0.Mapped
42176849 ? 3% -69.2% 12973556 ? 24% numa-meminfo.node0.MemFree
6979665 ? 18% +418.4% 36182958 ? 8% numa-meminfo.node0.MemUsed
2570494 ? 3% +154.8% 6550818 ? 23% numa-meminfo.node0.PageTables
83922 ? 12% +51.5% 127156 ? 15% numa-meminfo.node0.SReclaimable
181192 ? 10% +28.8% 233452 ? 12% numa-meminfo.node0.Slab
2821085 ?121% +465.7% 15958433 ? 8% numa-meminfo.node1.Active
2812387 ?122% +467.1% 15948621 ? 8% numa-meminfo.node1.Active(file)
46712 ? 79% +167.4% 124913 ? 51% numa-meminfo.node1.AnonPages.max
5323354 ?119% +355.5% 24250501 ? 19% numa-meminfo.node1.FilePages
2511909 ?116% +228.8% 8260033 ? 43% numa-meminfo.node1.Inactive
2494074 ?117% +227.6% 8170385 ? 45% numa-meminfo.node1.Inactive(file)
24145 ? 66% +206.4% 73980 ? 14% numa-meminfo.node1.KReclaimable
5303377 ?119% +350.4% 23887574 ? 20% numa-meminfo.node1.Mapped
41195841 ? 15% -58.1% 17270949 ? 20% numa-meminfo.node1.MemFree
8338805 ? 76% +286.9% 32263698 ? 10% numa-meminfo.node1.MemUsed
2565988 +190.9% 7463727 ? 16% numa-meminfo.node1.PageTables
24145 ? 66% +206.4% 73980 ? 14% numa-meminfo.node1.SReclaimable
64627 ? 3% +29.6% 83746 ? 11% numa-meminfo.node1.SUnreclaim
88774 ? 20% +77.7% 157727 ? 11% numa-meminfo.node1.Slab
2513412 ? 7% +212.9% 7864995 ? 36% numa-meminfo.node2.PageTables
115780 ? 29% +36.6% 158133 ? 12% numa-meminfo.node2.Slab
9580292 ? 54% +88.0% 18009449 ? 11% numa-meminfo.node3.Active
305023 ? 43% -77.4% 68934 ?135% numa-meminfo.node3.Active(anon)
9275268 ? 55% +93.4% 17940514 ? 10% numa-meminfo.node3.Active(file)
108407 ? 87% -88.6% 12400 ? 51% numa-meminfo.node3.AnonPages
160985 ? 61% -73.4% 42806 ? 76% numa-meminfo.node3.AnonPages.max
19239019 ? 61% +74.0% 33471217 ? 6% numa-meminfo.node3.FilePages
246214 ? 36% -90.8% 22559 ? 6% numa-meminfo.node3.Inactive(anon)
67061 ? 18% +55.3% 104124 ? 18% numa-meminfo.node3.KReclaimable
9127 ? 24% -25.3% 6820 ? 8% numa-meminfo.node3.KernelStack
27179894 ? 42% -63.4% 9952841 ? 10% numa-meminfo.node3.MemFree
22309393 ? 51% +77.2% 39536446 ? 2% numa-meminfo.node3.MemUsed
2454785 ? 6% +125.9% 5545631 ? 22% numa-meminfo.node3.PageTables
67061 ? 18% +55.3% 104124 ? 18% numa-meminfo.node3.SReclaimable
442746 ? 34% -82.1% 79224 ?119% numa-meminfo.node3.Shmem
146622 ? 3% +26.3% 185228 ? 9% numa-meminfo.node3.Slab
0.69 ? 5% -28.1% 0.49 ? 5% sched_debug.cfs_rq:/.h_nr_running.avg
1.64 ? 6% -10.5% 1.47 sched_debug.cfs_rq:/.h_nr_running.max
0.20 ? 2% +15.6% 0.24 ? 8% sched_debug.cfs_rq:/.h_nr_running.stddev
6824 ? 8% +196.8% 20253 ? 6% sched_debug.cfs_rq:/.load.avg
229906 ? 3% +129.8% 528372 ? 9% sched_debug.cfs_rq:/.load.max
21576 ? 14% +260.6% 77803 ? 5% sched_debug.cfs_rq:/.load.stddev
9.61 ? 13% +125.1% 21.63 ? 9% sched_debug.cfs_rq:/.load_avg.avg
338.23 ? 10% +90.2% 643.24 ? 7% sched_debug.cfs_rq:/.load_avg.max
1.93 ? 20% -50.2% 0.96 ? 51% sched_debug.cfs_rq:/.load_avg.min
35.19 ? 16% +148.2% 87.36 ? 4% sched_debug.cfs_rq:/.load_avg.stddev
28561269 ? 4% +23.7% 35334125 ? 5% sched_debug.cfs_rq:/.min_vruntime.avg
29300278 ? 4% +24.1% 36374870 ? 5% sched_debug.cfs_rq:/.min_vruntime.max
25701358 ? 3% +10.0% 28264060 ? 8% sched_debug.cfs_rq:/.min_vruntime.min
520966 ? 10% +117.7% 1134026 ? 46% sched_debug.cfs_rq:/.min_vruntime.stddev
0.68 ? 5% -28.2% 0.48 ? 6% sched_debug.cfs_rq:/.nr_running.avg
0.18 ? 2% +19.6% 0.21 ? 5% sched_debug.cfs_rq:/.nr_running.stddev
137.38 ? 5% -30.0% 96.19 ? 4% sched_debug.cfs_rq:/.removed.load_avg.max
54.41 ? 14% -34.3% 35.73 ? 37% sched_debug.cfs_rq:/.removed.runnable_avg.max
54.41 ? 14% -34.3% 35.73 ? 37% sched_debug.cfs_rq:/.removed.util_avg.max
617.96 ? 5% -24.3% 467.92 ? 2% sched_debug.cfs_rq:/.runnable_avg.avg
1519 ? 3% -8.0% 1397 sched_debug.cfs_rq:/.runnable_avg.max
208.55 ? 11% -33.1% 139.44 ? 22% sched_debug.cfs_rq:/.runnable_avg.min
140.06 ? 3% +32.9% 186.10 ? 7% sched_debug.cfs_rq:/.runnable_avg.stddev
590586 ? 38% +96.2% 1158625 ? 14% sched_debug.cfs_rq:/.spread0.max
-3015844 +130.5% -6950269 sched_debug.cfs_rq:/.spread0.min
521714 ? 10% +117.4% 1134421 ? 46% sched_debug.cfs_rq:/.spread0.stddev
609.27 ? 5% -24.4% 460.70 ? 2% sched_debug.cfs_rq:/.util_avg.avg
1444 ? 4% -7.9% 1330 sched_debug.cfs_rq:/.util_avg.max
130.58 ? 3% +35.1% 176.43 ? 7% sched_debug.cfs_rq:/.util_avg.stddev
572.34 ? 5% -31.2% 393.53 ? 6% sched_debug.cfs_rq:/.util_est_enqueued.avg
1316 ? 9% -18.0% 1078 sched_debug.cfs_rq:/.util_est_enqueued.max
310775 ? 11% +89.7% 589408 ? 15% sched_debug.cpu.avg_idle.avg
239814 ? 5% +41.3% 338862 ? 4% sched_debug.cpu.clock.avg
239856 ? 5% +41.3% 338900 ? 4% sched_debug.cpu.clock.max
239768 ? 5% +41.3% 338817 ? 4% sched_debug.cpu.clock.min
237458 ? 5% +40.9% 334653 ? 4% sched_debug.cpu.clock_task.avg
237726 ? 5% +41.1% 335330 ? 4% sched_debug.cpu.clock_task.max
227513 ? 5% +42.8% 324812 ? 4% sched_debug.cpu.clock_task.min
5295 ? 6% -31.7% 3615 ? 6% sched_debug.cpu.curr->pid.avg
13058 ? 5% +24.9% 16312 ? 2% sched_debug.cpu.curr->pid.max
1468 +14.2% 1678 ? 4% sched_debug.cpu.curr->pid.stddev
0.68 ? 6% -32.1% 0.46 ? 6% sched_debug.cpu.nr_running.avg
2.408e+09 -9.4% 2.182e+09 ? 6% sched_debug.cpu.nr_uninterruptible.avg
239765 ? 5% +41.3% 338814 ? 4% sched_debug.cpu_clk
238754 ? 5% +41.5% 337801 ? 4% sched_debug.ktime
240246 ? 5% +41.2% 339291 ? 4% sched_debug.sched_clk
54.77 -39.6 15.17 ? 4% perf-profile.calltrace.cycles-pp.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
43.84 -38.2 5.61 ? 3% perf-profile.calltrace.cycles-pp.next_uptodate_page.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
69.95 -37.8 32.18 ? 3% perf-profile.calltrace.cycles-pp.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
75.67 -35.5 40.16 ? 7% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
76.91 -32.6 44.30 ? 7% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
77.31 -32.5 44.81 ? 7% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
78.89 -25.4 53.45 ? 8% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
78.91 -25.2 53.70 ? 8% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
79.15 -25.1 54.07 ? 8% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
88.17 -15.5 72.70 ? 3% perf-profile.calltrace.cycles-pp.do_access
1.25 ? 3% -0.9 0.37 ? 70% perf-profile.calltrace.cycles-pp.folio_wake_bit.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
1.43 ? 2% -0.2 1.20 perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault
1.48 ? 2% -0.1 1.36 ? 2% perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
4.18 +1.1 5.30 ? 2% perf-profile.calltrace.cycles-pp.xas_load.xas_find.filemap_map_pages.xfs_filemap_map_pages.do_fault
4.19 +1.1 5.32 ? 2% perf-profile.calltrace.cycles-pp.xas_find.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
6.57 +1.1 7.70 ? 4% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
4.03 +1.1 5.16 ? 2% perf-profile.calltrace.cycles-pp.xas_start.xas_load.xas_find.filemap_map_pages.xfs_filemap_map_pages
6.30 +1.2 7.52 ? 4% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_filemap_map_pages.do_fault.__handle_mm_fault
0.17 ?141% +2.4 2.62 ? 8% perf-profile.calltrace.cycles-pp.up_read.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
0.74 ? 21% +3.8 4.52 ? 18% perf-profile.calltrace.cycles-pp.down_read_trylock.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
12.32 +22.6 34.87 ? 11% perf-profile.calltrace.cycles-pp.do_rw_once
54.79 -39.6 15.19 ? 4% perf-profile.children.cycles-pp.filemap_map_pages
44.39 -38.8 5.64 ? 3% perf-profile.children.cycles-pp.next_uptodate_page
69.96 -37.8 32.18 ? 3% perf-profile.children.cycles-pp.xfs_filemap_map_pages
75.69 -35.5 40.17 ? 7% perf-profile.children.cycles-pp.do_fault
76.93 -32.6 44.32 ? 7% perf-profile.children.cycles-pp.__handle_mm_fault
77.35 -32.4 44.94 ? 7% perf-profile.children.cycles-pp.handle_mm_fault
78.91 -25.5 53.46 ? 8% perf-profile.children.cycles-pp.do_user_addr_fault
78.93 -25.2 53.71 ? 8% perf-profile.children.cycles-pp.exc_page_fault
79.18 -25.1 54.10 ? 8% perf-profile.children.cycles-pp.asm_exc_page_fault
90.13 -16.3 73.82 ? 4% perf-profile.children.cycles-pp.do_access
1.75 -1.6 0.11 ? 25% perf-profile.children.cycles-pp.folio_unlock
1.96 ? 3% -1.1 0.88 ? 15% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.39 ? 4% -0.7 0.74 ? 34% perf-profile.children.cycles-pp.folio_wake_bit
0.69 ? 2% -0.6 0.11 ? 11% perf-profile.children.cycles-pp.PageHeadHuge
0.48 -0.4 0.03 ? 70% perf-profile.children.cycles-pp.filemap_add_folio
0.71 ? 9% -0.4 0.28 ? 56% perf-profile.children.cycles-pp.intel_idle
0.86 ? 2% -0.4 0.45 ? 31% perf-profile.children.cycles-pp.__wake_up_common
0.80 ? 3% -0.4 0.41 ? 31% perf-profile.children.cycles-pp.wake_page_function
0.77 ? 3% -0.4 0.39 ? 30% perf-profile.children.cycles-pp.try_to_wake_up
0.93 ? 9% -0.3 0.62 ? 13% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.54 ? 6% -0.2 0.31 ? 37% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.46 ? 9% -0.2 0.26 ? 43% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.35 ? 3% -0.2 0.17 ? 29% perf-profile.children.cycles-pp.ttwu_do_activate
1.49 ? 2% -0.2 1.32 perf-profile.children.cycles-pp.page_add_file_rmap
0.61 ? 8% -0.2 0.43 ? 15% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.60 ? 9% -0.2 0.43 ? 15% perf-profile.children.cycles-pp.hrtimer_interrupt
0.58 ? 3% -0.2 0.41 ? 32% perf-profile.children.cycles-pp.schedule_idle
0.30 ? 2% -0.2 0.14 ? 31% perf-profile.children.cycles-pp.enqueue_task_fair
0.27 ? 4% -0.1 0.12 ? 36% perf-profile.children.cycles-pp.pick_next_task_fair
0.27 ? 3% -0.1 0.14 ? 30% perf-profile.children.cycles-pp.dequeue_task_fair
0.29 ? 15% -0.1 0.17 ? 14% perf-profile.children.cycles-pp.irq_exit_rcu
0.22 ? 2% -0.1 0.11 ? 30% perf-profile.children.cycles-pp.enqueue_entity
0.39 ? 11% -0.1 0.28 ? 16% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.22 ? 6% -0.1 0.11 ? 17% perf-profile.children.cycles-pp.update_load_avg
0.23 ? 3% -0.1 0.12 ? 29% perf-profile.children.cycles-pp.dequeue_entity
0.21 ? 2% -0.1 0.10 ? 19% perf-profile.children.cycles-pp.read_pages
0.31 ? 17% -0.1 0.21 ? 17% perf-profile.children.cycles-pp.update_process_times
0.20 ? 2% -0.1 0.10 ? 19% perf-profile.children.cycles-pp.iomap_readahead
0.24 ? 17% -0.1 0.14 ? 15% perf-profile.children.cycles-pp.__softirqentry_text_start
0.32 ? 18% -0.1 0.23 ? 16% perf-profile.children.cycles-pp.tick_sched_timer
0.31 ? 17% -0.1 0.21 ? 17% perf-profile.children.cycles-pp.tick_sched_handle
0.26 ? 6% -0.1 0.17 ? 38% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.14 ? 5% -0.1 0.06 ? 13% perf-profile.children.cycles-pp.asm_sysvec_call_function
0.12 ? 8% -0.1 0.04 ? 73% perf-profile.children.cycles-pp.newidle_balance
0.21 ? 16% -0.1 0.14 ? 17% perf-profile.children.cycles-pp.scheduler_tick
0.16 ? 16% -0.1 0.09 ? 10% perf-profile.children.cycles-pp.rebalance_domains
0.11 ? 11% -0.1 0.04 ? 70% perf-profile.children.cycles-pp.irqtime_account_irq
0.10 ? 4% -0.1 0.04 ? 70% perf-profile.children.cycles-pp.select_task_rq_fair
0.10 ? 8% -0.1 0.04 ? 71% perf-profile.children.cycles-pp.set_next_entity
0.13 ? 7% -0.1 0.07 ? 23% perf-profile.children.cycles-pp.update_curr
0.14 ? 13% -0.1 0.08 ? 12% perf-profile.children.cycles-pp._raw_spin_trylock
0.16 ? 17% -0.1 0.10 ? 19% perf-profile.children.cycles-pp.task_tick_fair
1.54 ? 2% -0.1 1.48 perf-profile.children.cycles-pp.do_set_pte
0.12 -0.1 0.07 ? 70% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.10 ? 4% -0.0 0.05 ? 70% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.14 ? 5% -0.0 0.09 ? 33% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.11 ? 11% -0.0 0.07 ? 11% perf-profile.children.cycles-pp.folio_memcg_lock
0.09 ? 9% -0.0 0.05 ? 71% perf-profile.children.cycles-pp.update_rq_clock
0.18 -0.0 0.16 ? 3% perf-profile.children.cycles-pp.sync_regs
0.21 -0.0 0.19 ? 2% perf-profile.children.cycles-pp.error_entry
0.07 +0.0 0.08 perf-profile.children.cycles-pp.___perf_sw_event
0.07 +0.0 0.11 perf-profile.children.cycles-pp.__perf_sw_event
0.02 ?141% +0.0 0.06 ? 7% perf-profile.children.cycles-pp.__memcg_kmem_charge_page
0.00 +0.1 0.05 perf-profile.children.cycles-pp.unlock_page_memcg
0.00 +0.1 0.07 ? 23% perf-profile.children.cycles-pp.memset_erms
0.00 +0.1 0.07 ? 11% perf-profile.children.cycles-pp.ondemand_readahead
0.00 +0.1 0.07 perf-profile.children.cycles-pp.pmd_install
0.12 ? 4% +0.1 0.21 ? 27% perf-profile.children.cycles-pp.finish_fault
0.35 +0.2 0.50 ? 6% perf-profile.children.cycles-pp._raw_spin_lock
0.24 ? 6% +0.3 0.51 ? 20% perf-profile.children.cycles-pp.native_irq_return_iret
0.38 ? 9% +0.5 0.87 ? 36% perf-profile.children.cycles-pp.finish_task_switch
0.17 ? 29% +0.5 0.71 ? 62% perf-profile.children.cycles-pp.find_vma
8.29 ? 4% +1.1 9.36 ? 9% perf-profile.children.cycles-pp.down_read
5.10 +1.1 6.22 ? 5% perf-profile.children.cycles-pp.xas_start
6.58 +1.1 7.71 ? 4% perf-profile.children.cycles-pp.xfs_iunlock
5.20 +1.2 6.36 ? 5% perf-profile.children.cycles-pp.xas_load
4.20 +1.2 5.39 ? 2% perf-profile.children.cycles-pp.xas_find
6.94 +3.6 10.58 ? 6% perf-profile.children.cycles-pp.up_read
0.74 ? 21% +3.8 4.53 ? 18% perf-profile.children.cycles-pp.down_read_trylock
10.62 +23.3 33.93 ? 13% perf-profile.children.cycles-pp.do_rw_once
43.93 -38.4 5.53 ? 3% perf-profile.self.cycles-pp.next_uptodate_page
1.73 -1.6 0.10 ? 29% perf-profile.self.cycles-pp.folio_unlock
0.68 -0.6 0.10 ? 14% perf-profile.self.cycles-pp.PageHeadHuge
0.70 ? 9% -0.4 0.28 ? 56% perf-profile.self.cycles-pp.intel_idle
1.32 -0.2 1.12 ? 2% perf-profile.self.cycles-pp.page_add_file_rmap
0.46 ? 9% -0.2 0.25 ? 46% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.13 ? 9% -0.1 0.05 ? 8% perf-profile.self.cycles-pp.__count_memcg_events
0.17 ? 4% -0.1 0.09 ? 33% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.09 ? 18% -0.1 0.03 ? 70% perf-profile.self.cycles-pp.irqtime_account_irq
0.14 ? 13% -0.1 0.08 ? 12% perf-profile.self.cycles-pp._raw_spin_trylock
0.11 ? 11% -0.0 0.07 ? 7% perf-profile.self.cycles-pp.folio_memcg_lock
0.11 ? 4% -0.0 0.07 ? 70% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.08 ? 5% -0.0 0.04 ? 71% perf-profile.self.cycles-pp.enqueue_entity
0.07 ? 7% -0.0 0.03 ? 70% perf-profile.self.cycles-pp.update_rq_clock
0.18 -0.0 0.15 ? 3% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.07 ? 23% perf-profile.self.cycles-pp.memset_erms
0.00 +0.1 0.09 ? 18% perf-profile.self.cycles-pp.xas_find
0.03 ? 70% +0.1 0.16 ? 7% perf-profile.self.cycles-pp.do_set_pte
0.33 +0.2 0.49 ? 5% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.2 0.25 ? 22% perf-profile.self.cycles-pp.exc_page_fault
0.24 ? 6% +0.3 0.51 ? 20% perf-profile.self.cycles-pp.native_irq_return_iret
0.28 ? 7% +0.3 0.57 ? 35% perf-profile.self.cycles-pp.__schedule
0.25 ? 10% +0.5 0.75 ? 37% perf-profile.self.cycles-pp.finish_task_switch
0.87 ? 3% +1.0 1.91 ? 14% perf-profile.self.cycles-pp.filemap_map_pages
8.19 ? 4% +1.1 9.27 ? 9% perf-profile.self.cycles-pp.down_read
2.01 ? 2% +1.1 3.13 ? 23% perf-profile.self.cycles-pp.filemap_fault
5.05 +1.1 6.18 ? 5% perf-profile.self.cycles-pp.xas_start
1.21 ? 22% +2.9 4.08 ? 11% perf-profile.self.cycles-pp.__handle_mm_fault
6.87 +3.6 10.51 ? 6% perf-profile.self.cycles-pp.up_read
0.73 ? 21% +3.8 4.50 ? 18% perf-profile.self.cycles-pp.down_read_trylock
9.43 +4.7 14.14 ? 6% perf-profile.self.cycles-pp.do_access
7.58 +20.1 27.67 ? 16% perf-profile.self.cycles-pp.do_rw_once
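
The vm-scalability.time.minor_page_faults row near the top of this block can
be checked from userspace with plain getrusage(2): ru_minflt is essentially
the same counter that line aggregates. A small probe like the one below
(illustrative, not part of the suite) makes it easy to compare how many
faults one sequential pass over a mapped file costs on either side of the
commit, which is the userspace-visible face of the
filemap_map_pages/next_uptodate_page change above.

/* Count minor faults taken by one sequential pass over a mapped file,
 * via getrusage(2)'s ru_minflt. The path is illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

static long minflt(void)
{
	struct rusage ru;

	getrusage(RUSAGE_SELF, &ru);
	return ru.ru_minflt;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	volatile unsigned char sink = 0;
	unsigned char *p;
	long before;
	off_t size;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }
	size = lseek(fd, 0, SEEK_END);
	p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	before = minflt();
	for (off_t off = 0; off < size; off += 4096)
		sink += p[off];	/* one touch per base page */
	(void)sink;

	printf("%ld minor faults over %lld bytes\n",
	       minflt() - before, (long long)size);
	munmap(p, size);
	close(fd);
	return 0;
}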



***************************************************************************************************
lkp-ivb-2ep1: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-ivb-2ep1/migrate/vm-scalability/0x42e

commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
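
The migrate case presumably exercises inter-node page migration (the box
here has two NUMA nodes); the canonical userspace entry point for that is
the migrate_pages(2) syscall. A minimal, self-contained illustration of the
interface follows; the node numbers, mapping size and iteration count are
made up, and the suite's real driver will differ.

/* Illustration of migrate_pages(2): allocate and touch anonymous
 * memory, then bounce this process's pages between nodes 0 and 1.
 * Size, node numbers and iteration count are illustrative. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define NR_BYTES	(256UL << 20)	/* 256 MiB */
#define MAXNODE		64

static long do_migrate(unsigned long from, unsigned long to)
{
	unsigned long old_nodes = 1UL << from, new_nodes = 1UL << to;

	/* Move pages of pid 0 (self) from old_nodes onto new_nodes. */
	return syscall(SYS_migrate_pages, 0, MAXNODE,
		       &old_nodes, &new_nodes);
}

int main(void)
{
	char *p = mmap(NULL, NR_BYTES, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	memset(p, 1, NR_BYTES);		/* fault the pages in */

	for (int i = 0; i < 10; i++) {
		if (do_migrate(0, 1) < 0 || do_migrate(1, 0) < 0) {
			perror("migrate_pages");
			return 1;
		}
	}
	munmap(p, NR_BYTES);
	return 0;
}

migrate_pages() returns the number of pages it could not move (0 on full
success) or -1 on error, so any non-negative return here means the bounce
worked at least partially.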

18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
1279697 ? 2% +24.1% 1588230 vm-scalability.median
1279697 ? 2% +24.1% 1588230 vm-scalability.throughput
627.91 -1.0% 621.31 vm-scalability.time.elapsed_time
627.91 -1.0% 621.31 vm-scalability.time.elapsed_time.max
6.24 ? 5% -1.0 5.22 ? 5% turbostat.Busy%
33945569 ? 20% +44.6% 49088231 ? 17% turbostat.C6
3642839 -96.6% 122845 numa-numastat.node0.interleave_hit
12102273 ? 14% -27.0% 8838437 ? 13% numa-numastat.node0.numa_hit
3665673 -96.7% 122772 numa-numastat.node1.interleave_hit
18826459 ? 8% -24.4% 14235613 ? 7% numa-numastat.node1.numa_hit
40872 ? 6% +67564.4% 27656338 meminfo.Active
217.20 +1.3e+07% 27613029 meminfo.Active(file)
28240893 -97.1% 806140 meminfo.Inactive
27927778 -98.2% 491393 ? 2% meminfo.Inactive(file)
627080 -15.9% 527421 meminfo.Mapped
56525 ? 12% -12.9% 49208 ? 4% sched_debug.cfs_rq:/.min_vruntime.max
64.98 ? 31% -61.2% 25.18 ? 83% sched_debug.cfs_rq:/.removed.runnable_avg.max
12.77 ? 25% -64.1% 4.58 ? 86% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
55.55 ? 24% -54.7% 25.18 ? 83% sched_debug.cfs_rq:/.removed.util_avg.max
11.42 ? 19% -59.9% 4.58 ? 86% sched_debug.cfs_rq:/.removed.util_avg.stddev
24966 ? 36% -39.8% 15027 ? 29% sched_debug.cfs_rq:/.spread0.max
6277 ? 55% +2.2e+05% 13807856 numa-meminfo.node0.Active
128.00 +1.1e+07% 13801812 numa-meminfo.node0.Active(file)
14256136 -96.4% 508062 ? 6% numa-meminfo.node0.Inactive
13970347 -98.2% 245790 ? 2% numa-meminfo.node0.Inactive(file)
329516 -15.4% 278703 numa-meminfo.node0.Mapped
34559 ? 14% +39944.8% 13839251 numa-meminfo.node1.Active
89.00 ? 2% +1.6e+07% 13801956 numa-meminfo.node1.Active(file)
13998562 -97.9% 298543 ? 10% numa-meminfo.node1.Inactive
13971170 -98.2% 245871 numa-meminfo.node1.Inactive(file)
297635 -16.3% 249066 ? 2% numa-meminfo.node1.Mapped
53.60 +1.3e+07% 6902228 proc-vmstat.nr_active_file
6981859 -98.2% 122915 proc-vmstat.nr_inactive_file
156878 -15.7% 132188 proc-vmstat.nr_mapped
26257 +7.1% 28123 proc-vmstat.nr_slab_unreclaimable
53.60 +1.3e+07% 6902228 proc-vmstat.nr_zone_active_file
6981859 -98.2% 122915 proc-vmstat.nr_zone_inactive_file
30931182 -25.4% 23076379 ? 3% proc-vmstat.numa_hit
7308513 -96.6% 245617 proc-vmstat.numa_interleave
24103602 ? 2% -18.4% 19671160 ? 3% proc-vmstat.numa_local
6826723 ? 9% -50.1% 3404877 ? 12% proc-vmstat.numa_other
66066 +10919.7% 7280270 proc-vmstat.pgactivate
32.00 +1.1e+07% 3451480 numa-vmstat.node0.nr_active_file
3491772 -98.2% 61446 numa-vmstat.node0.nr_inactive_file
82425 -15.2% 69909 numa-vmstat.node0.nr_mapped
32.00 +1.1e+07% 3451480 numa-vmstat.node0.nr_zone_active_file
3491772 -98.2% 61446 numa-vmstat.node0.nr_zone_inactive_file
12101625 ? 14% -27.0% 8837704 ? 13% numa-vmstat.node0.numa_hit
3642839 -96.6% 122845 numa-vmstat.node0.numa_interleave
21.60 ? 2% +1.6e+07% 3451528 numa-vmstat.node1.nr_active_file
3491978 -98.2% 61461 numa-vmstat.node1.nr_inactive_file
74233 -16.1% 62260 numa-vmstat.node1.nr_mapped
21.60 ? 2% +1.6e+07% 3451528 numa-vmstat.node1.nr_zone_active_file
3491978 -98.2% 61461 numa-vmstat.node1.nr_zone_inactive_file
18825701 ? 8% -24.4% 14234723 ? 7% numa-vmstat.node1.numa_hit
3665673 -96.7% 122772 numa-vmstat.node1.numa_interleave
40.20 ? 7% +28.4% 51.60 ? 3% perf-stat.i.MPKI
4.844e+08 -4.1% 4.644e+08 perf-stat.i.branch-instructions
9.15 ? 5% +1.5 10.66 perf-stat.i.branch-miss-rate%
19.69 ? 5% +2.0 21.69 ? 5% perf-stat.i.cache-miss-rate%
8672757 ? 11% +28.2% 11114804 ? 11% perf-stat.i.cache-misses
49731408 ? 4% +12.4% 55875231 ? 5% perf-stat.i.cache-references
765.32 ? 9% -22.8% 590.45 ? 4% perf-stat.i.cycles-between-cache-misses
1.15 ? 5% +0.2 1.30 ? 2% perf-stat.i.dTLB-load-miss-rate%
0.18 ? 2% +0.0 0.20 perf-stat.i.dTLB-store-miss-rate%
811798 ? 7% +18.5% 962255 ? 8% perf-stat.i.dTLB-store-misses
90.13 +2.5 92.64 perf-stat.i.iTLB-load-miss-rate%
162262 ? 8% -21.6% 127248 ? 16% perf-stat.i.iTLB-loads
2.198e+09 -4.5% 2.099e+09 perf-stat.i.instructions
374.33 ? 6% +21.9% 456.27 ? 10% perf-stat.i.metric.K/sec
898067 ? 4% +17.5% 1055190 ? 3% perf-stat.i.node-stores
22.64 ? 6% +17.5% 26.62 ? 5% perf-stat.overall.MPKI
17.39 ? 6% +2.4 19.83 ? 6% perf-stat.overall.cache-miss-rate%
731.03 ? 8% -22.9% 563.68 ? 5% perf-stat.overall.cycles-between-cache-misses
0.17 ? 3% +0.0 0.19 ? 2% perf-stat.overall.dTLB-store-miss-rate%
90.01 +1.8 91.84 perf-stat.overall.iTLB-load-miss-rate%
34.37 -3.0 31.41 perf-stat.overall.node-store-miss-rate%
42785 -5.6% 40397 perf-stat.overall.path-length
4.838e+08 -4.1% 4.638e+08 perf-stat.ps.branch-instructions
8662103 ? 11% +28.2% 11100817 ? 11% perf-stat.ps.cache-misses
49649551 ? 4% +12.3% 55779056 ? 5% perf-stat.ps.cache-references
810507 ? 7% +18.5% 960597 ? 8% perf-stat.ps.dTLB-store-misses
162062 ? 8% -21.6% 127036 ? 16% perf-stat.ps.iTLB-loads
2.195e+09 -4.5% 2.096e+09 perf-stat.ps.instructions
898053 ? 4% +17.5% 1055430 ? 3% perf-stat.ps.node-stores
1.389e+12 -5.6% 1.311e+12 perf-stat.total.instructions
1.48 ? 8% -0.4 1.04 ? 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.69 ? 19% -0.4 1.29 ? 8% perf-profile.calltrace.cycles-pp.fork
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work.worker_thread
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_helper_dirtyfb
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail.drm_atomic_helper_commit
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail.commit_tail
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.drm_atomic_helper_dirtyfb.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.calltrace.cycles-pp.drm_fb_memcpy_toio.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail
1.02 ? 13% -0.4 0.64 ? 11% perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_memcpy_toio.mgag200_handle_damage.mgag200_simple_display_pipe_update.drm_atomic_helper_commit_planes
1.09 ? 11% -0.4 0.74 ? 5% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
0.59 ? 5% +0.1 0.71 ? 9% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.57 ? 6% +0.1 0.69 ? 8% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.48 ? 8% -0.4 1.04 ? 4% perf-profile.children.cycles-pp.process_one_work
1.72 ? 19% -0.4 1.30 ? 8% perf-profile.children.cycles-pp.fork
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.commit_tail
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.mgag200_simple_display_pipe_update
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.mgag200_handle_damage
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.memcpy_toio
1.08 ? 12% -0.4 0.69 ? 9% perf-profile.children.cycles-pp.drm_fb_memcpy_toio
1.09 ? 11% -0.4 0.74 ? 5% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.45 ? 19% -0.1 0.31 ? 19% perf-profile.children.cycles-pp.finish_task_switch
0.32 ? 19% -0.1 0.20 ? 15% perf-profile.children.cycles-pp.perf_iterate_sb
0.50 ? 12% -0.1 0.40 ? 8% perf-profile.children.cycles-pp.native_irq_return_iret
0.40 ? 14% -0.1 0.31 ? 11% perf-profile.children.cycles-pp.get_page_from_freelist
0.35 ? 12% -0.1 0.27 ? 16% perf-profile.children.cycles-pp.perf_pmu_sched_task
0.30 ? 15% -0.1 0.22 ? 13% perf-profile.children.cycles-pp.__perf_pmu_sched_task
0.16 ? 8% -0.1 0.11 ? 18% perf-profile.children.cycles-pp.__close
0.18 ? 21% -0.1 0.13 ? 9% perf-profile.children.cycles-pp.__get_vm_area_node
0.11 ? 20% -0.0 0.08 ? 15% perf-profile.children.cycles-pp.__put_user_nocheck_4
0.13 ? 14% -0.0 0.10 ? 7% perf-profile.children.cycles-pp.flush_tlb_func
0.34 ? 12% +0.1 0.44 ? 11% perf-profile.children.cycles-pp.timerqueue_del
0.61 ? 6% +0.1 0.72 ? 8% perf-profile.children.cycles-pp.irq_enter_rcu
0.58 ? 6% +0.1 0.70 ? 8% perf-profile.children.cycles-pp.tick_irq_enter
0.42 ? 12% +0.1 0.54 ? 11% perf-profile.children.cycles-pp.__remove_hrtimer
0.60 ? 48% -0.3 0.31 ? 15% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.85 ? 10% -0.2 0.64 ? 10% perf-profile.self.cycles-pp.memcpy_toio
0.49 ? 11% -0.1 0.40 ? 8% perf-profile.self.cycles-pp.native_irq_return_iret
0.25 ? 16% -0.1 0.18 ? 22% perf-profile.self.cycles-pp._dl_catch_error
0.09 ? 13% -0.0 0.05 ? 51% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.15 ? 5% -0.0 0.12 ? 18% perf-profile.self.cycles-pp.update_load_avg
0.11 ? 15% +0.1 0.16 ? 18% perf-profile.self.cycles-pp.timerqueue_del



***************************************************************************************************
lkp-csl-2ap4: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/lru-file-mmap-read/vm-scalability/0x500320a

commit:
18788cfa23 ("mm: Support arbitrary THP sizes")
793917d997 ("mm/readahead: Add large folio readahead")
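
The interesting deltas in the table below are mostly reclaim and LRU churn
(pgscan_*, pgsteal_*, pgactivate, workingset_nodereclaim). Those are
ordinary lines in /proc/vmstat, so a trivial filter like the following C
snippet is enough to watch them move while reproducing the run; the counter
names are taken from the table, and the exact set varies by kernel version.

/* Print the reclaim-side counters referenced in the table by
 * filtering /proc/vmstat. Which counters exist varies by kernel. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[256];

	if (!f) { perror("/proc/vmstat"); return 1; }
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "pgscan_", 7) ||
		    !strncmp(line, "pgsteal_", 8) ||
		    !strncmp(line, "pgactivate", 10) ||
		    !strncmp(line, "workingset_", 11))
			fputs(line, stdout);
	fclose(f);
	return 0;
}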

18788cfa23696774 793917d997df2e432f3e9ac126e
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.10 ? 3% +118.3% 0.22 ? 16% vm-scalability.free_time
101518 ? 2% +42.3% 144417 ? 3% vm-scalability.median
411.17 ? 36% +892.2 1303 ? 17% vm-scalability.stddev%
19593900 +45.0% 28415043 ? 3% vm-scalability.throughput
301.57 -34.4% 197.69 ? 2% vm-scalability.time.elapsed_time
301.57 -34.4% 197.69 ? 2% vm-scalability.time.elapsed_time.max
640209 ? 4% -19.7% 514308 ? 10% vm-scalability.time.involuntary_context_switches
15114 +5.2% 15894 vm-scalability.time.percent_of_cpu_this_job_got
42292 -35.8% 27163 ? 3% vm-scalability.time.system_time
3288 ? 4% +29.7% 4264 ? 5% vm-scalability.time.user_time
1.085e+10 ? 3% -51.9% 5.216e+09 ? 7% cpuidle..time
21205215 ? 12% -48.5% 10910479 ? 5% cpuidle..usage
349.92 -29.7% 246.17 uptime.boot
18808 -30.0% 13160 ? 4% uptime.idle
5.17 ? 7% +109.7% 10.83 ? 3% vmstat.cpu.us
21348646 -41.2% 12548500 ? 5% vmstat.memory.free
5766 ? 4% +23.5% 7120 ? 8% vmstat.system.cs
2.715e+08 ? 2% -43.2% 1.542e+08 ? 6% turbostat.IRQ
33367 ? 25% -64.5% 11839 ? 20% turbostat.POLL
43.50 ? 2% +11.1% 48.33 ? 2% turbostat.PkgTmp
252.04 +6.8% 269.14 turbostat.PkgWatt
18.99 ? 2% -5.4 13.63 ? 8% mpstat.cpu.all.idle%
0.00 ? 31% +0.0 0.01 ?106% mpstat.cpu.all.iowait%
1.08 ? 3% +0.7 1.75 ? 7% mpstat.cpu.all.irq%
0.07 ? 3% +0.1 0.14 ? 6% mpstat.cpu.all.soft%
5.82 ? 5% +5.4 11.22 ? 4% mpstat.cpu.all.usr%
165900 ? 4% +7920.0% 13305211 ? 12% meminfo.Active
160338 ? 4% -63.9% 57831 ? 15% meminfo.Active(anon)
5561 ? 11% +2.4e+05% 13247380 ? 12% meminfo.Active(file)
880618 ? 7% -24.3% 666574 ? 2% meminfo.Committed_AS
569221 ? 11% -22.3% 442282 ? 2% meminfo.Inactive(anon)
23000987 ? 3% -46.2% 12372721 ? 4% meminfo.MemFree
388894 +16.7% 453885 ? 2% meminfo.SUnreclaim
422287 ? 14% -55.5% 188091 ? 9% meminfo.Shmem
2.278e+08 ? 3% -95.1% 11200440 ? 15% numa-numastat.node0.local_node
40532177 ? 14% -90.8% 3741196 ? 21% numa-numastat.node0.numa_foreign
2.275e+08 ? 3% -95.1% 11245827 ? 15% numa-numastat.node0.numa_hit
29443860 ? 13% -83.7% 4811729 ? 27% numa-numastat.node0.numa_miss
29476169 ? 13% -83.5% 4859208 ? 27% numa-numastat.node0.other_node
2.303e+08 ? 3% -95.4% 10517523 ? 16% numa-numastat.node1.local_node
37621005 ? 13% -88.6% 4292186 ? 19% numa-numastat.node1.numa_foreign
2.299e+08 ? 3% -95.4% 10603099 ? 16% numa-numastat.node1.numa_hit
41991213 ? 19% -90.5% 3972773 ? 19% numa-numastat.node1.numa_miss
42068264 ? 19% -90.4% 4057461 ? 19% numa-numastat.node1.other_node
2.349e+08 ? 3% -95.2% 11355001 ? 20% numa-numastat.node2.local_node
34589661 ? 21% -88.3% 4047359 ? 16% numa-numastat.node2.numa_foreign
2.345e+08 ? 3% -95.1% 11421655 ? 20% numa-numastat.node2.numa_hit
41161372 ? 19% -91.1% 3677838 ? 19% numa-numastat.node2.numa_miss
41262019 ? 19% -90.9% 3744731 ? 19% numa-numastat.node2.other_node
2.429e+08 -95.5% 10974126 ? 17% numa-numastat.node3.local_node
31729047 ? 13% -86.9% 4152899 ? 24% numa-numastat.node3.numa_foreign
2.425e+08 -95.4% 11034785 ? 17% numa-numastat.node3.numa_hit
31871419 ? 7% -88.2% 3769387 ? 24% numa-numastat.node3.numa_miss
31964737 ? 7% -88.0% 3833705 ? 23% numa-numastat.node3.other_node
18558 ? 21% +16974.0% 3168645 ? 14% numa-meminfo.node0.Active
736.50 ?109% +4.3e+05% 3151953 ? 15% numa-meminfo.node0.Active(file)
34210000 +9.8% 37546409 numa-meminfo.node0.Mapped
5885498 ? 5% -46.0% 3176670 ? 4% numa-meminfo.node0.MemFree
17451 ? 43% +18303.0% 3211606 ? 17% numa-meminfo.node1.Active
13751 ? 59% -64.0% 4950 ? 54% numa-meminfo.node1.Active(anon)
3698 ? 33% +86597.6% 3206655 ? 17% numa-meminfo.node1.Active(file)
86098 ? 84% -81.2% 16195 ?100% numa-meminfo.node1.Inactive(anon)
36059221 +11.3% 40136100 numa-meminfo.node1.Mapped
6127422 ? 6% -48.4% 3164279 ? 5% numa-meminfo.node1.MemFree
79357 ? 69% -90.4% 7595 ? 68% numa-meminfo.node1.Shmem
5745 ? 20% +63238.1% 3639196 ? 20% numa-meminfo.node2.Active
432.83 ?111% +8.4e+05% 3634730 ? 20% numa-meminfo.node2.Active(file)
5992877 ? 3% -47.5% 3148114 ? 5% numa-meminfo.node2.MemFree
82044 ? 2% +30.5% 107052 ? 14% numa-meminfo.node2.SUnreclaim
127440 ? 8% +2819.6% 3720728 ? 14% numa-meminfo.node3.Active
126792 ? 7% -74.1% 32868 ? 32% numa-meminfo.node3.Active(anon)
647.17 ? 88% +5.7e+05% 3687859 ? 14% numa-meminfo.node3.Active(file)
35603773 +10.6% 39372854 numa-meminfo.node3.Mapped
6060136 ? 4% -47.8% 3163708 ? 6% numa-meminfo.node3.MemFree
174654 ? 21% -55.6% 77594 ? 59% numa-meminfo.node3.Shmem
29.60 ± 15% +20.6% 35.71 ± 9% sched_debug.cfs_rq:/.load_avg.avg
23054096 ± 12% -46.9% 12234921 ± 16% sched_debug.cfs_rq:/.min_vruntime.avg
23762670 ± 12% -46.6% 12697185 ± 16% sched_debug.cfs_rq:/.min_vruntime.max
19343278 ± 15% -61.2% 7508563 ± 23% sched_debug.cfs_rq:/.min_vruntime.min
1484122 ± 52% -67.1% 487970 ± 52% sched_debug.cfs_rq:/.spread0.max
768.96 ± 6% -33.8% 509.26 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.avg
186.50 ± 14% +68.2% 313.61 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.stddev
836130 ± 4% +12.9% 943827 ± 3% sched_debug.cpu.avg_idle.avg
1477455 ± 18% +82.9% 2702269 ± 24% sched_debug.cpu.avg_idle.max
143256 ± 15% +91.3% 274043 ± 25% sched_debug.cpu.avg_idle.min
168476 ± 10% -33.2% 112500 ± 11% sched_debug.cpu.clock.avg
168594 ± 10% -33.2% 112546 ± 11% sched_debug.cpu.clock.max
168368 ± 10% -33.2% 112436 ± 11% sched_debug.cpu.clock.min
167148 ± 10% -33.5% 111146 ± 10% sched_debug.cpu.clock_task.avg
167428 ± 10% -33.4% 111447 ± 10% sched_debug.cpu.clock_task.max
157510 ± 10% -34.9% 102572 ± 11% sched_debug.cpu.clock_task.min
10904 ± 6% -13.6% 9420 ± 4% sched_debug.cpu.curr->pid.max
779426 ± 22% +69.3% 1319881 ± 25% sched_debug.cpu.max_idle_balance_cost.max
22416 ± 61% +282.2% 85670 ± 44% sched_debug.cpu.max_idle_balance_cost.stddev
6379 ± 6% -31.4% 4375 ± 9% sched_debug.cpu.nr_switches.avg
3704 ± 7% -43.7% 2086 ± 15% sched_debug.cpu.nr_switches.min
168334 ± 10% -33.2% 112430 ± 11% sched_debug.cpu_clk
167321 ± 10% -33.4% 111418 ± 11% sched_debug.ktime
168815 ± 10% -32.5% 113880 ± 10% sched_debug.sched_clk
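
The sched_debug figures are aggregated from the scheduler's debug
interface; on v5.13+ kernels it lives in debugfs (older kernels expose
/proc/sched_debug instead), so a comparable snapshot would be:

  # requires CONFIG_SCHED_DEBUG and a mounted debugfs
  cat /sys/kernel/debug/sched/debug
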
2616 ± 9% +46.3% 3827 ± 14% proc-vmstat.allocstall_normal
2156058 ± 15% +4474.3% 98624936 ± 55% proc-vmstat.compact_daemon_migrate_scanned
20.00 ± 38% +2.4e+05% 48248 ± 40% proc-vmstat.compact_fail
8500358 ± 8% -73.3% 2268228 ± 46% proc-vmstat.compact_isolated
10223131 ± 8% +3370.9% 3.548e+08 ± 61% proc-vmstat.compact_migrate_scanned
27.00 ± 46% +9.8e+05% 265302 ± 36% proc-vmstat.compact_stall
7.00 ± 73% +3.1e+06% 217053 ± 36% proc-vmstat.compact_success
823.33 ± 4% +1193.8% 10652 ± 56% proc-vmstat.kswapd_low_wmark_hit_quickly
41044 ± 4% -64.6% 14549 ± 15% proc-vmstat.nr_active_anon
1387 ± 11% +2.4e+05% 3353262 ± 12% proc-vmstat.nr_active_file
41328974 +6.4% 43963911 proc-vmstat.nr_file_pages
5852009 ± 2% -46.9% 3109416 ± 4% proc-vmstat.nr_free_pages
142808 ± 11% -22.5% 110708 ± 2% proc-vmstat.nr_inactive_anon
1075 ± 3% -32.2% 728.67 ± 10% proc-vmstat.nr_isolated_file
35690 +2.4% 36540 proc-vmstat.nr_kernel_stack
35775593 +9.8% 39287167 proc-vmstat.nr_mapped
869101 +5.6% 917415 ± 2% proc-vmstat.nr_page_table_pages
107130 ± 15% -55.9% 47235 ± 9% proc-vmstat.nr_shmem
97259 +16.7% 113507 ± 2% proc-vmstat.nr_slab_unreclaimable
41046 ± 4% -64.5% 14554 ± 15% proc-vmstat.nr_zone_active_anon
1387 ± 11% +2.4e+05% 3353349 ± 12% proc-vmstat.nr_zone_active_file
142851 ± 11% -22.5% 110765 ± 2% proc-vmstat.nr_zone_inactive_anon
1.445e+08 ± 14% -88.8% 16233642 ± 19% proc-vmstat.numa_foreign
80928 ± 55% -74.9% 20329 ± 20% proc-vmstat.numa_hint_faults
40260 ± 38% -71.4% 11516 ± 32% proc-vmstat.numa_hint_faults_local
9.344e+08 ± 2% -95.3% 44308201 ± 16% proc-vmstat.numa_hit
9.359e+08 ± 2% -95.3% 44049923 ± 16% proc-vmstat.numa_local
1.445e+08 ± 14% -88.8% 16231729 ± 19% proc-vmstat.numa_miss
1.448e+08 ± 14% -88.6% 16495107 ± 19% proc-vmstat.numa_other
411268 ± 14% -51.4% 199735 ± 35% proc-vmstat.numa_pte_updates
2547 ± 2% +339.4% 11194 ± 53% proc-vmstat.pageoutrun
254135 ± 15% +14971.1% 38301088 ± 5% proc-vmstat.pgactivate
8098345 +12.0% 9067933 ± 4% proc-vmstat.pgalloc_dma32
4244635 ± 8% -73.7% 1116248 ± 46% proc-vmstat.pgmigrate_success
2415 -1.5% 2380 proc-vmstat.pgpgout
86990 ± 2% -23.8% 66253 ± 2% proc-vmstat.pgreuse
1.85e+09 -24.6% 1.396e+09 ± 7% proc-vmstat.pgscan_direct
2175 ± 16% -94.3% 124.67 ± 32% proc-vmstat.pgscan_direct_throttle
1.965e+08 ± 16% +239.4% 6.671e+08 ± 14% proc-vmstat.pgscan_kswapd
9.921e+08 -4.0% 9.527e+08 proc-vmstat.pgsteal_direct
34748892 ± 2% +116.7% 75306059 ± 7% proc-vmstat.pgsteal_kswapd
4957012 -8.8% 4522105 ± 2% proc-vmstat.workingset_nodereclaim
4020 ± 17% -59.1% 1644 ± 74% proc-vmstat.workingset_refault_file
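
Note the compaction counters: compact_stall and compact_success go from
single digits to hundreds of thousands, which is expected once readahead
starts requesting the high-order pages that large folios need. A crude
way to sample the same counters over an interval while the test runs:

  # print selected /proc/vmstat counters, wait, print again; diff by hand
  grep -E '^(numa_hit|pgactivate|compact_stall|pgscan_kswapd) ' /proc/vmstat
  sleep 10
  grep -E '^(numa_hit|pgactivate|compact_stall|pgscan_kswapd) ' /proc/vmstat
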
183.17 ±109% +4.5e+05% 821995 ± 14% numa-vmstat.node0.nr_active_file
1519607 ± 6% -45.2% 832457 ± 4% numa-vmstat.node0.nr_free_pages
8464904 +10.0% 9312159 numa-vmstat.node0.nr_mapped
183.17 ±109% +4.5e+05% 822014 ± 14% numa-vmstat.node0.nr_zone_active_file
40532177 ± 14% -90.8% 3741196 ± 21% numa-vmstat.node0.numa_foreign
2.275e+08 ± 3% -95.1% 11245235 ± 15% numa-vmstat.node0.numa_hit
2.278e+08 ± 3% -95.1% 11199848 ± 15% numa-vmstat.node0.numa_local
29443860 ± 13% -83.7% 4811729 ± 27% numa-vmstat.node0.numa_miss
29476169 ± 13% -83.5% 4859208 ± 27% numa-vmstat.node0.numa_other
3499 ± 59% -63.4% 1280 ± 53% numa-vmstat.node1.nr_active_anon
922.83 ± 33% +90476.4% 835869 ± 17% numa-vmstat.node1.nr_active_file
1586446 ± 5% -47.8% 827706 ± 5% numa-vmstat.node1.nr_free_pages
21322 ± 84% -81.0% 4042 ±100% numa-vmstat.node1.nr_inactive_anon
254.17 ± 5% -24.0% 193.17 ± 8% numa-vmstat.node1.nr_isolated_file
8919519 +11.6% 9957601 numa-vmstat.node1.nr_mapped
19735 ± 68% -90.1% 1947 ± 67% numa-vmstat.node1.nr_shmem
3500 ± 59% -63.4% 1281 ± 53% numa-vmstat.node1.nr_zone_active_anon
923.00 ± 33% +90460.5% 835873 ± 17% numa-vmstat.node1.nr_zone_active_file
21334 ± 83% -81.0% 4056 ±100% numa-vmstat.node1.nr_zone_inactive_anon
37621005 ± 13% -88.6% 4292186 ± 19% numa-vmstat.node1.numa_foreign
2.299e+08 ± 3% -95.4% 10603128 ± 16% numa-vmstat.node1.numa_hit
2.303e+08 ± 3% -95.4% 10517553 ± 16% numa-vmstat.node1.numa_local
41991213 ± 19% -90.5% 3972773 ± 19% numa-vmstat.node1.numa_miss
42068264 ± 19% -90.4% 4057461 ± 19% numa-vmstat.node1.numa_other
1268948 ± 4% -17.2% 1051134 ± 9% numa-vmstat.node1.workingset_nodereclaim
3123 ± 50% -84.8% 476.17 ±183% numa-vmstat.node1.workingset_refault_file
107.83 ±111% +8.8e+05% 944107 ± 20% numa-vmstat.node2.nr_active_file
1554237 ± 2% -46.9% 826010 ± 5% numa-vmstat.node2.nr_free_pages
277.50 ± 3% -45.2% 152.17 ± 11% numa-vmstat.node2.nr_isolated_file
20516 ± 2% +30.5% 26769 ± 14% numa-vmstat.node2.nr_slab_unreclaimable
107.83 ±111% +8.8e+05% 944108 ± 20% numa-vmstat.node2.nr_zone_active_file
34589661 ± 21% -88.3% 4047359 ± 16% numa-vmstat.node2.numa_foreign
2.345e+08 ± 3% -95.1% 11421811 ± 20% numa-vmstat.node2.numa_hit
2.349e+08 ± 3% -95.2% 11355157 ± 20% numa-vmstat.node2.numa_local
41161372 ± 19% -91.1% 3677838 ± 19% numa-vmstat.node2.numa_miss
41262019 ± 19% -90.9% 3744731 ± 19% numa-vmstat.node2.numa_other
1263735 ± 6% -10.7% 1128659 ± 4% numa-vmstat.node2.workingset_nodereclaim
31933 ± 7% -73.0% 8623 ± 34% numa-vmstat.node3.nr_active_anon
161.00 ± 88% +6e+05% 961368 ± 14% numa-vmstat.node3.nr_active_file
1568641 ± 4% -47.2% 828497 ± 5% numa-vmstat.node3.nr_free_pages
264.00 ± 9% -44.3% 147.00 ± 14% numa-vmstat.node3.nr_isolated_file
8802026 +10.9% 9759037 numa-vmstat.node3.nr_mapped
43822 ± 20% -54.4% 20003 ± 57% numa-vmstat.node3.nr_shmem
31933 ± 7% -73.0% 8625 ± 34% numa-vmstat.node3.nr_zone_active_anon
161.00 ± 88% +6e+05% 961370 ± 14% numa-vmstat.node3.nr_zone_active_file
31729047 ± 13% -86.9% 4152899 ± 24% numa-vmstat.node3.numa_foreign
2.425e+08 -95.5% 11033782 ± 17% numa-vmstat.node3.numa_hit
2.429e+08 -95.5% 10973124 ± 17% numa-vmstat.node3.numa_local
31871419 ± 7% -88.2% 3769387 ± 24% numa-vmstat.node3.numa_miss
31964738 ± 7% -88.0% 3833705 ± 23% numa-vmstat.node3.numa_other
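
numa-vmstat is the per-node view of the same counters shown under
proc-vmstat above; each node also exports them directly, e.g.:

  cat /sys/devices/system/node/node0/vmstat
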
2.978e+10 +13.3% 3.375e+10 perf-stat.i.branch-instructions
32180693 ± 4% -14.4% 27541092 ± 7% perf-stat.i.branch-misses
5.15e+08 ± 3% +19.6% 6.16e+08 ± 4% perf-stat.i.cache-references
5528 ± 4% +30.5% 7216 ± 10% perf-stat.i.context-switches
4.07 +6.5% 4.33 ± 3% perf-stat.i.cpi
4.639e+11 +8.0% 5.009e+11 perf-stat.i.cpu-cycles
270.77 +9.2% 295.80 ± 4% perf-stat.i.cpu-migrations
14735665 ± 5% -32.1% 9999207 ± 8% perf-stat.i.dTLB-load-misses
2.788e+10 +4.1% 2.903e+10 perf-stat.i.dTLB-loads
0.01 ± 29% +0.0 0.02 ± 10% perf-stat.i.dTLB-store-miss-rate%
838683 ± 5% +22.9% 1030872 ± 3% perf-stat.i.dTLB-store-misses
6.544e+09 -30.2% 4.568e+09 ± 2% perf-stat.i.dTLB-stores
3574432 +25.9% 4499739 ± 6% perf-stat.i.iTLB-load-misses
1.084e+11 +4.2% 1.13e+11 perf-stat.i.instructions
29122 -15.1% 24732 ± 6% perf-stat.i.instructions-per-iTLB-miss
0.35 ± 4% -20.1% 0.28 ± 6% perf-stat.i.ipc
111142 +51.4% 168291 ± 2% perf-stat.i.major-faults
2.39 +7.8% 2.58 perf-stat.i.metric.GHz
332.60 +5.1% 349.53 perf-stat.i.metric.M/sec
219873 ± 2% +51.0% 331983 ± 2% perf-stat.i.minor-faults
18812619 ± 3% +50.5% 28318976 ± 10% perf-stat.i.node-load-misses
4668466 ± 6% +72.7% 8060213 ± 11% perf-stat.i.node-loads
60.86 +19.9 80.75 perf-stat.i.node-store-miss-rate%
11184776 ± 3% -70.5% 3297078 ± 4% perf-stat.i.node-stores
331016 +51.1% 500275 ± 2% perf-stat.i.page-faults
4.70 +15.5% 5.43 ± 3% perf-stat.overall.MPKI
0.11 ± 3% -0.0 0.08 ± 7% perf-stat.overall.branch-miss-rate%
31.24 -1.6 29.60 ± 2% perf-stat.overall.cache-miss-rate%
0.05 ± 5% -0.0 0.03 ± 7% perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 3% +0.0 0.02 ± 3% perf-stat.overall.dTLB-store-miss-rate%
30846 -17.5% 25451 ± 6% perf-stat.overall.instructions-per-iTLB-miss
54.45 +26.4 80.82 perf-stat.overall.node-store-miss-rate%
6878 -32.1% 4673 perf-stat.overall.path-length
3.01e+10 +12.7% 3.391e+10 perf-stat.ps.branch-instructions
32017180 ± 4% -15.6% 27019688 ± 7% perf-stat.ps.branch-misses
5.163e+08 ± 2% +19.4% 6.167e+08 ± 4% perf-stat.ps.cache-references
5664 ± 4% +23.3% 6986 ± 9% perf-stat.ps.context-switches
4.801e+11 +5.8% 5.079e+11 perf-stat.ps.cpu-cycles
269.63 +6.4% 286.97 ± 4% perf-stat.ps.cpu-migrations
15001073 ± 5% -32.9% 10065925 ± 8% perf-stat.ps.dTLB-load-misses
2.823e+10 +3.4% 2.918e+10 perf-stat.ps.dTLB-loads
840470 ± 4% +22.8% 1031964 ± 3% perf-stat.ps.dTLB-store-misses
6.581e+09 -30.6% 4.57e+09 ± 2% perf-stat.ps.dTLB-stores
3558560 +25.8% 4478004 ± 6% perf-stat.ps.iTLB-load-misses
1.098e+11 +3.4% 1.135e+11 perf-stat.ps.instructions
110858 +52.3% 168813 ± 2% perf-stat.ps.major-faults
219045 +51.8% 332439 ± 2% perf-stat.ps.minor-faults
18998997 ± 3% +49.3% 28360434 ± 10% perf-stat.ps.node-load-misses
4608425 ± 6% +71.4% 7899142 ± 10% perf-stat.ps.node-loads
11135414 ± 3% -70.6% 3278444 ± 4% perf-stat.ps.node-stores
329904 +51.9% 501252 ± 2% perf-stat.ps.page-faults
3.324e+13 -32.1% 2.258e+13 perf-stat.total.instructions
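
The perf-stat rows are hardware event counts gathered alongside the
workload (roughly: ".i" values are per sampling interval, ".ps" per
second, ".overall" whole-run derived ratios, as LKP reports usually
present them). A rough system-wide equivalent while fio is running:

  perf stat -a -e instructions,cycles,dTLB-load-misses,dTLB-store-misses,iTLB-load-misses sleep 10
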
88.08 -62.2 25.91 ± 39% perf-profile.calltrace.cycles-pp.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
61.68 ± 3% -36.0 25.68 ± 39% perf-profile.calltrace.cycles-pp.folio_alloc.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
61.64 ± 3% -36.0 25.68 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
55.80 ± 5% -30.1 25.65 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
55.22 ± 5% -29.6 25.59 ± 39% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc.page_cache_ra_unbounded
20.78 ± 11% -20.8 0.00 perf-profile.calltrace.cycles-pp.filemap_add_folio.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
52.64 ± 5% -19.5 33.18 ± 8% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
52.68 ± 5% -19.5 33.23 ± 8% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
18.43 ± 12% -18.4 0.00 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
18.39 ± 12% -18.4 0.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded.filemap_fault
17.74 ± 12% -17.7 0.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio.page_cache_ra_unbounded
55.38 ± 6% -14.3 41.11 ± 16% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc
56.25 ± 6% -14.3 41.99 ± 17% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages
25.36 ± 10% -12.3 13.07 ± 18% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node
25.22 ± 9% -12.1 13.12 ± 18% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
88.37 -11.6 76.74 ± 5% perf-profile.calltrace.cycles-pp.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault.__handle_mm_fault
88.37 -11.6 76.75 ± 5% perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
88.37 -11.6 76.75 ± 5% perf-profile.calltrace.cycles-pp.__xfs_filemap_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
90.28 -11.1 79.20 ± 5% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
90.32 -11.0 79.34 ± 5% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
90.45 -10.9 79.57 ± 5% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
90.50 -10.8 79.66 ± 4% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
90.50 -10.8 79.66 ± 4% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
90.56 -10.8 79.74 ± 4% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
94.36 -8.1 86.27 ± 4% perf-profile.calltrace.cycles-pp.do_access
17.70 ± 12% -5.9 11.76 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru
17.73 ± 12% -5.9 11.84 ± 14% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio
5.77 ± 25% -5.8 0.00 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
5.51 ± 9% -5.5 0.00 perf-profile.calltrace.cycles-pp.read_pages.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault.__do_fault
5.50 ± 8% -5.5 0.00 perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_fault.__xfs_filemap_fault
5.46 ± 27% -5.5 0.00 perf-profile.calltrace.cycles-pp.rmqueue_bulk.get_page_from_freelist.__alloc_pages.folio_alloc.page_cache_ra_unbounded
5.29 ± 9% -5.3 0.00 perf-profile.calltrace.cycles-pp.iomap_readpage_iter.iomap_readahead.read_pages.page_cache_ra_unbounded.filemap_fault
5.12 ± 28% -5.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages.folio_alloc
5.12 ± 28% -5.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.rmqueue_bulk.get_page_from_freelist.__alloc_pages
11.37 ± 9% -5.0 6.38 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec
11.41 ± 9% -5.0 6.44 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node
5.37 ± 13% -4.9 0.47 ± 70% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
11.50 ± 9% -4.9 6.60 ± 16% perf-profile.calltrace.cycles-pp.lru_note_cost.shrink_inactive_list.shrink_lruvec.shrink_node.do_try_to_free_pages
0.76 ± 8% +0.5 1.28 ± 20% perf-profile.calltrace.cycles-pp.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
0.79 ± 7% +0.5 1.33 ± 19% perf-profile.calltrace.cycles-pp.xfs_filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.40 ± 9% +0.5 1.95 ± 8% perf-profile.calltrace.cycles-pp.shrink_lruvec.shrink_node.balance_pgdat.kswapd.kthread
1.40 ± 9% +0.5 1.95 ± 8% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
1.43 ± 9% +0.6 2.02 ± 8% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
0.27 ±100% +0.7 0.96 ± 23% perf-profile.calltrace.cycles-pp.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault
0.28 ±100% +0.7 1.00 ± 23% perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.xfs_filemap_map_pages.do_fault.__handle_mm_fault
0.00 +0.7 0.74 ± 26% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.page_add_file_rmap.do_set_pte.filemap_map_pages.xfs_filemap_map_pages
0.56 ± 46% +0.8 1.35 ± 10% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node.balance_pgdat
0.00 +1.0 1.00 ± 25% perf-profile.calltrace.cycles-pp.try_charge_memcg.charge_memcg.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio
0.00 +1.0 1.04 ± 21% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages
0.00 +1.0 1.04 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages.folio_alloc
0.00 +1.1 1.15 ± 26% perf-profile.calltrace.cycles-pp.charge_memcg.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.ondemand_readahead
0.00 +1.2 1.17 ± 21% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +1.2 1.19 ± 58% perf-profile.calltrace.cycles-pp.uncharge_batch.__mem_cgroup_uncharge.free_compound_page.shrink_page_list.shrink_inactive_list
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.calltrace.cycles-pp.ret_from_fork
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +1.3 1.32 ± 29% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.__filemap_add_folio.filemap_add_folio.ondemand_readahead.filemap_fault
0.00 +1.4 1.45 ± 54% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge.free_compound_page.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.00 +1.5 1.45 ± 54% perf-profile.calltrace.cycles-pp.free_compound_page.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +1.5 1.50 ± 81% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__free_pages_ok.shrink_page_list.shrink_inactive_list
0.00 +1.5 1.53 ± 80% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__free_pages_ok.shrink_page_list.shrink_inactive_list.shrink_lruvec
0.00 +1.6 1.61 ± 77% perf-profile.calltrace.cycles-pp.__free_pages_ok.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
0.00 +1.7 1.74 ± 26% perf-profile.calltrace.cycles-pp.__filemap_add_folio.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault
4.68 ± 5% +2.2 6.89 ± 15% perf-profile.calltrace.cycles-pp.do_rw_once
0.00 +3.5 3.49 ± 35% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.memset_erms.iomap_readpage_iter.iomap_readahead.read_pages
0.00 +6.7 6.71 ± 14% perf-profile.calltrace.cycles-pp.memset_erms.iomap_readpage_iter.iomap_readahead.read_pages.filemap_fault
0.00 +9.0 8.97 ± 13% perf-profile.calltrace.cycles-pp.iomap_readpage_iter.iomap_readahead.read_pages.filemap_fault.__xfs_filemap_fault
0.00 +9.1 9.05 ± 32% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages.folio_alloc
0.00 +9.1 9.10 ± 12% perf-profile.calltrace.cycles-pp.iomap_readahead.read_pages.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +9.1 9.10 ± 12% perf-profile.calltrace.cycles-pp.read_pages.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
0.00 +9.1 9.10 ± 31% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +9.7 9.72 ± 31% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault
0.00 +11.8 11.84 ± 14% perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__pagevec_lru_add.folio_add_lru.filemap_add_folio.ondemand_readahead
0.00 +12.4 12.38 ± 14% perf-profile.calltrace.cycles-pp.__pagevec_lru_add.folio_add_lru.filemap_add_folio.ondemand_readahead.filemap_fault
0.00 +12.4 12.39 ± 14% perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault
0.00 +14.1 14.13 ± 14% perf-profile.calltrace.cycles-pp.filemap_add_folio.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +15.5 15.54 ± 44% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead
0.00 +17.8 17.76 ± 42% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault
0.00 +27.5 27.50 ± 36% perf-profile.calltrace.cycles-pp.__alloc_pages.folio_alloc.ondemand_readahead.filemap_fault.__xfs_filemap_fault
0.00 +27.5 27.51 ± 36% perf-profile.calltrace.cycles-pp.folio_alloc.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +41.7 41.69 ± 23% perf-profile.calltrace.cycles-pp.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault.do_fault
88.08 -62.1 25.98 ± 39% perf-profile.children.cycles-pp.page_cache_ra_unbounded
54.48 ± 5% -19.0 35.44 ± 8% perf-profile.children.cycles-pp.shrink_inactive_list
54.50 ± 5% -19.0 35.47 ± 8% perf-profile.children.cycles-pp.shrink_lruvec
56.66 ± 6% -14.4 42.30 ± 17% perf-profile.children.cycles-pp.do_try_to_free_pages
56.67 ± 6% -14.4 42.31 ± 17% perf-profile.children.cycles-pp.try_to_free_pages
58.09 ± 6% -13.8 44.30 ± 16% perf-profile.children.cycles-pp.shrink_node
68.98 ± 2% -13.6 55.40 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
57.28 ± 6% -12.7 44.62 ± 15% perf-profile.children.cycles-pp.__alloc_pages_slowpath
88.37 -11.5 76.88 ± 5% perf-profile.children.cycles-pp.filemap_fault
88.37 -11.5 76.89 ± 5% perf-profile.children.cycles-pp.__do_fault
88.37 -11.5 76.89 ± 5% perf-profile.children.cycles-pp.__xfs_filemap_fault
90.49 -11.0 79.46 ± 5% perf-profile.children.cycles-pp.__handle_mm_fault
90.30 -10.9 79.36 ± 5% perf-profile.children.cycles-pp.do_fault
90.63 -10.8 79.78 ± 5% perf-profile.children.cycles-pp.handle_mm_fault
90.64 -10.8 79.86 ± 5% perf-profile.children.cycles-pp.do_user_addr_fault
90.64 -10.8 79.88 ± 5% perf-profile.children.cycles-pp.exc_page_fault
90.70 -10.7 79.96 ± 5% perf-profile.children.cycles-pp.asm_exc_page_fault
10.20 ± 18% -8.7 1.50 ± 17% perf-profile.children.cycles-pp._raw_spin_lock
94.68 -7.5 87.16 ± 4% perf-profile.children.cycles-pp.do_access
20.78 ± 11% -6.4 14.34 ± 14% perf-profile.children.cycles-pp.filemap_add_folio
6.28 ± 24% -5.9 0.36 ± 42% perf-profile.children.cycles-pp.rmqueue_bulk
18.47 ± 12% -5.9 12.58 ± 14% perf-profile.children.cycles-pp.folio_add_lru
18.50 ± 12% -5.8 12.65 ± 14% perf-profile.children.cycles-pp.__pagevec_lru_add
17.85 ± 12% -5.6 12.29 ± 14% perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
12.08 ± 9% -5.1 7.00 ± 16% perf-profile.children.cycles-pp.lru_note_cost
5.56 ± 13% -4.6 0.96 ± 16% perf-profile.children.cycles-pp.__remove_mapping
2.26 ± 8% -2.1 0.20 ± 29% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.27 ± 13% -1.2 0.12 ± 19% perf-profile.children.cycles-pp.workingset_eviction
0.71 -0.6 0.16 ± 12% perf-profile.children.cycles-pp.__list_del_entry_valid
0.61 ± 12% -0.5 0.13 ± 57% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.96 ± 6% -0.5 0.49 ± 23% perf-profile.children.cycles-pp.free_pcppages_bulk
0.46 ± 10% -0.4 0.04 ± 75% perf-profile.children.cycles-pp.workingset_age_nonresident
0.65 ± 5% -0.4 0.28 ± 12% perf-profile.children.cycles-pp.isolate_lru_pages
0.96 ± 6% -0.3 0.64 ± 14% perf-profile.children.cycles-pp.folio_referenced
0.26 ± 5% -0.2 0.08 ± 12% perf-profile.children.cycles-pp.__free_one_page
0.24 ± 5% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.down_read
0.20 ± 18% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.move_pages_to_lru
0.14 ± 4% -0.1 0.02 ± 99% perf-profile.children.cycles-pp.__might_resched
0.20 ± 6% -0.1 0.08 ± 13% perf-profile.children.cycles-pp.xas_load
0.15 ± 42% -0.1 0.04 ±104% perf-profile.children.cycles-pp.alloc_pages_vma
0.30 ± 11% -0.1 0.21 ± 19% perf-profile.children.cycles-pp.xas_create
0.20 ± 7% -0.1 0.14 ± 23% perf-profile.children.cycles-pp.filemap_unaccount_folio
0.14 ± 4% -0.1 0.08 ± 17% perf-profile.children.cycles-pp.next_uptodate_page
0.08 ± 6% -0.0 0.04 ± 45% perf-profile.children.cycles-pp.__list_add_valid
0.06 ± 13% +0.0 0.08 ± 13% perf-profile.children.cycles-pp.count_shadow_nodes
0.05 ± 7% +0.0 0.09 ± 14% perf-profile.children.cycles-pp.__mod_zone_page_state
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.update_load_avg
0.06 ± 9% +0.1 0.11 ± 22% perf-profile.children.cycles-pp.release_pages
0.00 +0.1 0.06 ± 16% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.flush_tlb_func
0.00 +0.1 0.07 ± 37% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.02 ±141% +0.1 0.09 ± 10% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.07 ± 38% perf-profile.children.cycles-pp.sysvec_call_function_single
0.00 +0.1 0.08 ± 16% perf-profile.children.cycles-pp.iomap_releasepage
0.01 ±223% +0.1 0.09 ± 22% perf-profile.children.cycles-pp.irq_exit_rcu
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.try_to_release_page
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.filemap_release_folio
0.08 ± 4% +0.1 0.17 ± 15% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 ± 17% perf-profile.children.cycles-pp.__mod_lruvec_kmem_state
0.18 ± 4% +0.1 0.28 ± 14% perf-profile.children.cycles-pp.__mod_lruvec_state
0.11 ± 6% +0.1 0.22 ± 16% perf-profile.children.cycles-pp.scheduler_tick
0.16 ± 4% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.__mod_node_page_state
0.08 ± 54% +0.1 0.20 ± 44% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.1 0.13 ± 7% perf-profile.children.cycles-pp.folio_mapcount
0.15 ± 6% +0.1 0.30 ± 17% perf-profile.children.cycles-pp.tick_sched_handle
0.15 ± 4% +0.1 0.30 ± 17% perf-profile.children.cycles-pp.update_process_times
0.16 ± 5% +0.2 0.31 ± 17% perf-profile.children.cycles-pp.tick_sched_timer
0.06 ± 52% +0.2 0.26 ± 52% perf-profile.children.cycles-pp.workingset_update_node
0.05 ± 76% +0.2 0.25 ± 54% perf-profile.children.cycles-pp.list_lru_add
0.22 ± 4% +0.2 0.43 ± 16% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.21 ± 61% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.46 ± 4% +0.2 0.67 ± 19% perf-profile.children.cycles-pp.xas_store
0.00 +0.2 0.22 ± 47% perf-profile.children.cycles-pp.uncharge_folio
0.00 +0.2 0.22 ± 61% perf-profile.children.cycles-pp.folio_mark_accessed
0.12 ± 10% +0.2 0.34 ± 13% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.2 0.24 ± 52% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.30 ± 3% +0.2 0.55 ± 16% perf-profile.children.cycles-pp.hrtimer_interrupt
0.31 ± 3% +0.2 0.56 ± 16% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.18 ± 7% +0.3 0.46 ± 13% perf-profile.children.cycles-pp.native_irq_return_iret
0.34 ± 4% +0.3 0.65 ± 15% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.24 ± 8% +0.3 0.55 ± 25% perf-profile.children.cycles-pp.__count_memcg_events
0.14 ± 37% +0.3 0.47 ± 24% perf-profile.children.cycles-pp.drain_local_pages_wq
0.14 ± 37% +0.3 0.47 ± 24% perf-profile.children.cycles-pp.drain_pages_zone
0.15 ± 36% +0.4 0.51 ± 25% perf-profile.children.cycles-pp.process_one_work
0.15 ± 34% +0.4 0.52 ± 25% perf-profile.children.cycles-pp.worker_thread
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.unmap_vmas
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.4 0.39 ± 63% perf-profile.children.cycles-pp.zap_pte_range
0.00 +0.4 0.40 ± 63% perf-profile.children.cycles-pp.munmap
0.00 +0.4 0.41 ± 63% perf-profile.children.cycles-pp.__x64_sys_munmap
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.__do_munmap
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.unmap_region
0.00 +0.4 0.41 ± 62% perf-profile.children.cycles-pp.__vm_munmap
0.50 ± 10% +0.5 0.97 ± 23% perf-profile.children.cycles-pp.page_add_file_rmap
0.51 ± 9% +0.5 1.00 ± 23% perf-profile.children.cycles-pp.do_set_pte
0.77 ± 7% +0.5 1.30 ± 19% perf-profile.children.cycles-pp.filemap_map_pages
1.75 ± 2% +0.5 2.28 ± 18% perf-profile.children.cycles-pp.rmap_walk_file
0.79 ± 7% +0.5 1.33 ± 19% perf-profile.children.cycles-pp.xfs_filemap_map_pages
0.24 ± 16% +0.5 0.78 ± 25% perf-profile.children.cycles-pp.page_counter_try_charge
1.43 ± 9% +0.6 2.00 ± 8% perf-profile.children.cycles-pp.balance_pgdat
1.43 ± 9% +0.6 2.02 ± 8% perf-profile.children.cycles-pp.kswapd
0.18 ± 16% +0.6 0.79 ± 39% perf-profile.children.cycles-pp.page_counter_cancel
0.32 ± 14% +0.7 1.01 ± 24% perf-profile.children.cycles-pp.try_charge_memcg
0.08 ± 14% +0.7 0.78 ± 38% perf-profile.children.cycles-pp.propagate_protected_usage
0.00 +0.7 0.72 ± 49% perf-profile.children.cycles-pp.free_transhuge_page
0.87 ± 4% +0.7 1.62 ± 23% perf-profile.children.cycles-pp.try_to_unmap
1.25 ± 9% +0.9 2.12 ± 28% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.70 ± 4% +0.9 1.60 ± 23% perf-profile.children.cycles-pp.try_to_unmap_one
0.24 ± 7% +1.0 1.21 ± 29% perf-profile.children.cycles-pp.page_remove_rmap
0.21 ± 14% +1.1 1.34 ± 40% perf-profile.children.cycles-pp.page_counter_uncharge
1.80 ± 10% +1.2 3.02 ± 14% perf-profile.children.cycles-pp.ret_from_fork
1.76 ± 9% +1.3 3.02 ± 15% perf-profile.children.cycles-pp.kthread
0.23 ± 11% +1.3 1.57 ± 42% perf-profile.children.cycles-pp.uncharge_batch
0.00 +1.7 1.72 ± 43% perf-profile.children.cycles-pp.free_compound_page
0.00 +1.7 1.75 ± 43% perf-profile.children.cycles-pp.__mem_cgroup_uncharge
0.00 +2.1 2.08 ± 64% perf-profile.children.cycles-pp.__free_pages_ok
0.46 ± 5% +2.1 2.61 ± 28% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
5.50 ± 8% +3.7 9.17 ± 12% perf-profile.children.cycles-pp.iomap_readahead
5.51 ± 9% +3.7 9.18 ± 12% perf-profile.children.cycles-pp.read_pages
5.32 ± 9% +3.7 9.04 ± 13% perf-profile.children.cycles-pp.iomap_readpage_iter
2.98 ± 10% +5.7 8.69 ± 13% perf-profile.children.cycles-pp.memset_erms
17.88 ± 12% +8.1 25.98 ± 16% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +41.7 41.74 ± 23% perf-profile.children.cycles-pp.ondemand_readahead
68.94 ± 2% -13.5 55.40 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.20 ± 8% -2.1 0.07 ± 21% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.80 ± 14% -0.7 0.07 ± 16% perf-profile.self.cycles-pp.workingset_eviction
0.70 -0.5 0.16 ± 12% perf-profile.self.cycles-pp.__list_del_entry_valid
0.60 ± 11% -0.5 0.13 ± 57% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.46 ± 10% -0.4 0.04 ± 75% perf-profile.self.cycles-pp.workingset_age_nonresident
0.28 ± 13% -0.2 0.08 ± 75% perf-profile.self.cycles-pp.charge_memcg
0.24 ± 3% -0.2 0.08 ± 11% perf-profile.self.cycles-pp.shrink_page_list
0.19 ± 12% -0.1 0.04 ±118% perf-profile.self.cycles-pp.__mem_cgroup_charge
0.23 ± 5% -0.1 0.09 ± 10% perf-profile.self.cycles-pp._raw_spin_lock
0.22 ± 5% -0.1 0.12 ± 10% perf-profile.self.cycles-pp.isolate_lru_pages
0.16 ± 4% -0.1 0.05 ± 45% perf-profile.self.cycles-pp.xas_load
0.20 ± 3% -0.1 0.10 ± 9% perf-profile.self.cycles-pp.xas_create
0.19 ± 5% -0.1 0.10 ± 21% perf-profile.self.cycles-pp.__pagevec_lru_add
0.17 ± 5% -0.1 0.08 ± 12% perf-profile.self.cycles-pp.down_read
0.30 ± 4% -0.1 0.21 ± 7% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.14 ± 5% -0.1 0.07 ± 14% perf-profile.self.cycles-pp.next_uptodate_page
0.07 -0.0 0.04 ± 45% perf-profile.self.cycles-pp.__list_add_valid
0.05 ± 7% +0.0 0.09 ± 14% perf-profile.self.cycles-pp.__mod_zone_page_state
0.06 ± 6% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.xas_store
0.02 ±141% +0.1 0.07 ± 12% perf-profile.self.cycles-pp.count_shadow_nodes
0.00 +0.1 0.06 ± 16% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.page_remove_rmap
0.01 ±223% +0.1 0.08 ± 25% perf-profile.self.cycles-pp.release_pages
0.06 ± 8% +0.1 0.14 ± 14% perf-profile.self.cycles-pp.filemap_map_pages
0.06 ± 9% +0.1 0.16 ± 21% perf-profile.self.cycles-pp.lru_note_cost
0.00 +0.1 0.12 ± 14% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 6% +0.1 0.27 ± 14% perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.13 ± 7% perf-profile.self.cycles-pp.folio_mapcount
0.00 +0.1 0.14 ± 46% perf-profile.self.cycles-pp.uncharge_batch
0.00 +0.2 0.15 ± 19% perf-profile.self.cycles-pp.page_add_file_rmap
0.08 ± 12% +0.2 0.22 ± 24% perf-profile.self.cycles-pp.try_charge_memcg
0.21 ± 7% +0.2 0.37 ± 18% perf-profile.self.cycles-pp.__mod_lruvec_page_state
0.00 +0.2 0.22 ± 47% perf-profile.self.cycles-pp.uncharge_folio
0.20 ± 8% +0.3 0.46 ± 27% perf-profile.self.cycles-pp.__count_memcg_events
0.18 ± 7% +0.3 0.46 ± 13% perf-profile.self.cycles-pp.native_irq_return_iret
0.04 ± 44% +0.4 0.40 ± 27% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.19 ± 16% +0.4 0.56 ± 24% perf-profile.self.cycles-pp.page_counter_try_charge
0.18 ± 15% +0.6 0.78 ± 38% perf-profile.self.cycles-pp.page_counter_cancel
0.08 ± 14% +0.7 0.77 ± 38% perf-profile.self.cycles-pp.propagate_protected_usage
3.37 ± 8% +2.8 6.19 ± 16% perf-profile.self.cycles-pp.do_access
2.95 ± 9% +5.5 8.47 ± 13% perf-profile.self.cycles-pp.memset_erms
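
The three perf-profile views above are call-graph cycle percentages:
"calltrace" keeps the full stack, "children" is inclusive cost per
function, "self" is exclusive cost. The path change is visible directly
in the data: page_cache_ra_unbounded falls from ~88% to ~26% while
ondemand_readahead appears at ~42%, consistent with fault-time readahead
now going through the large-folio allocation path. A comparable profile
can be captured while the workload runs with:

  perf record -a -g -- sleep 10
  perf report --no-children    # "self" view; omit the flag for "children"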

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://01.org/lkp

Attachments:
config-5.17.0-rc4-00163-g793917d997df (163.91 kB)
job-script (8.58 kB)
job.yaml (5.93 kB)
reproduce (981.00 B)