Greetings,
FYI, we noticed a -90.0% regression of aim7.jobs-per-min due to commit:
commit: b11c83553a02b58ee0934cb80c8852109f981451 ("f2fs: fix deadloop in foreground GC")
https://git.kernel.org/cgit/linux/kernel/git/chao/linux.git bugzilla
in testcase: aim7
on test machine: 88 threads 2 sockets Intel(R) Xeon(R) Gold 6238M CPU @ 2.10GHz with 128G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: f2fs
test: sync_disk_rw
load: 100
cpufreq_governor: performance
ucode: 0x500320a
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of a multiuser system.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
In addition, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min -86.1% regression |
| test machine | 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1BRD_48G |
| | fs=f2fs |
| | load=3000 |
| | test=disk_src |
| | ucode=0xd000331 |
+------------------+---------------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.open.ops_per_sec -49.0% regression |
| test machine | 96 threads 2 sockets Ice Lake with 256G memory |
| test parameters | class=filesystem |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=f2fs |
| | nr_threads=10% |
| | test=open |
| | testtime=60s |
| | ucode=0xb000280 |
+------------------+---------------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <[email protected]>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
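For reference, a rough sketch of how the disk/md/fs parameters above (4BRD_12G, RAID0, f2fs) map onto a storage setup; the exact sizes, device names, and mount options are driven by the attached job.yaml, so treat this as an approximation of what lkp sets up rather than the actual job:
modprobe brd rd_nr=4 rd_size=$((12 * 1024 * 1024))    # four ram-backed block devices, 12G each (rd_size is in KiB)
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/ram0 /dev/ram1 /dev/ram2 /dev/ram3    # assemble them into a RAID0 md array
mkfs.f2fs /dev/md0                                    # format the array with f2fs
mount -t f2fs /dev/md0 /mnt/f2fs                      # /mnt/f2fs is an arbitrary mount point chosen for illustration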
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-11/performance/4BRD_12G/f2fs/x86_64-rhel-8.3/100/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp9/sync_disk_rw/aim7/0x500320a
commit:
86aef22d0f ("f2fs: fix to do sanity check on total_data_blocks")
b11c83553a ("f2fs: fix deadloop in foreground GC")
86aef22d0f7cc71d b11c83553a02b58ee0934cb80c8
---------------- ---------------------------
%stddev %change %stddev
\ | \
10040 -90.0% 1001 ? 2% aim7.jobs-per-min
59.79 +903.0% 599.72 ? 2% aim7.time.elapsed_time
59.79 +903.0% 599.72 ? 2% aim7.time.elapsed_time.max
86350216 +750.0% 7.34e+08 ? 2% aim7.time.file_system_outputs
1257273 -6.5% 1175394 aim7.time.involuntary_context_switches
9897 +73.9% 17213 ? 4% aim7.time.minor_page_faults
660.30 +293.9% 2600 ? 2% aim7.time.system_time
31524696 -20.4% 25089511 aim7.time.voluntary_context_switches
4.502e+09 +995.3% 4.931e+10 ? 2% cpuidle..time
38190504 +227.2% 1.25e+08 ? 2% cpuidle..usage
97.24 +556.8% 638.71 ? 2% uptime.boot
7302 +609.9% 51844 ? 2% uptime.idle
86.19 +7.7% 92.83 iostat.cpu.idle
0.08 ? 13% +1233.8% 1.09 ? 10% iostat.cpu.iowait
13.55 -55.4% 6.04 ? 2% iostat.cpu.system
0.08 ? 14% +1.0 1.10 ? 10% mpstat.cpu.all.iowait%
12.70 ? 2% -7.8 4.89 ? 2% mpstat.cpu.all.sys%
0.15 ? 7% -0.1 0.03 ? 13% mpstat.cpu.all.usr%
4238331 ? 2% +78.0% 7544519 ? 5% numa-numastat.node0.local_node
4299923 ? 2% +76.2% 7574783 ? 5% numa-numastat.node0.numa_hit
4468617 ? 2% +76.7% 7895118 ? 4% numa-numastat.node1.local_node
4487657 ? 2% +77.1% 7945602 ? 4% numa-numastat.node1.numa_hit
85.83 +7.4% 92.17 vmstat.cpu.id
679039 -10.4% 608375 vmstat.io.bo
1.133e+08 -27.1% 82553195 vmstat.memory.free
0.00 +1e+102% 1.00 vmstat.procs.b
10.67 ? 8% -62.5% 4.00 vmstat.procs.r
1126012 -91.3% 97851 vmstat.system.cs
224153 ? 2% -18.7% 182243 vmstat.system.in
494.00 -63.9% 178.33 turbostat.Avg_MHz
18.17 -10.8 7.33 ? 2% turbostat.Busy%
2724 -10.3% 2443 turbostat.Bzy_MHz
32021437 ? 6% +226.6% 1.046e+08 ? 13% turbostat.C1E
78.54 ? 3% +18.0% 92.64 turbostat.CPU%c1
3.29 ? 74% -99.0% 0.03 ? 90% turbostat.CPU%c6
14161632 ? 2% +675.5% 1.098e+08 ? 2% turbostat.IRQ
140002 +338.5% 613853 ? 8% turbostat.POLL
0.05 -0.0 0.01 ? 31% turbostat.POLL%
51.83 -4.2% 49.67 turbostat.PkgTmp
158.67 -17.6% 130.67 turbostat.PkgWatt
14.64 -10.9% 13.05 turbostat.RAMWatt
27273 ? 4% +306.9% 110965 meminfo.Active
21839 ? 4% -13.3% 18930 ? 5% meminfo.Active(anon)
5433 ? 2% +1593.9% 92035 meminfo.Active(file)
79893 +139.4% 191225 meminfo.AnonHugePages
505015 -13.0% 439371 meminfo.Committed_AS
48290 ? 7% -59.2% 19722 meminfo.Dirty
84156 -20.7% 66713 meminfo.Inactive(file)
136537 +54.5% 210911 meminfo.KReclaimable
34715 +14.9% 39884 ? 2% meminfo.Mapped
1.13e+08 -27.3% 82127332 meminfo.MemAvailable
1.135e+08 -27.2% 82559332 meminfo.MemFree
18195700 +169.9% 49114550 meminfo.Memused
136537 +54.5% 210911 meminfo.SReclaimable
321913 +23.6% 397882 meminfo.Slab
18197896 +174.2% 49896449 meminfo.max_used_kB
4701 ? 3% +947.7% 49256 ? 3% numa-meminfo.node0.Active
1050 ? 19% +291.3% 4110 ? 12% numa-meminfo.node0.Active(anon)
3650 ? 2% +1136.7% 45144 ? 3% numa-meminfo.node0.Active(file)
24760 ? 6% -60.2% 9845 ? 5% numa-meminfo.node0.Dirty
42577 -21.2% 33544 ? 5% numa-meminfo.node0.Inactive(file)
94812 ? 4% +34.2% 127225 ? 3% numa-meminfo.node0.KReclaimable
55282473 -26.8% 40465119 ? 2% numa-meminfo.node0.MemFree
10376881 ? 2% +142.8% 25194235 ? 4% numa-meminfo.node0.MemUsed
94812 ? 4% +34.2% 127225 ? 3% numa-meminfo.node0.SReclaimable
200531 ? 6% +14.9% 230342 ? 3% numa-meminfo.node0.Slab
22485 ? 5% +174.5% 61715 ? 2% numa-meminfo.node1.Active
20743 ? 5% -28.5% 14830 ? 9% numa-meminfo.node1.Active(anon)
1742 ? 11% +2590.6% 46884 ? 2% numa-meminfo.node1.Active(file)
24249 ? 7% -58.5% 10067 ? 5% numa-meminfo.node1.Dirty
107983 ? 83% +169.2% 290729 ? 44% numa-meminfo.node1.FilePages
41677 -20.5% 33154 ? 4% numa-meminfo.node1.Inactive(file)
41632 ? 11% +101.0% 83678 ? 5% numa-meminfo.node1.KReclaimable
58224846 -27.7% 42093376 ? 2% numa-meminfo.node1.MemFree
7789680 ? 3% +207.1% 23921151 ? 4% numa-meminfo.node1.MemUsed
41632 ? 11% +101.0% 83678 ? 5% numa-meminfo.node1.SReclaimable
121289 ? 12% +38.1% 167535 ? 5% numa-meminfo.node1.Slab
261.67 ? 19% +292.5% 1027 ? 12% numa-vmstat.node0.nr_active_anon
896.67 ? 3% +1158.3% 11283 ? 3% numa-vmstat.node0.nr_active_file
5338051 +764.8% 46164014 ? 5% numa-vmstat.node0.nr_dirtied
6342 ? 4% -61.4% 2447 ? 5% numa-vmstat.node0.nr_dirty
13831096 -26.9% 10116544 ? 2% numa-vmstat.node0.nr_free_pages
10693 -21.6% 8388 ? 5% numa-vmstat.node0.nr_inactive_file
23669 ? 4% +34.4% 31802 ? 3% numa-vmstat.node0.nr_slab_reclaimable
5316491 +768.0% 46146250 ? 5% numa-vmstat.node0.nr_written
261.67 ? 19% +292.5% 1027 ? 12% numa-vmstat.node0.nr_zone_active_anon
896.67 ? 3% +1158.3% 11283 ? 3% numa-vmstat.node0.nr_zone_active_file
10693 -21.6% 8388 ? 5% numa-vmstat.node0.nr_zone_inactive_file
4791 ? 3% -42.2% 2767 ? 4% numa-vmstat.node0.nr_zone_write_pending
4298525 ? 2% +76.2% 7573682 ? 5% numa-vmstat.node0.numa_hit
4236933 ? 2% +78.0% 7543416 ? 5% numa-vmstat.node0.numa_local
5153 ? 5% -27.9% 3717 ? 9% numa-vmstat.node1.nr_active_anon
419.67 ? 9% +2692.2% 11717 ? 2% numa-vmstat.node1.nr_active_file
5437248 +738.0% 45566234 ? 4% numa-vmstat.node1.nr_dirtied
6258 ? 5% -60.0% 2504 ? 5% numa-vmstat.node1.nr_dirty
27035 ? 83% +169.0% 72736 ? 44% numa-vmstat.node1.nr_file_pages
14566424 -27.8% 10523648 ? 2% numa-vmstat.node1.nr_free_pages
10467 -20.7% 8296 ? 4% numa-vmstat.node1.nr_inactive_file
10367 ? 10% +101.8% 20916 ? 5% numa-vmstat.node1.nr_slab_reclaimable
127.17 ? 14% -41.8% 74.00 ? 4% numa-vmstat.node1.nr_writeback
5415604 +741.1% 45548050 ? 4% numa-vmstat.node1.nr_written
5153 ? 5% -27.9% 3717 ? 9% numa-vmstat.node1.nr_zone_active_anon
419.67 ? 9% +2692.2% 11717 ? 2% numa-vmstat.node1.nr_zone_active_file
10466 -20.7% 8296 ? 4% numa-vmstat.node1.nr_zone_inactive_file
4777 ? 2% -41.4% 2799 ? 4% numa-vmstat.node1.nr_zone_write_pending
4485674 ? 2% +77.1% 7944450 ? 4% numa-vmstat.node1.numa_hit
4466634 ? 2% +76.7% 7893966 ? 4% numa-vmstat.node1.numa_local
5448 ? 4% -13.0% 4738 ? 5% proc-vmstat.nr_active_anon
1348 ? 4% +1606.6% 23005 proc-vmstat.nr_active_file
66203 -4.6% 63179 proc-vmstat.nr_anon_pages
10787366 +750.4% 91731068 ? 2% proc-vmstat.nr_dirtied
12325 ? 6% -60.0% 4935 proc-vmstat.nr_dirty
2820205 -27.3% 2049524 proc-vmstat.nr_dirty_background_threshold
5647307 -27.3% 4104060 proc-vmstat.nr_dirty_threshold
615683 +2.9% 633600 proc-vmstat.nr_file_pages
28375247 -27.3% 20639835 proc-vmstat.nr_free_pages
68827 -2.4% 67160 proc-vmstat.nr_inactive_anon
21065 -20.8% 16687 proc-vmstat.nr_inactive_file
16879 -3.2% 16343 proc-vmstat.nr_kernel_stack
8960 +14.7% 10274 proc-vmstat.nr_mapped
2060 ? 2% -8.5% 1884 ? 2% proc-vmstat.nr_page_table_pages
34110 +54.6% 52723 proc-vmstat.nr_slab_reclaimable
10744616 +753.4% 91695141 ? 2% proc-vmstat.nr_written
5448 ? 4% -13.0% 4738 ? 5% proc-vmstat.nr_zone_active_anon
1348 ? 4% +1606.6% 23005 proc-vmstat.nr_zone_active_file
68827 -2.4% 67160 proc-vmstat.nr_zone_inactive_anon
21065 -20.8% 16687 proc-vmstat.nr_zone_inactive_file
9507 ? 2% -41.7% 5547 proc-vmstat.nr_zone_write_pending
2686 ? 8% +458.5% 15002 ? 13% proc-vmstat.numa_hint_faults
1577 ? 12% +576.9% 10675 ? 18% proc-vmstat.numa_hint_faults_local
8789825 +76.6% 15521979 proc-vmstat.numa_hit
8709192 +77.3% 15441219 proc-vmstat.numa_local
1427 ? 76% +378.2% 6827 ? 91% proc-vmstat.numa_pages_migrated
14137 ? 76% +843.8% 133427 ? 3% proc-vmstat.numa_pte_updates
15853 ? 2% +749.9% 134728 ? 6% proc-vmstat.pgactivate
8788729 +76.6% 15517350 proc-vmstat.pgalloc_normal
9855 -7.5% 9112 proc-vmstat.pgdeactivate
339793 +477.4% 1961972 proc-vmstat.pgfault
1716158 +74.9% 3001183 proc-vmstat.pgfree
1427 ? 76% +378.2% 6827 ? 91% proc-vmstat.pgmigrate_success
42991331 +753.2% 3.668e+08 ? 2% proc-vmstat.pgpgout
23685 ? 2% +565.0% 157495 ? 2% proc-vmstat.pgreuse
19703 -7.9% 18150 proc-vmstat.pgrotated
0.15 ? 11% -34.8% 0.10 ? 38% sched_debug.cfs_rq:/.h_nr_running.avg
0.36 ? 4% -25.7% 0.27 ? 14% sched_debug.cfs_rq:/.h_nr_running.stddev
58.54 ? 6% -67.5% 19.00 ? 12% sched_debug.cfs_rq:/.load_avg.avg
704.25 ? 20% -34.2% 463.10 ? 23% sched_debug.cfs_rq:/.load_avg.max
1.08 ? 41% -100.0% 0.00 sched_debug.cfs_rq:/.load_avg.min
161.35 ? 9% -56.8% 69.70 ? 19% sched_debug.cfs_rq:/.load_avg.stddev
58262 ? 7% +79.3% 104483 ? 4% sched_debug.cfs_rq:/.min_vruntime.avg
79623 ? 7% +60.8% 128069 ? 4% sched_debug.cfs_rq:/.min_vruntime.max
51615 ? 6% +74.6% 90131 ? 6% sched_debug.cfs_rq:/.min_vruntime.min
4743 ? 9% +34.2% 6365 ? 3% sched_debug.cfs_rq:/.min_vruntime.stddev
0.36 ? 4% -25.3% 0.27 ? 14% sched_debug.cfs_rq:/.nr_running.stddev
36.73 ? 12% -74.7% 9.28 ? 28% sched_debug.cfs_rq:/.removed.load_avg.avg
597.33 ? 31% -51.7% 288.58 ? 40% sched_debug.cfs_rq:/.removed.load_avg.max
138.37 ? 15% -66.9% 45.85 ? 29% sched_debug.cfs_rq:/.removed.load_avg.stddev
15.81 ? 16% -74.3% 4.07 ? 30% sched_debug.cfs_rq:/.removed.runnable_avg.avg
299.83 ? 28% -51.6% 145.05 ? 39% sched_debug.cfs_rq:/.removed.runnable_avg.max
61.29 ? 18% -65.3% 21.28 ? 32% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
15.81 ? 16% -74.3% 4.07 ? 30% sched_debug.cfs_rq:/.removed.util_avg.avg
299.83 ? 28% -51.6% 145.04 ? 39% sched_debug.cfs_rq:/.removed.util_avg.max
61.29 ? 18% -65.3% 21.28 ? 32% sched_debug.cfs_rq:/.removed.util_avg.stddev
239.69 -64.1% 85.99 ? 18% sched_debug.cfs_rq:/.runnable_avg.avg
848.08 ? 3% -24.1% 643.52 ? 2% sched_debug.cfs_rq:/.runnable_avg.max
204.55 ? 3% -38.0% 126.84 ? 10% sched_debug.cfs_rq:/.runnable_avg.stddev
-9422 -76.1% -2254 sched_debug.cfs_rq:/.spread0.avg
4744 ? 9% +34.2% 6365 ? 3% sched_debug.cfs_rq:/.spread0.stddev
238.51 -64.0% 85.79 ? 18% sched_debug.cfs_rq:/.util_avg.avg
847.83 ? 3% -24.1% 643.32 ? 2% sched_debug.cfs_rq:/.util_avg.max
203.68 ? 3% -37.7% 126.86 ? 10% sched_debug.cfs_rq:/.util_avg.stddev
22.61 ? 14% -72.3% 6.26 ? 26% sched_debug.cfs_rq:/.util_est_enqueued.avg
658.75 ? 15% -66.9% 217.89 ? 21% sched_debug.cfs_rq:/.util_est_enqueued.max
91.50 ? 8% -67.9% 29.41 ? 18% sched_debug.cfs_rq:/.util_est_enqueued.stddev
464279 -36.5% 294831 ? 6% sched_debug.cpu.avg_idle.avg
160264 ? 9% +43.1% 229376 ? 8% sched_debug.cpu.avg_idle.stddev
66086 +388.1% 322594 ? 5% sched_debug.cpu.clock.avg
66090 +388.1% 322598 ? 5% sched_debug.cpu.clock.max
66082 +388.2% 322590 ? 5% sched_debug.cpu.clock.min
65678 +386.0% 319206 ? 5% sched_debug.cpu.clock_task.avg
65744 +385.8% 319372 ? 5% sched_debug.cpu.clock_task.max
64132 +394.8% 317337 ? 5% sched_debug.cpu.clock_task.min
5848 +116.4% 12658 ? 3% sched_debug.cpu.curr->pid.max
0.35 ? 4% -23.2% 0.27 ? 14% sched_debug.cpu.nr_running.stddev
402088 -21.1% 317120 ? 4% sched_debug.cpu.nr_switches.avg
452370 -20.1% 361500 ? 5% sched_debug.cpu.nr_switches.max
385885 -23.4% 295599 ? 4% sched_debug.cpu.nr_switches.min
66083 +388.2% 322591 ? 5% sched_debug.cpu_clk
65514 +391.5% 322021 ? 5% sched_debug.ktime
69859 +367.0% 326211 ? 5% sched_debug.sched_clk
29.31 ? 22% -68.3% 9.30 perf-stat.i.MPKI
3.376e+09 -61.6% 1.296e+09 perf-stat.i.branch-instructions
2.24 ? 24% -1.6 0.61 ? 4% perf-stat.i.branch-miss-rate%
48353907 -84.2% 7650642 ? 5% perf-stat.i.branch-misses
17.35 ? 3% +19.2 36.51 perf-stat.i.cache-miss-rate%
77744012 -73.2% 20803265 perf-stat.i.cache-misses
4.044e+08 -85.9% 57087508 perf-stat.i.cache-references
1177570 -91.7% 98137 perf-stat.i.context-switches
2.99 ? 9% -18.8% 2.42 perf-stat.i.cpi
4.369e+10 -65.3% 1.515e+10 ? 2% perf-stat.i.cpu-cycles
29321 -92.3% 2264 perf-stat.i.cpu-migrations
0.14 ? 47% -0.1 0.05 ? 38% perf-stat.i.dTLB-load-miss-rate%
2821820 ? 9% -72.3% 780974 ? 35% perf-stat.i.dTLB-load-misses
4.782e+09 -64.8% 1.682e+09 perf-stat.i.dTLB-loads
0.03 ? 53% -0.0 0.01 ? 50% perf-stat.i.dTLB-store-miss-rate%
176511 ? 5% -71.2% 50858 ? 47% perf-stat.i.dTLB-store-misses
2.057e+09 -72.9% 5.583e+08 perf-stat.i.dTLB-stores
7980174 -80.3% 1569592 ? 3% perf-stat.i.iTLB-load-misses
15107872 -83.5% 2499377 perf-stat.i.iTLB-loads
1.692e+10 -63.2% 6.228e+09 perf-stat.i.instructions
2182 ? 2% +84.1% 4018 ? 3% perf-stat.i.instructions-per-iTLB-miss
0.39 ? 3% +8.4% 0.42 perf-stat.i.ipc
1.92 ? 15% -74.1% 0.50 ? 27% perf-stat.i.major-faults
0.50 -65.3% 0.17 ? 2% perf-stat.i.metric.GHz
880.64 ? 8% -10.5% 788.06 perf-stat.i.metric.K/sec
120.55 -66.7% 40.20 perf-stat.i.metric.M/sec
3755 -19.2% 3035 perf-stat.i.minor-faults
88.50 -11.0 77.51 perf-stat.i.node-load-miss-rate%
33873709 -84.8% 5145213 perf-stat.i.node-load-misses
3093268 -52.0% 1485344 perf-stat.i.node-loads
87.63 -23.4 64.20 perf-stat.i.node-store-miss-rate%
13147468 -79.4% 2707310 perf-stat.i.node-store-misses
1093418 +33.7% 1461825 perf-stat.i.node-stores
3757 -19.2% 3035 perf-stat.i.page-faults
23.90 -61.7% 9.17 perf-stat.overall.MPKI
1.43 ? 2% -0.8 0.59 ? 4% perf-stat.overall.branch-miss-rate%
19.22 +17.2 36.44 perf-stat.overall.cache-miss-rate%
2.58 -5.8% 2.43 perf-stat.overall.cpi
562.12 +29.5% 728.23 ? 2% perf-stat.overall.cycles-between-cache-misses
34.56 +4.0 38.56 ? 2% perf-stat.overall.iTLB-load-miss-rate%
2120 +87.4% 3973 ? 3% perf-stat.overall.instructions-per-iTLB-miss
0.39 +6.2% 0.41 perf-stat.overall.ipc
91.63 -14.0 77.60 perf-stat.overall.node-load-miss-rate%
92.32 -27.4 64.94 perf-stat.overall.node-store-miss-rate%
3.317e+09 -61.0% 1.294e+09 perf-stat.ps.branch-instructions
47523233 -83.9% 7638089 ? 5% perf-stat.ps.branch-misses
76393517 -72.8% 20767984 perf-stat.ps.cache-misses
3.974e+08 -85.7% 56991079 perf-stat.ps.cache-references
1157099 -91.5% 97971 perf-stat.ps.context-switches
86558 +1.5% 87853 perf-stat.ps.cpu-clock
4.294e+10 -64.8% 1.512e+10 ? 2% perf-stat.ps.cpu-cycles
28812 -92.2% 2260 perf-stat.ps.cpu-migrations
2773615 ? 9% -71.9% 779650 ? 35% perf-stat.ps.dTLB-load-misses
4.699e+09 -64.3% 1.679e+09 perf-stat.ps.dTLB-loads
173584 ? 5% -70.8% 50772 ? 47% perf-stat.ps.dTLB-store-misses
2.021e+09 -72.4% 5.574e+08 perf-stat.ps.dTLB-stores
7842066 -80.0% 1566971 ? 3% perf-stat.ps.iTLB-load-misses
14845806 -83.2% 2495179 perf-stat.ps.iTLB-loads
1.663e+10 -62.6% 6.218e+09 perf-stat.ps.instructions
1.89 ? 15% -73.7% 0.50 ? 27% perf-stat.ps.major-faults
3691 -17.9% 3030 perf-stat.ps.minor-faults
33284894 -84.6% 5136558 perf-stat.ps.node-load-misses
3039497 -51.2% 1482855 perf-stat.ps.node-loads
12918927 -79.1% 2702672 perf-stat.ps.node-store-misses
1074419 +35.8% 1459333 perf-stat.ps.node-stores
3692 -17.9% 3030 perf-stat.ps.page-faults
86558 +1.5% 87853 perf-stat.ps.task-clock
1.011e+12 +269.7% 3.736e+12 ? 2% perf-stat.total.instructions
44.20 -10.2 34.01 ? 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
43.08 -9.9 33.20 ? 9% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
50.02 -9.2 40.80 ? 7% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
49.50 -9.2 40.34 ? 7% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
49.47 -9.1 40.33 ? 7% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
45.62 -7.8 37.78 ? 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
45.42 -7.5 37.94 ? 7% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
46.53 -7.2 39.32 ? 7% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
5.82 -5.4 0.38 ? 71% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule
9.86 ? 4% -5.3 4.61 ? 29% perf-profile.calltrace.cycles-pp.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages
5.19 ? 5% -4.9 0.28 ?100% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
7.45 ? 6% -3.4 4.01 ? 31% perf-profile.calltrace.cycles-pp.f2fs_outplace_write_data.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages
4.32 ? 4% -3.4 0.92 ? 16% perf-profile.calltrace.cycles-pp.set_node_addr.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
4.07 ? 4% -3.2 0.85 ? 15% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.set_node_addr.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file
7.08 ? 6% -3.2 3.89 ? 32% perf-profile.calltrace.cycles-pp.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages
6.12 ? 8% -3.2 2.94 ? 36% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_submit_page_write.do_write_page
6.70 ? 7% -3.0 3.67 ? 24% perf-profile.calltrace.cycles-pp.f2fs_do_write_node_page.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
6.69 ? 7% -3.0 3.67 ? 24% perf-profile.calltrace.cycles-pp.do_write_page.f2fs_do_write_node_page.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file
5.08 ? 9% -3.0 2.08 ? 45% perf-profile.calltrace.cycles-pp.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.f2fs_write_single_data_page
4.78 ? 9% -2.8 1.98 ? 47% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page
4.67 ? 9% -2.7 1.92 ? 48% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data
2.89 ? 2% -1.8 1.10 ? 24% perf-profile.calltrace.cycles-pp.__submit_merged_write_cond.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages.filemap_fdatawrite_wbc
3.22 ? 7% -1.7 1.54 ? 30% perf-profile.calltrace.cycles-pp.f2fs_allocate_data_block.do_write_page.f2fs_do_write_node_page.__write_node_page.f2fs_fsync_node_pages
2.38 ? 2% -1.5 0.83 ? 21% perf-profile.calltrace.cycles-pp.f2fs_issue_flush.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
2.00 ? 2% -1.4 0.61 ? 47% perf-profile.calltrace.cycles-pp.__wait_for_common.f2fs_issue_flush.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
1.66 ? 2% -1.4 0.28 ?100% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.set_node_addr.__write_node_page.f2fs_fsync_node_pages
1.94 ? 2% -1.4 0.58 ? 47% perf-profile.calltrace.cycles-pp.schedule_timeout.__wait_for_common.f2fs_issue_flush.f2fs_do_sync_file.f2fs_file_write_iter
1.93 ? 2% -1.3 0.58 ? 47% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__wait_for_common.f2fs_issue_flush.f2fs_do_sync_file
1.93 ? 2% -1.3 0.58 ? 47% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__wait_for_common.f2fs_issue_flush
3.20 ? 7% -1.3 1.89 ? 20% perf-profile.calltrace.cycles-pp.f2fs_submit_page_write.do_write_page.f2fs_do_write_node_page.__write_node_page.f2fs_fsync_node_pages
2.18 -1.3 0.90 ? 21% perf-profile.calltrace.cycles-pp.__submit_merged_write_cond.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
1.68 ? 2% -1.3 0.42 ? 71% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.schedule_timeout.__wait_for_common
1.68 ? 2% -1.3 0.42 ? 71% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.schedule_timeout
2.36 ? 9% -1.2 1.19 ? 31% perf-profile.calltrace.cycles-pp.__mutex_lock.f2fs_allocate_data_block.do_write_page.f2fs_do_write_node_page.__write_node_page
2.69 ? 8% -1.2 1.51 ? 21% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_submit_page_write.do_write_page.f2fs_do_write_node_page
2.90 ? 8% -1.1 1.77 ? 20% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_submit_page_write.do_write_page.f2fs_do_write_node_page.__write_node_page
1.32 ? 2% -0.9 0.39 ? 71% perf-profile.calltrace.cycles-pp.ret_from_fork
1.32 ? 2% -0.9 0.39 ? 71% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.84 ? 11% -0.9 0.97 ? 32% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.f2fs_allocate_data_block.do_write_page.f2fs_do_write_node_page
1.32 ? 14% -0.7 0.63 ? 48% perf-profile.calltrace.cycles-pp.f2fs_space_for_roll_forward.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
1.27 ? 14% -0.7 0.61 ? 48% perf-profile.calltrace.cycles-pp.__percpu_counter_sum.f2fs_space_for_roll_forward.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.42 ? 44% +0.6 0.97 ? 11% perf-profile.calltrace.cycles-pp.brd_submit_bio.__submit_bio.__submit_bio_noacct.__submit_merged_bio.__submit_merged_write_cond
0.00 +0.6 0.61 ? 16% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle.cpu_startup_entry
0.00 +0.6 0.63 ? 11% perf-profile.calltrace.cycles-pp.is_alive.gc_data_segment.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page
0.00 +0.7 0.70 ? 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +0.8 0.83 ? 18% perf-profile.calltrace.cycles-pp.f2fs_wait_on_page_writeback.f2fs_wait_on_node_pages_writeback.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.00 +0.9 0.90 ? 17% perf-profile.calltrace.cycles-pp.f2fs_wait_on_node_pages_writeback.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
0.00 +1.0 0.96 ? 11% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +1.1 1.08 ? 24% perf-profile.calltrace.cycles-pp.brd_insert_page.brd_do_bvec.brd_submit_bio.__submit_bio.__submit_bio_noacct
0.00 +1.1 1.10 ? 7% perf-profile.calltrace.cycles-pp.brd_submit_bio.__submit_bio.__submit_bio_noacct.__submit_merged_bio.f2fs_submit_page_write
0.00 +1.2 1.19 ? 9% perf-profile.calltrace.cycles-pp.__submit_bio_noacct.__submit_merged_bio.__submit_merged_write_cond.do_garbage_collect.f2fs_gc
0.00 +1.2 1.19 ? 9% perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.__submit_merged_bio.__submit_merged_write_cond.do_garbage_collect
0.00 +1.2 1.19 ? 9% perf-profile.calltrace.cycles-pp.__submit_merged_bio.__submit_merged_write_cond.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page
0.00 +1.2 1.19 ? 9% perf-profile.calltrace.cycles-pp.__submit_merged_write_cond.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page.f2fs_write_cache_pages
0.59 ? 10% +1.3 1.84 ? 12% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.60 ? 10% +1.3 1.87 ? 12% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +1.3 1.34 ? 8% perf-profile.calltrace.cycles-pp.__submit_bio_noacct.__submit_merged_bio.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data
0.00 +1.3 1.34 ? 8% perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.__submit_merged_bio.f2fs_submit_page_write.do_write_page
0.00 +1.3 1.34 ? 8% perf-profile.calltrace.cycles-pp.__submit_merged_bio.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page
0.00 +1.5 1.45 ? 8% perf-profile.calltrace.cycles-pp.f2fs_submit_page_write.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.move_data_page
0.42 ? 44% +1.6 2.06 ? 8% perf-profile.calltrace.cycles-pp.brd_do_bvec.brd_submit_bio.__submit_bio.__submit_bio_noacct.__submit_merged_bio
0.00 +1.8 1.85 ? 8% perf-profile.calltrace.cycles-pp.do_write_page.f2fs_outplace_write_data.f2fs_do_write_data_page.move_data_page.gc_data_segment
0.00 +1.9 1.87 ? 8% perf-profile.calltrace.cycles-pp.f2fs_outplace_write_data.f2fs_do_write_data_page.move_data_page.gc_data_segment.do_garbage_collect
0.87 ? 8% +2.0 2.83 ? 11% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.94 ? 9% +2.3 3.21 ? 10% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.00 +2.3 2.32 ? 8% perf-profile.calltrace.cycles-pp.f2fs_do_write_data_page.move_data_page.gc_data_segment.do_garbage_collect.f2fs_gc
0.00 +2.4 2.42 ? 11% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_write_single_data_page
0.00 +2.5 2.54 ? 5% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.__write_node_page
0.00 +3.1 3.12 ? 7% perf-profile.calltrace.cycles-pp.move_data_page.gc_data_segment.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page
0.00 +4.8 4.83 ? 8% perf-profile.calltrace.cycles-pp.gc_data_segment.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page.f2fs_write_cache_pages
0.00 +6.1 6.07 ? 8% perf-profile.calltrace.cycles-pp.do_garbage_collect.f2fs_gc.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages
0.00 +6.1 6.14 ? 8% perf-profile.calltrace.cycles-pp.f2fs_gc.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages
23.00 +7.4 30.39 ? 6% perf-profile.calltrace.cycles-pp.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
0.00 +8.5 8.54 ? 31% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_write_single_data_page
13.73 ? 3% +9.6 23.30 ? 11% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
13.68 ? 3% +9.6 23.28 ? 11% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
13.67 ? 3% +9.6 23.28 ? 11% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter
13.65 ? 3% +9.6 23.28 ? 11% perf-profile.calltrace.cycles-pp.__f2fs_write_data_pages.do_writepages.filemap_fdatawrite_wbc.file_write_and_wait_range.f2fs_do_sync_file
13.52 ? 3% +9.7 23.22 ? 11% perf-profile.calltrace.cycles-pp.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages.filemap_fdatawrite_wbc.file_write_and_wait_range
47.98 +9.9 57.90 ? 4% perf-profile.calltrace.cycles-pp.write
47.72 +10.1 57.79 ? 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
47.71 +10.1 57.79 ? 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
47.64 +10.1 57.77 ? 4% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
47.62 +10.1 57.77 ? 4% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
47.52 +10.2 57.73 ? 4% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.50 +10.2 57.71 ? 4% perf-profile.calltrace.cycles-pp.f2fs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
0.00 +11.0 10.97 ? 23% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_write_single_data_page.f2fs_write_cache_pages
0.00 +11.1 11.06 ? 23% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages
46.02 +11.1 57.12 ? 4% perf-profile.calltrace.cycles-pp.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write.ksys_write
0.00 +11.1 11.14 ? 23% perf-profile.calltrace.cycles-pp.f2fs_balance_fs.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages
10.08 ? 4% +11.8 21.92 ? 11% perf-profile.calltrace.cycles-pp.f2fs_write_single_data_page.f2fs_write_cache_pages.__f2fs_write_data_pages.do_writepages.filemap_fdatawrite_wbc
15.50 +13.0 28.51 ? 7% perf-profile.calltrace.cycles-pp.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.00 +19.9 19.86 ? 17% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.__write_node_page
0.00 +22.4 22.41 ? 14% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.__write_node_page.f2fs_fsync_node_pages
0.00 +22.5 22.45 ? 14% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_balance_fs.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file
0.00 +22.5 22.49 ? 14% perf-profile.calltrace.cycles-pp.f2fs_balance_fs.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
44.46 -10.2 34.29 ? 9% perf-profile.children.cycles-pp.mwait_idle_with_hints
44.49 -10.2 34.32 ? 9% perf-profile.children.cycles-pp.intel_idle
50.02 -9.2 40.80 ? 7% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
50.02 -9.2 40.80 ? 7% perf-profile.children.cycles-pp.cpu_startup_entry
50.00 -9.2 40.79 ? 7% perf-profile.children.cycles-pp.do_idle
10.27 ? 2% -7.8 2.46 ? 16% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
45.88 -7.5 38.36 ? 7% perf-profile.children.cycles-pp.cpuidle_enter_state
45.89 -7.5 38.37 ? 7% perf-profile.children.cycles-pp.cpuidle_enter
10.76 -7.5 3.25 ? 17% perf-profile.children.cycles-pp.__schedule
47.02 -7.2 39.77 ? 7% perf-profile.children.cycles-pp.cpuidle_idle_call
9.40 -6.6 2.81 ? 17% perf-profile.children.cycles-pp.schedule
7.86 -5.5 2.33 ? 17% perf-profile.children.cycles-pp.pick_next_task_fair
7.38 -5.2 2.20 ? 17% perf-profile.children.cycles-pp.newidle_balance
6.76 -4.6 2.13 ? 16% perf-profile.children.cycles-pp.load_balance
6.40 -4.4 1.97 ? 17% perf-profile.children.cycles-pp.find_busiest_group
6.32 -4.4 1.94 ? 17% perf-profile.children.cycles-pp.update_sd_lb_stats
6.06 -4.2 1.82 ? 17% perf-profile.children.cycles-pp.update_sg_lb_stats
13.78 ? 5% -4.2 9.60 ? 22% perf-profile.children.cycles-pp.do_write_page
5.49 ? 3% -3.9 1.63 ? 18% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
4.64 ? 7% -3.6 1.06 ? 15% perf-profile.children.cycles-pp._raw_spin_lock_irq
4.34 ? 4% -3.4 0.93 ? 17% perf-profile.children.cycles-pp.set_node_addr
6.70 ? 7% -3.0 3.69 ? 24% perf-profile.children.cycles-pp.f2fs_do_write_node_page
9.86 ? 4% -2.7 7.15 ? 21% perf-profile.children.cycles-pp.f2fs_do_write_data_page
8.28 ? 7% -2.7 5.59 ? 23% perf-profile.children.cycles-pp.f2fs_submit_page_write
3.60 ? 4% -2.7 0.92 ? 13% perf-profile.children.cycles-pp.f2fs_get_node_info
2.52 -2.2 0.37 ? 19% perf-profile.children.cycles-pp.f2fs_need_inode_block_update
2.37 ? 3% -1.9 0.45 ? 17% perf-profile.children.cycles-pp.f2fs_is_checkpointed_node
2.64 -1.9 0.74 ? 18% perf-profile.children.cycles-pp.rwsem_wake
2.19 ? 2% -1.8 0.39 ? 17% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
2.19 ? 2% -1.8 0.39 ? 17% perf-profile.children.cycles-pp.find_get_pages_range_tag
2.46 -1.7 0.72 ? 19% perf-profile.children.cycles-pp.try_to_wake_up
2.10 -1.6 0.54 ? 17% perf-profile.children.cycles-pp.wake_up_q
2.38 ? 2% -1.5 0.83 ? 21% perf-profile.children.cycles-pp.f2fs_issue_flush
5.03 ? 5% -1.4 3.64 ? 25% perf-profile.children.cycles-pp.f2fs_allocate_data_block
2.10 ? 2% -1.4 0.72 ? 20% perf-profile.children.cycles-pp.__wait_for_common
1.95 ? 2% -1.3 0.67 ? 21% perf-profile.children.cycles-pp.schedule_timeout
1.36 ? 2% -0.9 0.44 ? 18% perf-profile.children.cycles-pp.schedule_idle
3.38 ? 8% -0.9 2.48 ? 27% perf-profile.children.cycles-pp.__mutex_lock
1.93 ? 8% -0.9 1.06 ? 21% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.17 ? 2% -0.8 0.33 ? 20% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
1.20 -0.8 0.36 ? 23% perf-profile.children.cycles-pp.ttwu_do_activate
1.16 -0.8 0.34 ? 23% perf-profile.children.cycles-pp.enqueue_task_fair
1.32 ? 2% -0.8 0.53 ? 19% perf-profile.children.cycles-pp.ret_from_fork
1.32 ? 2% -0.8 0.53 ? 19% perf-profile.children.cycles-pp.kthread
1.23 -0.8 0.48 ? 26% perf-profile.children.cycles-pp.f2fs_buffered_write_iter
1.21 -0.7 0.46 ? 27% perf-profile.children.cycles-pp.generic_perform_write
0.96 -0.7 0.30 ? 20% perf-profile.children.cycles-pp.dequeue_task_fair
0.88 ? 3% -0.6 0.25 ? 21% perf-profile.children.cycles-pp.sched_ttwu_pending
1.32 ? 14% -0.6 0.70 ? 25% perf-profile.children.cycles-pp.f2fs_space_for_roll_forward
0.92 -0.6 0.29 ? 15% perf-profile.children.cycles-pp.idle_cpu
0.86 ? 2% -0.6 0.25 ? 22% perf-profile.children.cycles-pp.enqueue_entity
1.28 ? 14% -0.6 0.68 ? 25% perf-profile.children.cycles-pp.__percpu_counter_sum
0.81 -0.6 0.25 ? 19% perf-profile.children.cycles-pp.dequeue_entity
0.70 ? 2% -0.6 0.14 ? 23% perf-profile.children.cycles-pp.__pagevec_release
0.69 ? 2% -0.6 0.14 ? 22% perf-profile.children.cycles-pp.release_pages
0.85 -0.5 0.31 ? 18% perf-profile.children.cycles-pp.update_load_avg
0.81 ? 2% -0.5 0.29 ? 21% perf-profile.children.cycles-pp.issue_flush_thread
0.69 -0.4 0.26 ? 26% perf-profile.children.cycles-pp.f2fs_write_begin
0.60 ? 2% -0.4 0.21 ? 19% perf-profile.children.cycles-pp.cpumask_next_and
0.51 ? 3% -0.4 0.14 ? 20% perf-profile.children.cycles-pp.rwsem_mark_wake
0.47 ? 2% -0.3 0.12 ? 28% perf-profile.children.cycles-pp.xas_find_marked
0.71 ? 2% -0.3 0.38 ? 26% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.49 -0.3 0.19 ? 18% perf-profile.children.cycles-pp._find_next_bit
0.43 ? 4% -0.3 0.14 ? 24% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.50 ? 3% -0.3 0.22 ? 24% perf-profile.children.cycles-pp.worker_thread
0.46 ? 3% -0.3 0.17 ? 30% perf-profile.children.cycles-pp.prepare_write_begin
0.41 ? 7% -0.3 0.14 ? 23% perf-profile.children.cycles-pp.f2fs_dirty_node_folio
0.47 ? 3% -0.3 0.20 ? 16% perf-profile.children.cycles-pp.update_rq_clock
0.40 ? 3% -0.3 0.15 ? 26% perf-profile.children.cycles-pp.__submit_flush_wait
0.40 ? 3% -0.3 0.15 ? 24% perf-profile.children.cycles-pp.blkdev_issue_flush
0.41 ? 3% -0.2 0.16 ? 13% perf-profile.children.cycles-pp.finish_task_switch
0.39 ? 3% -0.2 0.15 ? 28% perf-profile.children.cycles-pp.f2fs_submit_merged_ipu_write
0.39 ? 3% -0.2 0.15 ? 24% perf-profile.children.cycles-pp.submit_bio_wait
0.34 ? 2% -0.2 0.10 ? 20% perf-profile.children.cycles-pp.update_curr
0.31 ? 2% -0.2 0.08 ? 20% perf-profile.children.cycles-pp.set_next_entity
0.35 ? 7% -0.2 0.14 ? 21% perf-profile.children.cycles-pp.f2fs_update_data_blkaddr
0.44 ? 3% -0.2 0.22 ? 21% perf-profile.children.cycles-pp.md_submit_bio
0.29 ? 3% -0.2 0.08 ? 28% perf-profile.children.cycles-pp.f2fs_write_inode
0.30 ? 5% -0.2 0.09 ? 19% perf-profile.children.cycles-pp.select_task_rq
0.34 ? 3% -0.2 0.14 ? 32% perf-profile.children.cycles-pp.f2fs_write_end
0.37 ? 4% -0.2 0.16 ? 23% perf-profile.children.cycles-pp.md_handle_request
0.32 ? 4% -0.2 0.12 ? 20% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.29 ? 3% -0.2 0.09 ? 16% perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
0.26 ? 2% -0.2 0.06 ? 49% perf-profile.children.cycles-pp.f2fs_update_inode_page
0.32 ? 2% -0.2 0.13 ? 19% perf-profile.children.cycles-pp.__list_del_entry_valid
0.32 ? 3% -0.2 0.13 ? 22% perf-profile.children.cycles-pp.complete
0.74 ? 5% -0.2 0.56 ? 13% perf-profile.children.cycles-pp.down_read
0.33 ? 4% -0.2 0.15 ? 25% perf-profile.children.cycles-pp.raid0_make_request
0.26 ? 5% -0.2 0.08 ? 19% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.24 ? 2% -0.2 0.06 ? 52% perf-profile.children.cycles-pp.__switch_to_asm
0.24 ? 4% -0.2 0.06 ? 21% perf-profile.children.cycles-pp.prepare_task_switch
0.35 -0.2 0.18 ? 19% perf-profile.children.cycles-pp.f2fs_is_valid_blkaddr
0.32 ? 4% -0.2 0.16 ? 25% perf-profile.children.cycles-pp.process_one_work
0.24 ? 3% -0.2 0.08 ? 20% perf-profile.children.cycles-pp.select_task_rq_fair
0.24 ? 3% -0.2 0.08 ? 28% perf-profile.children.cycles-pp.__list_add_valid
0.24 ? 4% -0.2 0.08 ? 28% perf-profile.children.cycles-pp.submit_flushes
0.32 ? 3% -0.2 0.16 ? 22% perf-profile.children.cycles-pp.down_write
0.23 ? 7% -0.2 0.07 ? 18% perf-profile.children.cycles-pp.__switch_to
0.21 ? 4% -0.1 0.06 ? 46% perf-profile.children.cycles-pp.llist_add_batch
0.20 ? 3% -0.1 0.06 ? 47% perf-profile.children.cycles-pp.__smp_call_single_queue
0.23 ? 4% -0.1 0.09 ? 29% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.16 ? 4% -0.1 0.03 ?100% perf-profile.children.cycles-pp.wake_q_add
0.22 ? 5% -0.1 0.09 ? 35% perf-profile.children.cycles-pp.md_flush_request
0.18 ? 7% -0.1 0.05 ? 51% perf-profile.children.cycles-pp.llist_reverse_order
0.20 ? 3% -0.1 0.08 ? 6% perf-profile.children.cycles-pp.raw_spin_rq_lock_nested
0.20 ? 5% -0.1 0.08 ? 17% perf-profile.children.cycles-pp.__update_load_avg_se
0.47 ? 4% -0.1 0.35 ? 16% perf-profile.children.cycles-pp.filemap_dirty_folio
0.40 ? 4% -0.1 0.28 ? 16% perf-profile.children.cycles-pp.__lookup_nat_cache
0.43 ? 5% -0.1 0.32 ? 20% perf-profile.children.cycles-pp.__folio_start_writeback
0.15 ? 5% -0.1 0.03 ? 70% perf-profile.children.cycles-pp.wake_affine
0.28 ? 3% -0.1 0.16 ? 17% perf-profile.children.cycles-pp.update_sit_entry
0.19 ? 5% -0.1 0.07 ? 33% perf-profile.children.cycles-pp.sysvec_call_function_single
0.16 ? 5% -0.1 0.06 ? 62% perf-profile.children.cycles-pp.queue_work_on
0.16 ? 5% -0.1 0.06 ? 53% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.14 ? 5% -0.1 0.04 ? 71% perf-profile.children.cycles-pp.__has_merged_page
0.28 ? 6% -0.1 0.18 ? 8% perf-profile.children.cycles-pp.__radix_tree_lookup
0.16 ? 4% -0.1 0.06 ? 51% perf-profile.children.cycles-pp.kmem_cache_free
0.15 ? 4% -0.1 0.06 ? 52% perf-profile.children.cycles-pp.bio_alloc_bioset
0.15 ? 7% -0.1 0.06 ? 59% perf-profile.children.cycles-pp.__queue_work
0.13 ? 5% -0.1 0.03 ?100% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.14 ? 3% -0.1 0.05 ? 45% perf-profile.children.cycles-pp.find_busiest_queue
0.18 ? 5% -0.1 0.08 ? 22% perf-profile.children.cycles-pp.update_cfs_group
0.22 ? 10% -0.1 0.12 ? 31% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.21 ? 10% -0.1 0.12 ? 31% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.12 ? 5% -0.1 0.03 ?100% perf-profile.children.cycles-pp.osq_unlock
0.11 ? 3% -0.1 0.02 ? 99% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.11 ? 3% -0.1 0.02 ? 99% perf-profile.children.cycles-pp.copyin
0.10 ? 6% -0.1 0.02 ? 99% perf-profile.children.cycles-pp.nr_iowait_cpu
0.28 ? 7% -0.1 0.22 ? 17% perf-profile.children.cycles-pp.__folio_mark_dirty
0.12 ? 3% -0.1 0.07 ? 17% perf-profile.children.cycles-pp.__mark_inode_dirty
0.20 ? 3% -0.1 0.15 ? 14% perf-profile.children.cycles-pp.mutex_lock
0.09 ? 7% -0.1 0.04 ? 71% perf-profile.children.cycles-pp.kmem_cache_alloc
0.09 ? 7% -0.0 0.05 ? 47% perf-profile.children.cycles-pp.sb_mark_inode_writeback
0.08 ? 8% -0.0 0.04 ? 75% perf-profile.children.cycles-pp.xas_set_mark
0.17 ? 3% -0.0 0.13 ? 18% perf-profile.children.cycles-pp.locate_dirty_segment
0.10 ? 4% -0.0 0.07 ? 10% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.08 ? 7% +0.0 0.12 ? 19% perf-profile.children.cycles-pp.f2fs_update_dirty_folio
0.06 ? 7% +0.0 0.11 ? 11% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.06 ? 9% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.1 0.06 ? 9% perf-profile.children.cycles-pp.hrtimer_update_next_event
0.11 ? 8% +0.1 0.17 ? 18% perf-profile.children.cycles-pp.clear_page_erms
0.10 ? 9% +0.1 0.16 ? 8% perf-profile.children.cycles-pp.read_tsc
0.00 +0.1 0.06 ? 26% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.1 0.07 ? 17% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.10 ? 10% +0.1 0.16 ? 15% perf-profile.children.cycles-pp.__is_cp_guaranteed
0.00 +0.1 0.07 ? 28% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.11 ? 9% +0.1 0.18 ? 5% perf-profile.children.cycles-pp.read_node_page
0.06 ? 6% +0.1 0.13 ? 28% perf-profile.children.cycles-pp.folio_unlock
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.20 ? 5% +0.1 0.28 ? 13% perf-profile.children.cycles-pp.get_page_from_freelist
0.06 ? 13% +0.1 0.13 ? 10% perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.1 0.08 ? 24% perf-profile.children.cycles-pp.__radix_tree_preload
0.00 +0.1 0.08 ? 10% perf-profile.children.cycles-pp.folio_mark_accessed
0.00 +0.1 0.08 ? 13% perf-profile.children.cycles-pp.task_tick_fair
0.05 ? 44% +0.1 0.13 ? 28% perf-profile.children.cycles-pp.tick_sched_do_timer
0.00 +0.1 0.08 ? 14% perf-profile.children.cycles-pp.__xa_clear_mark
0.00 +0.1 0.09 ? 18% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.09 ? 12% perf-profile.children.cycles-pp.f2fs_iget
0.00 +0.1 0.09 ? 12% perf-profile.children.cycles-pp.iget_locked
0.04 ? 71% +0.1 0.12 ? 12% perf-profile.children.cycles-pp.tick_irq_enter
0.00 +0.1 0.09 ? 16% perf-profile.children.cycles-pp.f2fs_write_checkpoint
0.04 ? 44% +0.1 0.13 ? 11% perf-profile.children.cycles-pp.irqtime_account_irq
0.22 ? 5% +0.1 0.31 ? 13% perf-profile.children.cycles-pp.__alloc_pages
0.00 +0.1 0.10 ? 14% perf-profile.children.cycles-pp.io_serial_in
0.12 ? 7% +0.1 0.23 ? 11% perf-profile.children.cycles-pp.__might_resched
0.05 ? 45% +0.1 0.17 ? 23% perf-profile.children.cycles-pp.calc_global_load_tick
0.04 ? 45% +0.1 0.17 ? 17% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.09 ? 7% +0.1 0.22 ? 15% perf-profile.children.cycles-pp.rebalance_domains
0.03 ? 70% +0.1 0.16 ? 15% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.1 0.13 ? 14% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.24 ? 3% +0.1 0.38 ? 14% perf-profile.children.cycles-pp.f2fs_dirty_data_folio
0.00 +0.1 0.14 ? 26% perf-profile.children.cycles-pp.f2fs_ra_meta_pages
0.00 +0.1 0.14 ? 13% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +0.1 0.14 ? 13% perf-profile.children.cycles-pp.wait_for_xmitr
0.01 ?223% +0.1 0.15 ? 13% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.1 0.15 ? 10% perf-profile.children.cycles-pp.uart_console_write
0.08 ? 8% +0.1 0.23 ? 16% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.15 ? 9% perf-profile.children.cycles-pp.serial8250_console_write
0.17 ? 18% +0.2 0.32 ? 12% perf-profile.children.cycles-pp.xas_load
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.sysvec_irq_work
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.__sysvec_irq_work
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.irq_work_run
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.irq_work_single
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp._printk
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.vprintk_emit
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.console_unlock
0.00 +0.2 0.15 ? 12% perf-profile.children.cycles-pp.call_console_drivers
0.06 ? 7% +0.2 0.23 ? 16% perf-profile.children.cycles-pp.lapic_next_deadline
0.08 ? 8% +0.2 0.24 ? 25% perf-profile.children.cycles-pp.f2fs_put_page
0.06 ? 11% +0.2 0.25 ? 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.04 ? 75% +0.2 0.28 ? 20% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.25 ? 26% perf-profile.children.cycles-pp.f2fs_create
0.00 +0.3 0.25 ? 25% perf-profile.children.cycles-pp.open_last_lookups
0.00 +0.3 0.25 ? 25% perf-profile.children.cycles-pp.lookup_open
0.00 +0.3 0.25 ? 24% perf-profile.children.cycles-pp.open64
0.20 ? 4% +0.3 0.47 ? 16% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.3 0.28 ? 23% perf-profile.children.cycles-pp.do_filp_open
0.00 +0.3 0.28 ? 23% perf-profile.children.cycles-pp.path_openat
0.00 +0.3 0.28 ? 22% perf-profile.children.cycles-pp.__x64_sys_openat
0.00 +0.3 0.28 ? 22% perf-profile.children.cycles-pp.do_sys_openat2
0.33 ? 3% +0.3 0.61 ? 16% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.21 ? 4% +0.3 0.50 ? 14% perf-profile.children.cycles-pp.__softirqentry_text_start
0.21 ? 6% +0.3 0.50 ? 20% perf-profile.children.cycles-pp.tick_nohz_next_event
0.12 ? 6% +0.3 0.43 ? 9% perf-profile.children.cycles-pp.f2fs_get_dnode_of_data
0.26 ? 4% +0.3 0.61 ? 13% perf-profile.children.cycles-pp.__irq_exit_rcu
0.31 ? 4% +0.4 0.68 ? 16% perf-profile.children.cycles-pp.update_process_times
0.00 +0.4 0.38 ? 11% perf-profile.children.cycles-pp.f2fs_get_lock_data_page
0.31 ? 5% +0.4 0.70 ? 16% perf-profile.children.cycles-pp.tick_sched_handle
0.36 ? 10% +0.4 0.76 ? 20% perf-profile.children.cycles-pp.clockevents_program_event
0.40 +0.4 0.81 ? 9% perf-profile.children.cycles-pp.__get_node_page
0.48 ? 6% +0.5 0.95 ? 17% perf-profile.children.cycles-pp.ktime_get
0.39 ? 4% +0.5 0.91 ? 15% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.7 0.70 ? 11% perf-profile.children.cycles-pp.is_alive
0.50 ? 2% +0.7 1.22 ? 14% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.37 ? 5% +0.7 1.10 ? 9% perf-profile.children.cycles-pp.__filemap_get_folio
0.38 ? 5% +0.8 1.15 ? 10% perf-profile.children.cycles-pp.pagecache_get_page
0.57 ? 3% +0.8 1.35 ? 11% perf-profile.children.cycles-pp.brd_insert_page
0.38 +0.8 1.17 ? 10% perf-profile.children.cycles-pp.copy_to_brd
0.10 ? 6% +0.8 0.90 ? 17% perf-profile.children.cycles-pp.f2fs_wait_on_node_pages_writeback
0.00 +0.8 0.85 ? 18% perf-profile.children.cycles-pp.f2fs_wait_on_page_writeback
0.00 +0.9 0.86 ? 8% perf-profile.children.cycles-pp.f2fs_get_read_data_page
0.96 ? 5% +1.3 2.25 ? 15% perf-profile.children.cycles-pp.hrtimer_interrupt
0.97 ? 5% +1.3 2.28 ? 15% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
2.28 +1.3 3.60 ? 11% perf-profile.children.cycles-pp.__submit_bio_noacct
2.28 +1.3 3.60 ? 11% perf-profile.children.cycles-pp.__submit_bio
1.08 +1.5 2.61 ? 10% perf-profile.children.cycles-pp.brd_do_bvec
1.09 ? 2% +1.5 2.62 ? 10% perf-profile.children.cycles-pp.brd_submit_bio
1.74 +1.7 3.41 ? 11% perf-profile.children.cycles-pp.__submit_merged_bio
2.53 ? 14% +2.0 4.51 ? 21% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.38 ? 4% +2.0 3.38 ? 13% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.00 +3.4 3.41 ? 7% perf-profile.children.cycles-pp.move_data_page
1.81 +4.2 5.97 ? 8% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +5.3 5.29 ? 8% perf-profile.children.cycles-pp.gc_data_segment
0.00 +6.6 6.63 ? 8% perf-profile.children.cycles-pp.do_garbage_collect
0.00 +6.8 6.77 ? 8% perf-profile.children.cycles-pp.f2fs_gc
23.00 +7.4 30.39 ? 6% perf-profile.children.cycles-pp.f2fs_fsync_node_pages
13.74 ? 3% +9.6 23.30 ? 11% perf-profile.children.cycles-pp.file_write_and_wait_range
13.68 ? 3% +9.6 23.28 ? 11% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
13.67 ? 3% +9.6 23.28 ? 11% perf-profile.children.cycles-pp.do_writepages
13.65 ? 3% +9.6 23.28 ? 11% perf-profile.children.cycles-pp.__f2fs_write_data_pages
13.52 ? 3% +9.7 23.22 ? 11% perf-profile.children.cycles-pp.f2fs_write_cache_pages
48.01 +9.9 57.90 ? 4% perf-profile.children.cycles-pp.write
47.65 +10.1 57.77 ? 4% perf-profile.children.cycles-pp.ksys_write
47.63 +10.1 57.77 ? 4% perf-profile.children.cycles-pp.vfs_write
47.53 +10.2 57.73 ? 4% perf-profile.children.cycles-pp.new_sync_write
47.50 +10.2 57.72 ? 4% perf-profile.children.cycles-pp.f2fs_file_write_iter
48.03 +10.4 58.38 ? 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
48.01 +10.4 58.36 ? 4% perf-profile.children.cycles-pp.do_syscall_64
46.02 +11.1 57.12 ? 4% perf-profile.children.cycles-pp.f2fs_do_sync_file
10.08 ? 4% +11.8 21.92 ? 11% perf-profile.children.cycles-pp.f2fs_write_single_data_page
15.50 +13.0 28.54 ? 7% perf-profile.children.cycles-pp.__write_node_page
10.99 ? 7% +23.4 34.36 ? 11% perf-profile.children.cycles-pp.osq_lock
13.58 ? 3% +25.8 39.39 ? 10% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
10.84 ? 5% +27.6 38.48 ? 10% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.18 ? 3% +33.7 33.88 ? 14% perf-profile.children.cycles-pp.f2fs_balance_fs
44.19 -10.1 34.04 ? 9% perf-profile.self.cycles-pp.mwait_idle_with_hints
5.48 ? 3% -3.9 1.63 ? 18% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
4.64 -3.3 1.37 ? 17% perf-profile.self.cycles-pp.update_sg_lb_stats
1.72 ? 2% -1.4 0.28 ? 14% perf-profile.self.cycles-pp.find_get_pages_range_tag
0.88 -0.6 0.28 ? 15% perf-profile.self.cycles-pp.idle_cpu
0.66 ? 2% -0.5 0.13 ? 25% perf-profile.self.cycles-pp.release_pages
0.67 ? 4% -0.5 0.18 ? 19% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
0.56 ? 2% -0.4 0.17 ? 23% perf-profile.self.cycles-pp.__schedule
0.46 ? 2% -0.3 0.12 ? 25% perf-profile.self.cycles-pp.xas_find_marked
0.71 ? 2% -0.3 0.37 ? 26% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.76 ? 2% -0.3 0.43 ? 17% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.52 ? 2% -0.3 0.21 ? 24% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.46 ? 12% -0.3 0.17 ? 26% perf-profile.self.cycles-pp.__percpu_counter_sum
0.38 ? 2% -0.3 0.09 ? 24% perf-profile.self.cycles-pp.f2fs_fsync_node_pages
0.46 -0.3 0.18 ? 18% perf-profile.self.cycles-pp._find_next_bit
0.45 ? 5% -0.3 0.17 ? 11% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.69 ? 5% -0.3 0.44 ? 15% perf-profile.self.cycles-pp.down_read
0.36 ? 4% -0.2 0.11 ? 18% perf-profile.self.cycles-pp.update_rq_clock
0.31 ? 3% -0.2 0.08 ? 16% perf-profile.self.cycles-pp.enqueue_entity
0.34 ? 4% -0.2 0.12 ? 17% perf-profile.self.cycles-pp.update_load_avg
0.30 ? 3% -0.2 0.09 ? 29% perf-profile.self.cycles-pp.enqueue_task_fair
0.32 ? 2% -0.2 0.12 ? 17% perf-profile.self.cycles-pp.__list_del_entry_valid
0.29 ? 5% -0.2 0.09 ? 18% perf-profile.self.cycles-pp.newidle_balance
0.31 ? 4% -0.2 0.12 ? 20% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.26 ? 4% -0.2 0.08 ? 27% perf-profile.self.cycles-pp.rwsem_mark_wake
0.24 ? 3% -0.2 0.06 ? 53% perf-profile.self.cycles-pp.__switch_to_asm
0.34 ? 2% -0.2 0.18 ? 18% perf-profile.self.cycles-pp.f2fs_is_valid_blkaddr
0.24 ? 3% -0.2 0.08 ? 26% perf-profile.self.cycles-pp.__list_add_valid
0.28 ? 5% -0.2 0.13 ? 26% perf-profile.self.cycles-pp.down_write
0.22 ? 7% -0.2 0.07 ? 21% perf-profile.self.cycles-pp.__switch_to
0.21 ? 4% -0.1 0.06 ? 46% perf-profile.self.cycles-pp.llist_add_batch
0.18 ? 2% -0.1 0.03 ?100% perf-profile.self.cycles-pp.flush_smp_call_function_from_idle
0.16 ? 4% -0.1 0.03 ?100% perf-profile.self.cycles-pp.wake_q_add
0.19 ? 6% -0.1 0.06 ? 50% perf-profile.self.cycles-pp.f2fs_do_sync_file
0.18 ? 6% -0.1 0.05 ? 50% perf-profile.self.cycles-pp.cpumask_next_and
0.16 ? 4% -0.1 0.03 ?102% perf-profile.self.cycles-pp.__write_node_page
0.19 ? 5% -0.1 0.06 ? 49% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
0.18 ? 6% -0.1 0.05 ? 51% perf-profile.self.cycles-pp.llist_reverse_order
0.17 ? 7% -0.1 0.04 ? 45% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.17 ? 5% -0.1 0.05 ? 47% perf-profile.self.cycles-pp.update_curr
0.16 ? 5% -0.1 0.04 ? 73% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.28 ? 2% -0.1 0.16 ? 17% perf-profile.self.cycles-pp.update_sit_entry
0.20 ? 6% -0.1 0.08 ? 17% perf-profile.self.cycles-pp.__update_load_avg_se
0.16 ? 3% -0.1 0.05 ? 46% perf-profile.self.cycles-pp.finish_task_switch
0.20 ? 5% -0.1 0.09 ? 23% perf-profile.self.cycles-pp.update_sd_lb_stats
0.14 ? 4% -0.1 0.03 ?100% perf-profile.self.cycles-pp.dequeue_task_fair
0.14 ? 5% -0.1 0.04 ? 71% perf-profile.self.cycles-pp.__has_merged_page
0.28 ? 6% -0.1 0.18 ? 7% perf-profile.self.cycles-pp.__radix_tree_lookup
0.23 ? 4% -0.1 0.13 ? 23% perf-profile.self.cycles-pp.f2fs_allocate_data_block
0.18 ? 5% -0.1 0.08 ? 23% perf-profile.self.cycles-pp.update_cfs_group
0.13 ? 2% -0.1 0.04 ? 71% perf-profile.self.cycles-pp.find_busiest_queue
0.12 ? 5% -0.1 0.03 ?100% perf-profile.self.cycles-pp.osq_unlock
0.12 ? 14% -0.1 0.04 ?101% perf-profile.self.cycles-pp.__f2fs_write_data_pages
0.11 ? 3% -0.1 0.02 ? 99% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.18 ? 3% -0.1 0.10 ? 21% perf-profile.self.cycles-pp.mutex_lock
0.10 ? 10% -0.1 0.02 ? 99% perf-profile.self.cycles-pp.nr_iowait_cpu
0.12 ? 5% -0.1 0.07 ? 21% perf-profile.self.cycles-pp.do_idle
0.12 ? 6% -0.1 0.06 ? 17% perf-profile.self.cycles-pp.__mark_inode_dirty
0.12 ? 6% -0.1 0.07 ? 21% perf-profile.self.cycles-pp.__lookup_nat_cache
0.15 ? 3% -0.0 0.10 ? 22% perf-profile.self.cycles-pp.f2fs_submit_page_write
0.08 ? 10% -0.0 0.03 ?100% perf-profile.self.cycles-pp.locate_dirty_segment
0.08 ? 10% -0.0 0.04 ? 75% perf-profile.self.cycles-pp.xas_set_mark
0.07 ? 11% -0.0 0.04 ? 71% perf-profile.self.cycles-pp.folio_clear_dirty_for_io
0.10 ? 5% -0.0 0.07 ? 11% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.11 ? 8% +0.0 0.15 ? 7% perf-profile.self.cycles-pp.clear_page_erms
0.06 ? 8% +0.0 0.10 ? 13% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.06 ? 9% perf-profile.self.cycles-pp.__cond_resched
0.10 ? 8% +0.1 0.16 ? 9% perf-profile.self.cycles-pp.read_tsc
0.06 ? 6% +0.1 0.12 ? 15% perf-profile.self.cycles-pp.folio_unlock
0.00 +0.1 0.06 ? 14% perf-profile.self.cycles-pp.is_alive
0.00 +0.1 0.06 ? 28% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.04 ? 44% +0.1 0.11 ? 31% perf-profile.self.cycles-pp.tick_sched_do_timer
0.00 +0.1 0.06 ? 11% perf-profile.self.cycles-pp.move_data_page
0.01 ?223% +0.1 0.07 ? 12% perf-profile.self.cycles-pp.f2fs_update_dirty_folio
0.09 ? 10% +0.1 0.16 ? 14% perf-profile.self.cycles-pp.__is_cp_guaranteed
0.00 +0.1 0.07 ? 20% perf-profile.self.cycles-pp.f2fs_do_write_data_page
0.00 +0.1 0.07 ? 17% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.08 ? 22% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.08 ? 15% perf-profile.self.cycles-pp.folio_mark_accessed
0.00 +0.1 0.08 ? 24% perf-profile.self.cycles-pp.__radix_tree_preload
0.00 +0.1 0.09 ? 12% perf-profile.self.cycles-pp.gc_data_segment
0.04 ? 71% +0.1 0.13 ? 5% perf-profile.self.cycles-pp.read_node_page
0.00 +0.1 0.10 ? 14% perf-profile.self.cycles-pp.io_serial_in
0.12 ? 8% +0.1 0.22 ? 12% perf-profile.self.cycles-pp.__might_resched
0.00 +0.1 0.10 ? 15% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.05 ? 45% +0.1 0.16 ? 23% perf-profile.self.cycles-pp.calc_global_load_tick
0.00 +0.1 0.12 ? 39% perf-profile.self.cycles-pp.tick_nohz_next_event
0.00 +0.1 0.13 ? 19% perf-profile.self.cycles-pp.f2fs_put_page
0.08 ? 8% +0.1 0.23 ? 16% perf-profile.self.cycles-pp.native_irq_return_iret
0.06 ? 7% +0.2 0.23 ? 16% perf-profile.self.cycles-pp.lapic_next_deadline
0.08 ? 10% +0.2 0.24 ? 14% perf-profile.self.cycles-pp.xas_load
0.03 ?105% +0.2 0.28 ? 20% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.14 ? 15% +0.4 0.50 ? 6% perf-profile.self.cycles-pp.cpuidle_enter_state
0.39 ? 7% +0.4 0.82 ? 19% perf-profile.self.cycles-pp.ktime_get
0.19 ? 6% +0.5 0.67 ? 11% perf-profile.self.cycles-pp.__filemap_get_folio
0.36 +0.8 1.14 ? 11% perf-profile.self.cycles-pp.copy_to_brd
1.73 +4.2 5.92 ? 8% perf-profile.self.cycles-pp.rwsem_spin_on_owner
10.91 ? 7% +23.3 34.17 ? 11% perf-profile.self.cycles-pp.osq_lock
***************************************************************************************************
lkp-icl-2sp2: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase/ucode:
gcc-11/performance/1BRD_48G/f2fs/x86_64-rhel-8.3/3000/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp2/disk_src/aim7/0xd000331
commit:
86aef22d0f ("f2fs: fix to do sanity check on total_data_blocks")
b11c83553a ("f2fs: fix deadloop in foreground GC")
86aef22d0f7cc71d b11c83553a02b58ee0934cb80c8
---------------- ---------------------------
%stddev %change %stddev
\ | \
191692 -86.1% 26720 ? 7% aim7.jobs-per-min
94.15 +619.6% 677.45 ? 7% aim7.time.elapsed_time
94.15 +619.6% 677.45 ? 7% aim7.time.elapsed_time.max
1.079e+08 +1504.5% 1.731e+09 ? 8% aim7.time.file_system_outputs
30777 ? 9% -31.0% 21227 ? 8% aim7.time.involuntary_context_switches
238509 ? 2% +38.4% 330038 ? 2% aim7.time.minor_page_faults
7169 +153.2% 18150 ? 5% aim7.time.system_time
5416041 ? 6% +32.8% 7195068 ? 4% aim7.time.voluntary_context_switches
5.03e+09 ? 2% +1260.8% 6.845e+10 ? 8% cpuidle..time
14205173 ? 2% +910.0% 1.435e+08 ? 7% cpuidle..usage
41.23 ? 2% +89.6% 78.17 iostat.cpu.idle
58.49 -62.8% 21.78 ? 3% iostat.cpu.system
6640245 +10.4% 7333613 numa-numastat.node0.local_node
6699322 +10.2% 7380409 numa-numastat.node0.numa_hit
151.26 ? 2% +385.7% 734.74 ? 6% uptime.boot
11446 ? 4% +551.1% 74528 ? 7% uptime.idle
40.04 ? 2% +38.1 78.11 mpstat.cpu.all.idle%
0.97 -0.2 0.78 ? 2% mpstat.cpu.all.irq%
0.19 ? 2% -0.1 0.07 ? 3% mpstat.cpu.all.soft%
58.52 -37.5 21.00 ? 4% mpstat.cpu.all.sys%
0.27 ? 2% -0.2 0.05 ? 5% mpstat.cpu.all.usr%
40.67 +90.6% 77.50 vmstat.cpu.id
58.00 -62.9% 21.50 ? 3% vmstat.cpu.sy
25675 ? 3% +4555.8% 1195409 vmstat.io.bo
70.17 ? 6% -61.8% 26.83 ? 5% vmstat.procs.r
113043 ? 6% -79.3% 23407 ? 3% vmstat.system.cs
326529 -18.0% 267632 vmstat.system.in
2114 ? 65% +239.9% 7185 ? 16% numa-meminfo.node0.Active(anon)
8110 ? 16% -30.5% 5637 ? 22% numa-meminfo.node0.Active(file)
427.17 ? 63% +620.4% 3077 ? 13% numa-meminfo.node0.Inactive(file)
110266 ? 7% -47.0% 58459 ? 16% numa-meminfo.node1.Active
103615 ? 8% -48.4% 53463 ? 17% numa-meminfo.node1.Active(anon)
38.50 ? 78% +8625.1% 3359 ? 66% numa-meminfo.node1.Dirty
267.83 ? 84% +1026.9% 3018 ? 19% numa-meminfo.node1.Inactive(file)
117328 ? 6% -46.9% 62307 ? 13% numa-meminfo.node1.Shmem
120444 ? 7% -40.8% 71292 ? 14% meminfo.Active
105723 ? 7% -42.6% 60648 ? 16% meminfo.Active(anon)
14720 ? 12% -27.7% 10643 meminfo.Active(file)
116158 +69.0% 196251 meminfo.AnonHugePages
100.17 ?144% +4887.5% 4995 ? 5% meminfo.Dirty
557.50 ? 82% +990.6% 6080 ? 5% meminfo.Inactive(file)
51229 ? 3% -26.3% 37768 meminfo.Mapped
445.00 ? 70% +340.2% 1958 ? 16% meminfo.Mlocked
132595 ? 6% -39.4% 80367 ? 12% meminfo.Shmem
1556 -62.8% 578.67 ? 3% turbostat.Avg_MHz
60.01 -37.7 22.31 ? 3% turbostat.Busy%
14086503 ? 2% +917.5% 1.433e+08 ? 7% turbostat.C1
40.63 ? 2% +38.1 78.68 turbostat.C1%
39.99 ? 2% +94.3% 77.69 turbostat.CPU%c1
0.09 +20.4% 0.11 ? 3% turbostat.IPC
31993196 +469.0% 1.82e+08 ? 7% turbostat.IRQ
0.02 ?134% -0.0 0.00 turbostat.POLL%
67.50 -3.5% 65.17 turbostat.PkgTmp
309.66 -12.2% 271.95 turbostat.PkgWatt
34.39 -6.7% 32.10 turbostat.RAMWatt
528.17 ? 65% +239.9% 1795 ? 16% numa-vmstat.node0.nr_active_anon
1982 ? 15% -28.8% 1410 ? 22% numa-vmstat.node0.nr_active_file
6586469 +1441.0% 1.015e+08 ? 10% numa-vmstat.node0.nr_dirtied
22.00 ? 87% +2080.3% 479.67 ?125% numa-vmstat.node0.nr_dirty
96.50 ? 55% +695.9% 768.00 ? 13% numa-vmstat.node0.nr_inactive_file
306181 ? 11% +30981.9% 95167127 ? 11% numa-vmstat.node0.nr_written
528.17 ? 65% +239.9% 1795 ? 16% numa-vmstat.node0.nr_zone_active_anon
1982 ? 15% -28.8% 1410 ? 22% numa-vmstat.node0.nr_zone_active_file
96.33 ? 55% +697.2% 768.00 ? 13% numa-vmstat.node0.nr_zone_inactive_file
21.83 ? 87% +17975.6% 3946 ? 3% numa-vmstat.node0.nr_zone_write_pending
6698967 +10.2% 7379461 numa-vmstat.node0.numa_hit
6639890 +10.4% 7332699 numa-vmstat.node0.numa_local
25868 ? 8% -48.3% 13364 ? 17% numa-vmstat.node1.nr_active_anon
6944365 +1555.4% 1.15e+08 ? 18% numa-vmstat.node1.nr_dirtied
123.83 ? 63% +511.2% 756.83 ? 19% numa-vmstat.node1.nr_inactive_file
29327 ? 6% -46.9% 15577 ? 13% numa-vmstat.node1.nr_shmem
323823 ? 5% +33372.3% 1.084e+08 ? 19% numa-vmstat.node1.nr_written
25868 ? 8% -48.3% 13364 ? 17% numa-vmstat.node1.nr_zone_active_anon
123.83 ? 63% +511.0% 756.67 ? 19% numa-vmstat.node1.nr_zone_inactive_file
39.33 ? 78% +10563.6% 4194 ? 5% numa-vmstat.node1.nr_zone_write_pending
26414 ? 7% -42.6% 15160 ? 16% proc-vmstat.nr_active_anon
3690 ? 8% -27.9% 2659 proc-vmstat.nr_active_file
13529359 +1499.9% 2.165e+08 ? 8% proc-vmstat.nr_dirtied
19.17 ?105% +6480.0% 1261 ? 3% proc-vmstat.nr_dirty
621410 -1.9% 609853 proc-vmstat.nr_file_pages
135.83 ? 58% +1020.2% 1521 ? 4% proc-vmstat.nr_inactive_file
13094 ? 3% -25.8% 9710 proc-vmstat.nr_mapped
110.33 ? 70% +343.7% 489.50 ? 16% proc-vmstat.nr_mlock
33157 ? 6% -39.4% 20104 ? 12% proc-vmstat.nr_shmem
47494 +4.3% 49513 proc-vmstat.nr_slab_reclaimable
87494 +2.0% 89241 proc-vmstat.nr_slab_unreclaimable
254.50 ? 13% -33.0% 170.50 ? 5% proc-vmstat.nr_writeback
629986 ? 3% +32211.6% 2.036e+08 ? 8% proc-vmstat.nr_written
26414 ? 7% -42.6% 15160 ? 16% proc-vmstat.nr_zone_active_anon
3690 ? 8% -27.9% 2659 proc-vmstat.nr_zone_active_file
135.83 ? 58% +1020.2% 1521 ? 4% proc-vmstat.nr_zone_inactive_file
19.33 ?106% +42105.2% 8159 proc-vmstat.nr_zone_write_pending
56914 +180.7% 159750 ? 6% proc-vmstat.numa_hint_faults
25773 ? 2% +325.0% 109540 ? 6% proc-vmstat.numa_hint_faults_local
13729792 +9.1% 14980938 proc-vmstat.numa_hit
13614030 +9.2% 14865085 proc-vmstat.numa_local
31141 ? 2% +56.2% 48651 ? 7% proc-vmstat.numa_pages_migrated
65594 ? 15% +237.1% 221103 ? 5% proc-vmstat.numa_pte_updates
1062327 ? 4% +37.2% 1457406 ? 5% proc-vmstat.pgactivate
13729456 +9.1% 14980726 proc-vmstat.pgalloc_normal
11764455 -2.4% 11476399 proc-vmstat.pgdeactivate
668095 +266.3% 2447452 ? 5% proc-vmstat.pgfault
13540930 +9.2% 14781995 proc-vmstat.pgfree
31141 ? 2% +56.2% 48651 ? 7% proc-vmstat.pgmigrate_success
2520104 ? 3% +32209.7% 8.142e+08 ? 8% proc-vmstat.pgpgout
35308 +408.3% 179476 ? 6% proc-vmstat.pgreuse
12430627 -5.6% 11734667 proc-vmstat.pgrotated
1804 ? 2% +5.7% 1906 proc-vmstat.unevictable_pgs_culled
0.40 ? 8% -49.3% 0.20 ? 16% sched_debug.cfs_rq:/.h_nr_running.avg
5154 ? 8% +48.9% 7676 ? 4% sched_debug.cfs_rq:/.load.avg
25602 ? 6% +104.8% 52442 ? 7% sched_debug.cfs_rq:/.load.max
5545 ? 4% +186.6% 15895 ? 6% sched_debug.cfs_rq:/.load.stddev
41.91 ? 25% -73.4% 11.13 ? 18% sched_debug.cfs_rq:/.load_avg.avg
533.00 ? 2% -74.4% 136.45 ? 6% sched_debug.cfs_rq:/.load_avg.max
2.25 ? 11% -92.5% 0.17 ? 75% sched_debug.cfs_rq:/.load_avg.min
122.66 ? 16% -77.5% 27.64 ? 11% sched_debug.cfs_rq:/.load_avg.stddev
1409691 +73.6% 2447011 ? 8% sched_debug.cfs_rq:/.min_vruntime.avg
1478243 +79.6% 2655561 ? 7% sched_debug.cfs_rq:/.min_vruntime.max
1346120 ? 2% +74.4% 2348020 ? 9% sched_debug.cfs_rq:/.min_vruntime.min
33624 ? 10% +58.5% 53296 ? 12% sched_debug.cfs_rq:/.min_vruntime.stddev
0.40 ? 8% -49.3% 0.20 ? 16% sched_debug.cfs_rq:/.nr_running.avg
27.29 ? 39% -81.3% 5.10 ? 36% sched_debug.cfs_rq:/.removed.load_avg.avg
512.00 -83.0% 86.83 ? 5% sched_debug.cfs_rq:/.removed.load_avg.max
111.99 ? 19% -82.3% 19.85 ? 16% sched_debug.cfs_rq:/.removed.load_avg.stddev
12.75 ? 39% -81.6% 2.34 ? 34% sched_debug.cfs_rq:/.removed.runnable_avg.avg
268.58 ? 5% -83.2% 45.17 ? 6% sched_debug.cfs_rq:/.removed.runnable_avg.max
53.01 ? 19% -82.5% 9.29 ? 17% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
12.75 ? 39% -81.7% 2.34 ? 34% sched_debug.cfs_rq:/.removed.util_avg.avg
268.58 ? 5% -83.2% 45.17 ? 6% sched_debug.cfs_rq:/.removed.util_avg.max
53.01 ? 19% -82.5% 9.29 ? 17% sched_debug.cfs_rq:/.removed.util_avg.stddev
487.70 -56.4% 212.56 ? 12% sched_debug.cfs_rq:/.runnable_avg.avg
1136 ? 7% -28.5% 812.06 ? 6% sched_debug.cfs_rq:/.runnable_avg.max
235.92 ? 10% -86.0% 33.04 ? 53% sched_debug.cfs_rq:/.runnable_avg.min
196.81 ? 3% -27.5% 142.63 ? 5% sched_debug.cfs_rq:/.runnable_avg.stddev
30712 ? 7% -342.4% -74456 sched_debug.cfs_rq:/.spread0.avg
-33811 +412.7% -173361 sched_debug.cfs_rq:/.spread0.min
33381 ? 10% +59.6% 53271 ? 12% sched_debug.cfs_rq:/.spread0.stddev
486.63 -56.4% 212.34 ? 12% sched_debug.cfs_rq:/.util_avg.avg
1121 ? 7% -27.7% 810.37 ? 6% sched_debug.cfs_rq:/.util_avg.max
235.67 ? 10% -86.0% 33.01 ? 53% sched_debug.cfs_rq:/.util_avg.min
196.05 ? 3% -27.3% 142.44 ? 5% sched_debug.cfs_rq:/.util_avg.stddev
594.08 ? 5% -24.2% 450.27 ? 5% sched_debug.cfs_rq:/.util_est_enqueued.max
731061 ? 3% +22.5% 895619 sched_debug.cpu.avg_idle.avg
39474 ? 31% +537.9% 251790 ? 21% sched_debug.cpu.avg_idle.min
226984 ? 2% -29.9% 159215 ? 7% sched_debug.cpu.avg_idle.stddev
85858 ? 3% +344.8% 381940 ? 5% sched_debug.cpu.clock.avg
85876 ? 3% +344.8% 381964 ? 5% sched_debug.cpu.clock.max
85837 ? 3% +344.9% 381913 ? 5% sched_debug.cpu.clock.min
10.76 ? 6% +34.0% 14.42 ? 7% sched_debug.cpu.clock.stddev
85445 ? 3% +343.7% 379106 ? 5% sched_debug.cpu.clock_task.avg
85529 ? 3% +343.5% 379344 ? 5% sched_debug.cpu.clock_task.max
82296 ? 3% +355.5% 374899 ? 6% sched_debug.cpu.clock_task.min
2773 ? 8% -47.4% 1457 ? 17% sched_debug.cpu.curr->pid.avg
7423 +119.2% 16269 ? 3% sched_debug.cpu.curr->pid.max
2306 ? 2% +25.0% 2884 ? 3% sched_debug.cpu.curr->pid.stddev
0.40 ? 8% -49.3% 0.20 ? 16% sched_debug.cpu.nr_running.avg
28511 ? 6% +116.5% 61721 ? 5% sched_debug.cpu.nr_switches.avg
61951 ? 15% +61.5% 100048 ? 10% sched_debug.cpu.nr_switches.max
24348 ? 6% +126.6% 55173 ? 5% sched_debug.cpu.nr_switches.min
4617 ? 11% +29.1% 5961 ? 13% sched_debug.cpu.nr_switches.stddev
2.072e+09 ? 5% -27.5% 1.501e+09 ? 13% sched_debug.cpu.nr_uninterruptible.avg
2.121e+09 -12.2% 1.862e+09 ? 7% sched_debug.cpu.nr_uninterruptible.stddev
85839 ? 3% +344.9% 381913 ? 5% sched_debug.cpu_clk
84612 ? 4% +349.9% 380687 ? 5% sched_debug.ktime
89997 ? 3% +328.1% 385309 ? 5% sched_debug.sched_clk
3.55 +14.3% 4.06 perf-stat.i.MPKI
1.125e+10 -56.8% 4.864e+09 ? 2% perf-stat.i.branch-instructions
0.34 ? 2% -0.2 0.17 ? 2% perf-stat.i.branch-miss-rate%
22597340 ? 2% -67.9% 7262182 ? 2% perf-stat.i.branch-misses
31.68 -6.1 25.54 perf-stat.i.cache-miss-rate%
65976477 -63.6% 24023418 ? 2% perf-stat.i.cache-misses
1.931e+08 -52.5% 91760911 perf-stat.i.cache-references
115551 ? 6% -79.9% 23281 ? 3% perf-stat.i.context-switches
3.63 -15.5% 3.06 perf-stat.i.cpi
1.992e+11 -63.3% 7.318e+10 ? 3% perf-stat.i.cpu-cycles
14389 ? 6% -91.8% 1178 ? 12% perf-stat.i.cpu-migrations
3978 ? 3% -21.8% 3109 ? 3% perf-stat.i.cycles-between-cache-misses
0.01 ? 8% -0.0 0.01 ? 4% perf-stat.i.dTLB-load-miss-rate%
1334916 ? 10% -67.9% 429008 ? 5% perf-stat.i.dTLB-load-misses
1.473e+10 -59.2% 6.006e+09 ? 2% perf-stat.i.dTLB-loads
0.01 ? 7% +0.0 0.02 ? 30% perf-stat.i.dTLB-store-miss-rate%
1.629e+09 -53.1% 7.634e+08 perf-stat.i.dTLB-stores
5.543e+10 -57.7% 2.347e+10 ? 2% perf-stat.i.instructions
0.29 +15.0% 0.33 perf-stat.i.ipc
21.38 ? 66% -76.0% 5.13 ? 41% perf-stat.i.major-faults
1.56 -63.3% 0.57 ? 3% perf-stat.i.metric.GHz
253.25 +196.0% 749.69 perf-stat.i.metric.K/sec
217.15 -58.1% 90.93 ? 2% perf-stat.i.metric.M/sec
6025 -43.5% 3403 perf-stat.i.minor-faults
15622422 ? 2% -74.8% 3941745 ? 3% perf-stat.i.node-load-misses
1484660 -68.2% 472318 ? 4% perf-stat.i.node-loads
61.58 +21.8 83.38 perf-stat.i.node-store-miss-rate%
8387611 -13.5% 7257174 perf-stat.i.node-store-misses
4451007 -68.3% 1412548 ? 2% perf-stat.i.node-stores
6046 -43.6% 3408 perf-stat.i.page-faults
3.49 +12.0% 3.91 perf-stat.overall.MPKI
0.20 -0.1 0.15 ? 2% perf-stat.overall.branch-miss-rate%
34.12 -7.9 26.20 perf-stat.overall.cache-miss-rate%
3.59 -13.2% 3.12 perf-stat.overall.cpi
0.01 ? 9% -0.0 0.01 ? 4% perf-stat.overall.dTLB-load-miss-rate%
0.01 ? 10% +0.0 0.02 ? 30% perf-stat.overall.dTLB-store-miss-rate%
0.28 +15.2% 0.32 perf-stat.overall.ipc
91.27 -2.0 89.30 perf-stat.overall.node-load-miss-rate%
65.25 +18.4 83.67 perf-stat.overall.node-store-miss-rate%
1.121e+10 -56.6% 4.866e+09 ? 2% perf-stat.ps.branch-instructions
22469266 ? 2% -67.7% 7263322 ? 2% perf-stat.ps.branch-misses
65766621 -63.5% 24032139 ? 2% perf-stat.ps.cache-misses
1.928e+08 -52.4% 91723202 perf-stat.ps.cache-references
114943 ? 6% -79.7% 23320 ? 3% perf-stat.ps.context-switches
1.985e+11 -63.1% 7.326e+10 ? 3% perf-stat.ps.cpu-cycles
14298 ? 6% -91.7% 1187 ? 12% perf-stat.ps.cpu-migrations
1352336 ? 10% -68.3% 429114 ? 5% perf-stat.ps.dTLB-load-misses
1.468e+10 -59.1% 6.01e+09 ? 3% perf-stat.ps.dTLB-loads
1.622e+09 -53.0% 7.631e+08 perf-stat.ps.dTLB-stores
5.522e+10 -57.5% 2.348e+10 ? 2% perf-stat.ps.instructions
20.05 ? 66% -74.9% 5.03 ? 40% perf-stat.ps.major-faults
5853 -42.0% 3394 perf-stat.ps.minor-faults
15565494 ? 2% -74.6% 3947858 ? 3% perf-stat.ps.node-load-misses
1488839 -68.2% 473128 ? 4% perf-stat.ps.node-loads
8353311 -13.2% 7248284 perf-stat.ps.node-store-misses
4449260 -68.2% 1414235 ? 2% perf-stat.ps.node-stores
5873 -42.1% 3399 perf-stat.ps.page-faults
5.288e+12 +200.8% 1.591e+13 ? 5% perf-stat.total.instructions
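
(Quick sanity check on the derived ratios above, nothing new: cpi is just cpu-cycles / instructions, i.e. 1.992e+11 / 5.543e+10 ~= 3.59 before vs 7.318e+10 / 2.347e+10 ~= 3.12 after, and ipc is its reciprocal, 0.28 -> 0.32. Per-instruction cost is actually a bit better after the commit; the jobs-per-min drop comes from the ~57% lower sustained instruction rate over a ~6x longer elapsed time.)
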
43.86 -10.8 33.04 ? 10% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
43.14 -9.9 33.20 ? 9% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
45.72 -7.8 37.94 ? 6% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64
46.05 -7.2 38.82 ? 6% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
49.29 -7.0 42.32 ? 4% perf-profile.calltrace.cycles-pp.unlink
49.27 -7.0 42.30 ? 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
49.26 -7.0 42.30 ? 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
49.25 -7.0 42.30 ? 4% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
49.22 -6.9 42.28 ? 4% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
44.53 -6.3 38.23 ? 5% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open
44.87 -5.7 39.15 ? 4% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
2.55 ? 5% -1.6 0.97 ? 24% perf-profile.calltrace.cycles-pp.f2fs_do_add_link.f2fs_create.lookup_open.open_last_lookups.path_openat
1.80 ? 2% -1.3 0.55 ? 49% perf-profile.calltrace.cycles-pp.f2fs_add_regular_entry.f2fs_add_dentry.f2fs_do_add_link.f2fs_create.lookup_open
1.86 ? 2% -1.1 0.76 ? 23% perf-profile.calltrace.cycles-pp.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.80 ? 2% -1.1 0.71 ? 25% perf-profile.calltrace.cycles-pp.f2fs_add_dentry.f2fs_do_add_link.f2fs_create.lookup_open.open_last_lookups
1.78 ? 2% -1.1 0.72 ? 23% perf-profile.calltrace.cycles-pp.f2fs_evict_inode.evict.do_unlinkat.__x64_sys_unlink.do_syscall_64
0.00 +0.9 0.88 ? 20% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open
0.00 +0.9 0.93 ? 26% perf-profile.calltrace.cycles-pp.do_garbage_collect.f2fs_gc.f2fs_unlink.vfs_unlink.do_unlinkat
0.00 +1.0 0.98 ? 26% perf-profile.calltrace.cycles-pp.f2fs_move_node_page.gc_node_segment.do_garbage_collect.f2fs_gc.f2fs_create
0.00 +1.0 1.02 ? 26% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_unlink.vfs_unlink
0.00 +1.0 1.03 ? 26% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_unlink.vfs_unlink.do_unlinkat
0.00 +1.1 1.06 ? 25% perf-profile.calltrace.cycles-pp.f2fs_balance_fs.f2fs_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink
0.00 +1.1 1.09 ? 25% perf-profile.calltrace.cycles-pp.f2fs_gc.f2fs_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink
0.00 +1.3 1.29 ? 25% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.acpi_idle_do_entry.acpi_idle_enter.cpuidle_enter_state
0.00 +1.4 1.44 ? 27% perf-profile.calltrace.cycles-pp.gc_node_segment.do_garbage_collect.f2fs_gc.f2fs_create.lookup_open
0.00 +1.5 1.49 ? 25% perf-profile.calltrace.cycles-pp.acpi_idle_do_entry.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
0.98 ? 2% +1.6 2.56 ? 18% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.94 ? 2% +1.6 2.54 ? 18% perf-profile.calltrace.cycles-pp.f2fs_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64
0.00 +1.7 1.68 ? 27% perf-profile.calltrace.cycles-pp.do_garbage_collect.f2fs_gc.f2fs_create.lookup_open.open_last_lookups
0.00 +2.0 2.03 ? 26% perf-profile.calltrace.cycles-pp.f2fs_gc.f2fs_create.lookup_open.open_last_lookups.path_openat
0.00 +2.5 2.46 ? 26% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_create
1.80 ? 5% +3.1 4.87 ? 20% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
0.76 ? 4% +3.4 4.19 ? 25% perf-profile.calltrace.cycles-pp.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
0.81 ? 4% +3.5 4.29 ? 25% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
0.82 ? 4% +3.5 4.30 ? 25% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
48.98 +3.6 52.54 perf-profile.calltrace.cycles-pp.creat64
48.96 +3.6 52.53 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat64
48.95 +3.6 52.53 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
48.94 +3.6 52.52 perf-profile.calltrace.cycles-pp.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
48.93 +3.6 52.52 perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat64
48.86 +3.6 52.48 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.86 +3.6 52.48 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat.do_syscall_64
1.35 ? 4% +3.7 5.01 ? 21% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
0.87 ? 4% +3.7 4.61 ? 25% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.95 ? 3% +3.7 4.70 ? 24% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
0.96 ? 3% +3.7 4.70 ? 24% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
48.64 +3.8 52.40 perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_creat
0.97 ? 4% +3.8 4.74 ? 24% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
0.56 ? 5% +6.0 6.51 ? 26% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.acpi_idle_do_entry.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter
0.00 +7.3 7.27 ? 27% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_create
3.61 ? 3% +9.6 13.18 ? 21% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
3.54 ? 3% +9.6 13.14 ? 21% perf-profile.calltrace.cycles-pp.f2fs_create.lookup_open.open_last_lookups.path_openat.do_filp_open
0.00 +9.7 9.73 ? 26% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_create.lookup_open
0.00 +9.7 9.74 ? 26% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.f2fs_balance_fs.f2fs_create.lookup_open.open_last_lookups
0.00 +9.8 9.78 ? 26% perf-profile.calltrace.cycles-pp.f2fs_balance_fs.f2fs_create.lookup_open.open_last_lookups.path_openat
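
FWIW, the new f2fs_balance_fs -> rwsem_down_write_slowpath and f2fs_gc -> do_garbage_collect chains that appear under both f2fs_create and f2fs_unlink above point at the foreground-GC gate in the create/unlink path. The standalone mock below is only an assumption-level sketch of that control flow, not the real fs/f2fs/segment.c code: every type and helper in it is a stand-in, and the real argument lists and lock lifetime differ. It just illustrates why, once free sections run low, every creating/unlinking task ends up taking the same write rwsem and doing GC work inline, which is where the osq_lock / rwsem_spin_on_owner cycles above go.

/* Compilable mock, not kernel code: illustrates the control flow behind the
 * f2fs_balance_fs -> rwsem_down_write_slowpath -> f2fs_gc chains in the
 * calltrace above.  All types/helpers are stand-ins (assumption), the real
 * fs/f2fs code differs in arguments and in who releases the lock. */
#include <stdbool.h>
#include <stdio.h>

struct rw_semaphore { int dummy; };
struct f2fs_sb_info { struct rw_semaphore gc_lock; int free_sections; };

static bool has_not_enough_free_secs(struct f2fs_sb_info *sbi)
{
	return sbi->free_sections < 8;         /* arbitrary watermark for the mock */
}

static void down_write(struct rw_semaphore *sem) { (void)sem; }  /* the contended rwsem */
static void up_write(struct rw_semaphore *sem)   { (void)sem; }
static void f2fs_gc(struct f2fs_sb_info *sbi)    { sbi->free_sections++; }  /* pretend FG GC freed a section */

/* Called from the create/unlink paths (see f2fs_balance_fs.f2fs_create and
 * f2fs_balance_fs.f2fs_unlink above): if free sections are low, the caller
 * serializes on gc_lock and runs foreground GC before it may continue. */
static void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
{
	if (!need)
		return;
	if (has_not_enough_free_secs(sbi)) {
		down_write(&sbi->gc_lock);     /* -> rwsem_down_write_slowpath / osq_lock in the profile */
		f2fs_gc(sbi);                  /* -> do_garbage_collect / gc_node_segment */
		up_write(&sbi->gc_lock);       /* mock only; the real code hands the lock to f2fs_gc() */
	}
}

int main(void)
{
	struct f2fs_sb_info sbi = { .free_sections = 4 };
	f2fs_balance_fs(&sbi, true);
	printf("free sections after balance: %d\n", sbi.free_sections);
	return 0;
}

The mock compiles and runs as-is; the only point it makes is the lock ordering seen in the profile (creat()/unlink() -> balance -> write rwsem -> GC), not the actual watermarks or numbers.
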
87.43 -13.2 74.20 ? 6% perf-profile.children.cycles-pp.osq_lock
49.30 -7.0 42.32 ? 4% perf-profile.children.cycles-pp.unlink
49.25 -7.0 42.30 ? 4% perf-profile.children.cycles-pp.__x64_sys_unlink
49.22 -6.9 42.28 ? 4% perf-profile.children.cycles-pp.do_unlinkat
90.83 -3.7 87.15 perf-profile.children.cycles-pp.rwsem_optimistic_spin
98.63 -3.6 95.05 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
98.62 -3.6 95.04 perf-profile.children.cycles-pp.do_syscall_64
91.56 -2.6 88.97 perf-profile.children.cycles-pp.rwsem_down_write_slowpath
2.55 ? 5% -1.6 0.97 ? 24% perf-profile.children.cycles-pp.f2fs_do_add_link
1.80 ? 2% -1.2 0.63 ? 21% perf-profile.children.cycles-pp.f2fs_add_regular_entry
1.86 ? 2% -1.1 0.76 ? 23% perf-profile.children.cycles-pp.evict
1.80 ? 2% -1.1 0.71 ? 25% perf-profile.children.cycles-pp.f2fs_add_dentry
1.78 ? 2% -1.1 0.72 ? 23% perf-profile.children.cycles-pp.f2fs_evict_inode
1.36 ? 3% -0.8 0.53 ? 26% perf-profile.children.cycles-pp.f2fs_init_inode_metadata
1.22 ? 3% -0.7 0.48 ? 26% perf-profile.children.cycles-pp.f2fs_new_inode_page
1.22 ? 3% -0.7 0.48 ? 26% perf-profile.children.cycles-pp.f2fs_new_node_page
1.16 ? 2% -0.7 0.49 ? 23% perf-profile.children.cycles-pp.f2fs_remove_inode_page
1.08 ? 2% -0.6 0.45 ? 23% perf-profile.children.cycles-pp.truncate_node
0.90 ? 11% -0.6 0.33 ? 22% perf-profile.children.cycles-pp.__f2fs_find_entry
0.99 -0.5 0.53 ? 10% perf-profile.children.cycles-pp._raw_spin_lock
0.68 ? 3% -0.4 0.27 ? 20% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.90 ? 3% -0.4 0.50 ? 9% perf-profile.children.cycles-pp.set_node_addr
0.46 ? 22% -0.3 0.14 ? 24% perf-profile.children.cycles-pp.f2fs_find_data_page
0.44 ? 7% -0.3 0.13 ? 34% perf-profile.children.cycles-pp.__schedule
0.52 ? 5% -0.3 0.22 ? 21% perf-profile.children.cycles-pp.invalidate_mapping_pagevec
0.48 ? 2% -0.3 0.19 ? 25% perf-profile.children.cycles-pp.f2fs_new_inode
0.38 ? 8% -0.3 0.10 ? 42% perf-profile.children.cycles-pp.schedule
0.30 ? 7% -0.3 0.04 ?110% perf-profile.children.cycles-pp.pick_next_task_fair
0.28 ? 7% -0.2 0.04 ?108% perf-profile.children.cycles-pp.newidle_balance
0.41 ? 2% -0.2 0.17 ? 20% perf-profile.children.cycles-pp.f2fs_find_target_dentry
0.34 ? 17% -0.2 0.12 ? 23% perf-profile.children.cycles-pp.f2fs_get_dnode_of_data
0.29 ? 26% -0.2 0.07 ? 47% perf-profile.children.cycles-pp.f2fs_get_read_data_page
0.74 ? 9% -0.2 0.53 ? 6% perf-profile.children.cycles-pp.pagecache_get_page
0.72 ? 9% -0.2 0.51 ? 5% perf-profile.children.cycles-pp.__filemap_get_folio
0.28 ? 7% -0.2 0.10 ? 22% perf-profile.children.cycles-pp.load_balance
0.28 ? 3% -0.2 0.12 ? 21% perf-profile.children.cycles-pp.f2fs_inode_dirtied
0.26 ? 2% -0.2 0.10 ? 26% perf-profile.children.cycles-pp.f2fs_update_parent_metadata
0.23 ? 3% -0.2 0.08 ? 23% perf-profile.children.cycles-pp.rcu_do_batch
0.26 -0.1 0.11 ? 22% perf-profile.children.cycles-pp.__xstat64
0.20 ? 8% -0.1 0.06 ? 50% perf-profile.children.cycles-pp.update_sg_lb_stats
0.22 ? 7% -0.1 0.08 ? 17% perf-profile.children.cycles-pp.find_busiest_group
0.22 ? 6% -0.1 0.08 ? 19% perf-profile.children.cycles-pp.update_sd_lb_stats
0.26 ? 3% -0.1 0.13 ? 12% perf-profile.children.cycles-pp.f2fs_update_inode_page
0.23 -0.1 0.10 ? 22% perf-profile.children.cycles-pp.__do_sys_newstat
0.23 ? 3% -0.1 0.10 ? 20% perf-profile.children.cycles-pp.f2fs_mark_inode_dirty_sync
0.22 ? 2% -0.1 0.09 ? 18% perf-profile.children.cycles-pp.__pagevec_release
0.29 -0.1 0.17 ? 8% perf-profile.children.cycles-pp.kmem_cache_free
0.21 ? 2% -0.1 0.09 ? 23% perf-profile.children.cycles-pp.f2fs_find_entry
0.21 -0.1 0.09 ? 23% perf-profile.children.cycles-pp.vfs_fstatat
0.21 ? 3% -0.1 0.09 ? 20% perf-profile.children.cycles-pp.link_path_walk
0.25 ? 3% -0.1 0.13 ? 8% perf-profile.children.cycles-pp.rcu_core
0.20 ? 4% -0.1 0.08 ? 26% perf-profile.children.cycles-pp.filemap_add_folio
0.19 ? 9% -0.1 0.08 ? 34% perf-profile.children.cycles-pp.clear_node_page_dirty
0.18 ? 10% -0.1 0.07 ? 30% perf-profile.children.cycles-pp.remove_mapping
0.18 ? 10% -0.1 0.07 ? 30% perf-profile.children.cycles-pp.__remove_mapping
0.18 ? 2% -0.1 0.08 ? 24% perf-profile.children.cycles-pp.__remove_ino_entry
0.19 ? 4% -0.1 0.09 ? 14% perf-profile.children.cycles-pp.rwsem_wake
0.14 ? 2% -0.1 0.04 ? 72% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.14 ? 3% -0.1 0.04 ? 75% perf-profile.children.cycles-pp.is_extension_exist
0.16 ? 2% -0.1 0.06 ? 19% perf-profile.children.cycles-pp.f2fs_inode_synced
0.14 ? 4% -0.1 0.04 ? 75% perf-profile.children.cycles-pp.new_inode
0.19 ? 5% -0.1 0.10 ? 16% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.14 ? 3% -0.1 0.04 ? 75% perf-profile.children.cycles-pp.f2fs_init_extent_tree
0.14 ? 4% -0.1 0.05 ? 49% perf-profile.children.cycles-pp.f2fs_add_orphan_inode
0.14 ? 4% -0.1 0.05 ? 49% perf-profile.children.cycles-pp.__add_ino_entry
0.16 ? 4% -0.1 0.07 ? 26% perf-profile.children.cycles-pp.__filemap_add_folio
0.12 ? 3% -0.1 0.03 ?101% perf-profile.children.cycles-pp.release_pages
0.15 ? 2% -0.1 0.06 ? 19% perf-profile.children.cycles-pp.vfs_statx
0.13 ? 2% -0.1 0.04 ? 75% perf-profile.children.cycles-pp.__grab_extent_tree
0.11 ? 4% -0.1 0.03 ?102% perf-profile.children.cycles-pp.alloc_inode
0.13 ? 2% -0.1 0.05 ? 48% perf-profile.children.cycles-pp.filename_lookup
0.13 -0.1 0.05 ? 47% perf-profile.children.cycles-pp.path_lookupat
0.13 ? 3% -0.1 0.05 ? 48% perf-profile.children.cycles-pp.wake_up_q
0.17 ? 2% -0.1 0.09 ? 9% perf-profile.children.cycles-pp.f2fs_update_inode
0.11 ? 5% -0.1 0.03 ?102% perf-profile.children.cycles-pp.f2fs_destroy_extent_tree
0.12 ? 4% -0.1 0.04 ? 75% perf-profile.children.cycles-pp.getname_flags
0.13 ? 5% -0.1 0.06 ? 18% perf-profile.children.cycles-pp.try_to_wake_up
0.13 ? 20% -0.1 0.06 ? 49% perf-profile.children.cycles-pp.read_node_page
0.12 ? 21% -0.1 0.06 ? 50% perf-profile.children.cycles-pp.xas_start
0.11 ? 3% -0.1 0.04 ? 47% perf-profile.children.cycles-pp.radix_tree_delete_item
0.08 ? 6% -0.0 0.02 ? 99% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.08 ? 4% -0.0 0.03 ? 70% perf-profile.children.cycles-pp.__list_add_valid
0.13 ? 8% -0.0 0.08 ? 16% perf-profile.children.cycles-pp.__mark_inode_dirty
0.07 -0.0 0.02 ? 99% perf-profile.children.cycles-pp.f2fs_drop_nlink
0.12 ? 9% -0.0 0.08 perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.08 ? 4% -0.0 0.05 ? 8% perf-profile.children.cycles-pp.__slab_free
0.08 ? 8% -0.0 0.05 ? 8% perf-profile.children.cycles-pp.folio_mark_accessed
0.08 ? 8% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.asm_sysvec_reschedule_ipi
0.16 ? 4% -0.0 0.13 ? 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.05 ? 45% +0.0 0.08 ? 10% perf-profile.children.cycles-pp.folio_clear_dirty_for_io
0.03 ? 70% +0.0 0.08 ? 16% perf-profile.children.cycles-pp.__cond_resched
0.29 ? 3% +0.0 0.34 ? 4% perf-profile.children.cycles-pp.scheduler_tick
0.08 ? 6% +0.1 0.13 ? 16% perf-profile.children.cycles-pp.up_read
0.08 ? 5% +0.1 0.14 ? 11% perf-profile.children.cycles-pp.__list_del_entry_valid
0.03 ? 70% +0.1 0.10 ? 20% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.1 0.06 ? 14% perf-profile.children.cycles-pp.__xa_set_mark
0.20 ? 4% +0.1 0.26 ? 11% perf-profile.children.cycles-pp.filemap_dirty_folio
0.00 +0.1 0.07 ? 23% perf-profile.children.cycles-pp.native_sched_clock
0.00 +0.1 0.07 ? 15% perf-profile.children.cycles-pp.mutex_lock
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.get_next_nat_page
0.00 +0.1 0.08 ? 18% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.1 0.08 ? 26% perf-profile.children.cycles-pp.f2fs_do_write_meta_page
0.34 ? 2% +0.1 0.42 ? 3% perf-profile.children.cycles-pp.update_process_times
0.13 ? 2% +0.1 0.21 ? 16% perf-profile.children.cycles-pp.down_read
0.35 ? 2% +0.1 0.43 ? 3% perf-profile.children.cycles-pp.tick_sched_handle
0.00 +0.1 0.08 ? 21% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.08 ? 17% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.1 0.08 ? 25% perf-profile.children.cycles-pp.__f2fs_write_meta_page
0.00 +0.1 0.08 ? 20% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.1 0.08 ? 22% perf-profile.children.cycles-pp.sched_clock_cpu
0.00 +0.1 0.10 ? 21% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.00 +0.1 0.10 ? 25% perf-profile.children.cycles-pp.f2fs_sync_meta_pages
0.02 ?141% +0.1 0.12 ? 16% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.11 ? 35% perf-profile.children.cycles-pp.tick_nohz_next_event
0.00 +0.1 0.11 ? 31% perf-profile.children.cycles-pp.f2fs_ra_meta_pages
0.00 +0.1 0.11 ? 26% perf-profile.children.cycles-pp.do_checkpoint
0.00 +0.1 0.12 ? 65% perf-profile.children.cycles-pp.tick_irq_enter
0.37 ? 3% +0.1 0.49 ? 6% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.1 0.13 ? 34% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +0.1 0.13 ? 29% perf-profile.children.cycles-pp.__folio_start_writeback
0.00 +0.1 0.14 ? 18% perf-profile.children.cycles-pp.block_operations
0.00 +0.1 0.14 ? 54% perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.2 0.16 ? 26% perf-profile.children.cycles-pp.__folio_end_writeback
0.14 ? 2% +0.2 0.31 ? 18% perf-profile.children.cycles-pp.f2fs_get_node_info
0.09 ? 8% +0.2 0.26 ? 27% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.2 0.18 ? 24% perf-profile.children.cycles-pp.folio_end_writeback
0.00 +0.2 0.21 ? 23% perf-profile.children.cycles-pp.__flush_nat_entry_set
0.01 ?223% +0.2 0.23 ? 20% perf-profile.children.cycles-pp.__lookup_nat_cache
0.43 ? 2% +0.2 0.66 ? 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.24 ? 22% perf-profile.children.cycles-pp.f2fs_flush_nat_entries
0.00 +0.2 0.24 ? 25% perf-profile.children.cycles-pp.menu_select
0.00 +0.3 0.26 ? 24% perf-profile.children.cycles-pp.f2fs_allocate_data_block
0.09 ? 4% +0.3 0.38 ? 39% perf-profile.children.cycles-pp.ktime_get
0.00 +0.3 0.32 ? 24% perf-profile.children.cycles-pp.copy_to_brd
0.00 +0.3 0.33 ? 25% perf-profile.children.cycles-pp.f2fs_write_end_io
0.00 +0.4 0.40 ? 26% perf-profile.children.cycles-pp.__submit_merged_write_cond
0.07 ? 14% +0.4 0.49 ? 22% perf-profile.children.cycles-pp.f2fs_write_checkpoint
0.56 ? 2% +0.5 1.03 ? 14% perf-profile.children.cycles-pp.hrtimer_interrupt
0.00 +0.5 0.48 ? 26% perf-profile.children.cycles-pp.brd_do_bvec
0.00 +0.5 0.50 ? 26% perf-profile.children.cycles-pp.brd_submit_bio
0.58 +0.5 1.11 ? 14% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.00 +0.5 0.55 ? 25% perf-profile.children.cycles-pp.f2fs_submit_page_write
0.00 +0.8 0.78 ? 25% perf-profile.children.cycles-pp.f2fs_do_write_node_page
0.00 +0.8 0.79 ? 25% perf-profile.children.cycles-pp.do_write_page
0.97 +0.8 1.79 ? 16% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.00 +0.8 0.84 ? 26% perf-profile.children.cycles-pp.__submit_bio_noacct
0.00 +0.8 0.84 ? 26% perf-profile.children.cycles-pp.__submit_bio
0.00 +0.8 0.85 ? 25% perf-profile.children.cycles-pp.__submit_merged_bio
0.00 +1.3 1.30 ? 26% perf-profile.children.cycles-pp.__write_node_page
0.00 +1.5 1.53 ? 26% perf-profile.children.cycles-pp.f2fs_move_node_page
0.98 ? 2% +1.6 2.56 ? 18% perf-profile.children.cycles-pp.vfs_unlink
0.94 ? 2% +1.6 2.54 ? 18% perf-profile.children.cycles-pp.f2fs_unlink
0.00 +2.2 2.25 ? 27% perf-profile.children.cycles-pp.gc_node_segment
0.00 +2.6 2.61 ? 27% perf-profile.children.cycles-pp.do_garbage_collect
0.00 +3.1 3.12 ? 26% perf-profile.children.cycles-pp.f2fs_gc
1.30 ? 2% +3.2 4.50 ? 22% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.76 ? 4% +3.4 4.20 ? 25% perf-profile.children.cycles-pp.acpi_idle_do_entry
0.77 ? 4% +3.5 4.22 ? 25% perf-profile.children.cycles-pp.acpi_idle_enter
0.82 ? 4% +3.5 4.32 ? 25% perf-profile.children.cycles-pp.cpuidle_enter_state
0.82 ? 4% +3.5 4.34 ? 25% perf-profile.children.cycles-pp.cpuidle_enter
48.99 +3.6 52.54 perf-profile.children.cycles-pp.creat64
48.94 +3.6 52.52 perf-profile.children.cycles-pp.__x64_sys_creat
48.94 +3.6 52.53 perf-profile.children.cycles-pp.do_sys_openat2
48.87 +3.6 52.49 perf-profile.children.cycles-pp.do_filp_open
48.87 +3.6 52.49 perf-profile.children.cycles-pp.path_openat
48.64 +3.8 52.40 perf-profile.children.cycles-pp.open_last_lookups
0.88 ? 3% +3.8 4.65 ? 25% perf-profile.children.cycles-pp.cpuidle_idle_call
0.97 ? 4% +3.8 4.74 ? 24% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
0.97 ? 4% +3.8 4.74 ? 24% perf-profile.children.cycles-pp.cpu_startup_entry
0.97 ? 4% +3.8 4.74 ? 24% perf-profile.children.cycles-pp.do_idle
3.61 ? 3% +9.6 13.18 ? 21% perf-profile.children.cycles-pp.lookup_open
3.54 ? 3% +9.6 13.14 ? 21% perf-profile.children.cycles-pp.f2fs_create
0.18 ? 2% +10.7 10.85 ? 26% perf-profile.children.cycles-pp.f2fs_balance_fs
3.65 ? 5% +11.0 14.63 ? 22% perf-profile.children.cycles-pp.rwsem_spin_on_owner
86.80 -13.0 73.78 ? 6% perf-profile.self.cycles-pp.osq_lock
0.67 ? 3% -0.4 0.27 ? 21% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.40 -0.2 0.16 ? 20% perf-profile.self.cycles-pp.f2fs_find_target_dentry
0.55 -0.2 0.37 ? 4% perf-profile.self.cycles-pp._raw_spin_lock
0.12 ? 20% -0.1 0.05 ? 51% perf-profile.self.cycles-pp.xas_start
0.08 ? 6% -0.1 0.03 ?100% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.08 -0.0 0.03 ? 70% perf-profile.self.cycles-pp.__list_add_valid
0.08 -0.0 0.04 ? 45% perf-profile.self.cycles-pp.__slab_free
0.08 ? 4% -0.0 0.06 ? 7% perf-profile.self.cycles-pp.down_write
0.07 ? 8% +0.0 0.10 ? 9% perf-profile.self.cycles-pp.update_load_avg
0.08 ? 6% +0.0 0.10 ? 9% perf-profile.self.cycles-pp.__might_resched
0.11 ? 4% +0.0 0.16 ? 12% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.11 ? 6% +0.0 0.16 ? 12% perf-profile.self.cycles-pp.xas_load
0.08 ? 6% +0.0 0.12 ? 14% perf-profile.self.cycles-pp.up_read
0.08 ? 4% +0.1 0.14 ? 10% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.07 ? 23% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.1 0.08 ? 20% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.09 ? 29% perf-profile.self.cycles-pp.brd_do_bvec
0.02 ?141% +0.1 0.12 ? 16% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.10 ? 18% perf-profile.self.cycles-pp.menu_select
0.08 ? 7% +0.2 0.33 ? 42% perf-profile.self.cycles-pp.ktime_get
0.00 +0.3 0.30 ? 25% perf-profile.self.cycles-pp.copy_to_brd
0.40 ? 5% +2.3 2.71 ? 26% perf-profile.self.cycles-pp.acpi_idle_do_entry
3.61 ? 4% +10.9 14.55 ? 22% perf-profile.self.cycles-pp.rwsem_spin_on_owner
***************************************************************************************************
lkp-icl-2sp1: 96 threads 2 sockets Ice Lake with 256G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
filesystem/gcc-11/performance/1HDD/f2fs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp1/open/stress-ng/60s/0xb000280
commit:
86aef22d0f ("f2fs: fix to do sanity check on total_data_blocks")
b11c83553a ("f2fs: fix deadloop in foreground GC")
86aef22d0f7cc71d b11c83553a02b58ee0934cb80c8
---------------- ---------------------------
%stddev %change %stddev
\ | \
2070616 -49.0% 1056747 ? 2% stress-ng.open.ops
34470 -49.0% 17566 ? 2% stress-ng.open.ops_per_sec
6283577 -13.5% 5434177 stress-ng.time.file_system_outputs
11719 -8.2% 10752 stress-ng.time.minor_page_faults
187.17 ? 2% -57.1% 80.33 ? 8% stress-ng.time.percent_of_cpu_this_job_got
121.23 ? 2% -58.0% 50.92 ? 8% stress-ng.time.system_time
193520 ? 4% -40.5% 115235 ? 7% stress-ng.time.voluntary_context_switches
9030 ? 36% -82.9% 1548 ?120% numa-meminfo.node0.Inactive(file)
4215 ? 10% -9.1% 3833 ? 7% sched_debug.cpu.nr_switches.avg
2258 ? 38% -82.9% 386.67 ?120% numa-vmstat.node0.nr_inactive_file
2258 ? 38% -82.9% 386.67 ?120% numa-vmstat.node0.nr_zone_inactive_file
101.83 ? 9% -37.2% 64.00 ? 14% turbostat.Avg_MHz
2137 ? 2% -23.0% 1646 ? 3% turbostat.Bzy_MHz
0.70 +0.1 0.84 mpstat.cpu.all.iowait%
1.99 ? 2% -1.1 0.89 ? 8% mpstat.cpu.all.sys%
0.05 ? 3% -0.0 0.04 mpstat.cpu.all.usr%
49704 -16.6% 41465 vmstat.io.bo
5704780 -26.7% 4183757 vmstat.memory.cache
7807 ? 3% -30.2% 5449 ? 3% vmstat.system.cs
1606611 -46.6% 858138 ? 2% meminfo.Active
6157 ? 2% -18.0% 5051 meminfo.Active(anon)
1600453 -46.7% 853085 ? 2% meminfo.Active(file)
3993676 -19.0% 3234808 meminfo.Cached
15911 -65.4% 5500 ? 8% meminfo.Inactive(file)
1738696 -44.1% 971973 meminfo.KReclaimable
8023709 -21.9% 6265599 meminfo.Memused
1738696 -44.1% 971973 meminfo.SReclaimable
729042 -31.1% 502656 meminfo.SUnreclaim
2467739 -40.2% 1474629 meminfo.Slab
3343 ? 10% -38.4% 2059 ? 26% meminfo.Writeback
8254599 -22.9% 6364707 meminfo.max_used_kB
1535 ? 2% -17.8% 1262 proc-vmstat.nr_active_anon
400479 -46.8% 213146 ? 2% proc-vmstat.nr_active_file
858018 -18.9% 696225 proc-vmstat.nr_dirtied
999140 -19.0% 808836 proc-vmstat.nr_file_pages
4060 ? 3% -66.1% 1376 ? 8% proc-vmstat.nr_inactive_file
6478 -4.5% 6188 proc-vmstat.nr_shmem
435045 -44.1% 242975 proc-vmstat.nr_slab_reclaimable
182411 -31.1% 125636 proc-vmstat.nr_slab_unreclaimable
818.00 ? 6% -34.8% 533.33 ? 30% proc-vmstat.nr_writeback
857410 -18.8% 696192 proc-vmstat.nr_written
1535 ? 2% -17.8% 1262 proc-vmstat.nr_zone_active_anon
400479 -46.8% 213146 ? 2% proc-vmstat.nr_zone_active_file
4060 ? 3% -66.1% 1376 ? 8% proc-vmstat.nr_zone_inactive_file
4914 ? 2% -5.2% 4660 ? 4% proc-vmstat.nr_zone_write_pending
1449994 -34.5% 949373 proc-vmstat.numa_hit
1363144 -36.7% 862660 proc-vmstat.numa_local
258854 -88.7% 29230 ? 2% proc-vmstat.pgactivate
1454927 -34.4% 953990 proc-vmstat.pgalloc_normal
356400 -2.0% 349319 proc-vmstat.pgfault
1132979 -20.6% 899950 proc-vmstat.pgfree
7204 ? 15% -44.3% 4010 ? 25% proc-vmstat.pgpgin
3429646 -18.8% 2784945 proc-vmstat.pgpgout
25629 -1.6% 25207 proc-vmstat.pgreuse
6.399e+08 ? 2% -39.0% 3.903e+08 ? 2% perf-stat.i.branch-instructions
8174174 ? 6% -50.3% 4061607 ? 21% perf-stat.i.cache-misses
7733 ? 3% -40.5% 4602 ? 16% perf-stat.i.context-switches
9.249e+09 ? 7% -43.3% 5.241e+09 ? 14% perf-stat.i.cpu-cycles
118.65 -3.7% 114.23 perf-stat.i.cpu-migrations
8.274e+08 -38.9% 5.058e+08 perf-stat.i.dTLB-loads
3.905e+08 -32.2% 2.647e+08 perf-stat.i.dTLB-stores
3.102e+09 ? 2% -38.8% 1.898e+09 ? 2% perf-stat.i.instructions
0.10 ? 7% -43.3% 0.05 ? 14% perf-stat.i.metric.GHz
19.57 -38.0% 12.14 perf-stat.i.metric.M/sec
912084 ? 10% -54.0% 419574 ? 37% perf-stat.i.node-load-misses
658862 ? 11% -46.9% 349786 ? 16% perf-stat.i.node-loads
754069 ? 12% -52.1% 361284 ? 35% perf-stat.i.node-store-misses
2021167 ? 3% -42.9% 1154611 ? 5% perf-stat.i.node-stores
6.303e+08 ? 2% -39.0% 3.845e+08 perf-stat.ps.branch-instructions
8046224 ? 6% -50.3% 4001758 ? 21% perf-stat.ps.cache-misses
7605 ? 3% -40.4% 4530 ? 16% perf-stat.ps.context-switches
9.11e+09 ? 7% -43.3% 5.164e+09 ? 14% perf-stat.ps.cpu-cycles
116.91 -3.7% 112.54 perf-stat.ps.cpu-migrations
8.149e+08 -38.8% 4.984e+08 perf-stat.ps.dTLB-loads
3.846e+08 -32.2% 2.608e+08 perf-stat.ps.dTLB-stores
3.055e+09 ? 2% -38.8% 1.871e+09 ? 2% perf-stat.ps.instructions
897623 ? 10% -53.9% 413374 ? 37% perf-stat.ps.node-load-misses
648122 ? 11% -46.9% 344345 ? 16% perf-stat.ps.node-loads
741956 ? 12% -52.1% 355756 ? 35% perf-stat.ps.node-store-misses
1990856 ? 3% -42.8% 1138407 ? 5% perf-stat.ps.node-stores
2.03e+11 ? 2% -40.2% 1.214e+11 ? 2% perf-stat.total.instructions
24.38 ? 7% -9.3 15.09 ? 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
24.40 ? 7% -9.3 15.12 ? 14% perf-profile.calltrace.cycles-pp.unlink
24.37 ? 7% -9.3 15.09 ? 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
24.35 ? 7% -9.3 15.07 ? 14% perf-profile.calltrace.cycles-pp.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
24.31 ? 7% -9.3 15.04 ? 14% perf-profile.calltrace.cycles-pp.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
22.45 ? 7% -8.7 13.78 ? 14% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64
22.48 ? 7% -8.6 13.87 ? 14% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.18 ? 7% -7.7 11.49 ? 14% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
19.50 ? 6% -7.4 12.08 ? 14% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open
19.53 ? 6% -7.4 12.16 ? 13% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
16.62 ? 7% -6.5 10.16 ? 14% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
20.01 ? 6% -6.5 13.55 ? 12% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.98 ? 6% -6.4 13.54 ? 12% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
18.46 ? 6% -6.0 12.43 ? 13% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
16.99 ? 6% -5.6 11.38 ? 12% perf-profile.calltrace.cycles-pp.openat64
16.97 ? 6% -5.6 11.36 ? 12% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.openat64
16.96 ? 6% -5.6 11.36 ? 12% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.openat64
16.95 ? 6% -5.6 11.35 ? 12% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.openat64
16.94 ? 6% -5.6 11.34 ? 12% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.openat64
10.66 ? 6% -3.4 7.27 ? 14% perf-profile.calltrace.cycles-pp.syscall
10.54 ? 7% -3.4 7.17 ? 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.syscall
10.52 ? 6% -3.4 7.16 ? 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
8.64 ? 7% -2.9 5.74 ? 15% perf-profile.calltrace.cycles-pp.do_sys_openat2.__do_sys_openat2.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
8.65 ? 7% -2.9 5.75 ? 15% perf-profile.calltrace.cycles-pp.__do_sys_openat2.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
8.61 ? 7% -2.9 5.71 ? 15% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__do_sys_openat2.do_syscall_64
8.61 ? 7% -2.9 5.72 ? 15% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__do_sys_openat2.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.50 ? 7% -2.9 5.62 ? 15% perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__do_sys_openat2
6.57 ? 8% -2.6 3.95 ? 14% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.vfs_utimes.do_utimes.__x64_sys_utimensat
6.58 ? 8% -2.6 3.97 ? 14% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64
6.83 ? 8% -2.6 4.26 ? 14% perf-profile.calltrace.cycles-pp.futimes
6.78 ? 8% -2.6 4.22 ? 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.futimes
6.77 ? 8% -2.6 4.22 ? 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.futimes
6.74 ? 8% -2.5 4.20 ? 14% perf-profile.calltrace.cycles-pp.__x64_sys_utimensat.do_syscall_64.entry_SYSCALL_64_after_hwframe.futimes
6.73 ? 8% -2.5 4.19 ? 14% perf-profile.calltrace.cycles-pp.do_utimes.__x64_sys_utimensat.do_syscall_64.entry_SYSCALL_64_after_hwframe.futimes
6.71 ? 8% -2.5 4.18 ? 14% perf-profile.calltrace.cycles-pp.vfs_utimes.do_utimes.__x64_sys_utimensat.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.60 ? 8% -2.3 3.32 ? 15% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.vfs_utimes.do_utimes
6.84 ? 6% -1.4 5.44 ? 13% perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
3.53 ? 7% -1.0 2.51 ? 13% perf-profile.calltrace.cycles-pp.open64
2.80 ? 4% -1.0 1.78 ? 24% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.open_last_lookups.path_openat
3.44 ? 7% -1.0 2.44 ? 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
3.42 ? 7% -1.0 2.43 ? 13% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.38 ? 7% -1.0 2.40 ? 13% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.35 ? 7% -1.0 2.39 ? 13% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.19 ? 9% -1.0 2.24 ? 16% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.__x64_sys_unlink
1.14 ? 15% -0.9 0.28 ?100% perf-profile.calltrace.cycles-pp.f2fs_write_checkpoint.__checkpoint_and_complete_reqs.issue_checkpoint_thread.kthread.ret_from_fork
1.14 ? 15% -0.9 0.28 ?100% perf-profile.calltrace.cycles-pp.__checkpoint_and_complete_reqs.issue_checkpoint_thread.kthread.ret_from_fork
1.14 ? 15% -0.9 0.28 ?100% perf-profile.calltrace.cycles-pp.issue_checkpoint_thread.kthread.ret_from_fork
1.78 ? 9% -0.9 0.92 ? 26% perf-profile.calltrace.cycles-pp.f2fs_do_add_link.f2fs_create.lookup_open.open_last_lookups.path_openat
1.50 ? 24% -0.7 0.77 ? 12% perf-profile.calltrace.cycles-pp.f2fs_add_dentry.f2fs_do_add_link.f2fs_create.lookup_open.open_last_lookups
1.50 ? 24% -0.7 0.77 ? 12% perf-profile.calltrace.cycles-pp.f2fs_add_regular_entry.f2fs_add_dentry.f2fs_do_add_link.f2fs_create.lookup_open
1.48 ? 10% -0.6 0.90 ? 16% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.43 ? 10% -0.6 0.86 ? 16% perf-profile.calltrace.cycles-pp.f2fs_unlink.vfs_unlink.do_unlinkat.__x64_sys_unlink.do_syscall_64
0.74 ? 7% -0.4 0.29 ?100% perf-profile.calltrace.cycles-pp.f2fs_init_inode_metadata.f2fs_add_regular_entry.f2fs_add_dentry.f2fs_do_add_link.f2fs_create
1.54 ? 12% -0.4 1.10 ? 8% perf-profile.calltrace.cycles-pp.ret_from_fork
1.54 ? 12% -0.4 1.10 ? 8% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.56 ? 7% -0.4 1.14 ? 11% perf-profile.calltrace.cycles-pp.__x64_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
1.53 ? 6% -0.4 1.12 ? 11% perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
0.95 ? 10% -0.4 0.54 ? 45% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.vfs_utimes.do_utimes
1.33 ? 6% -0.4 0.96 ? 11% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.32 ? 6% -0.4 0.96 ? 10% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_open.do_syscall_64
1.07 ? 9% -0.3 0.78 ? 14% perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 ? 10% -0.3 0.76 ? 14% perf-profile.calltrace.cycles-pp.seq_read_iter.seq_read.vfs_read.ksys_read.do_syscall_64
1.19 ? 9% -0.3 0.91 ? 13% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.81 ? 6% -0.3 0.54 ? 45% perf-profile.calltrace.cycles-pp.proc_fdinfo_instantiate.proc_lookupfd_common.lookup_open.open_last_lookups.path_openat
0.99 ? 7% -0.3 0.72 ? 13% perf-profile.calltrace.cycles-pp.proc_lookupfd_common.lookup_open.open_last_lookups.path_openat.do_filp_open
0.78 ? 7% -0.3 0.51 ? 44% perf-profile.calltrace.cycles-pp.proc_pid_make_inode.proc_fdinfo_instantiate.proc_lookupfd_common.lookup_open.open_last_lookups
1.26 ? 8% -0.2 1.02 ? 17% perf-profile.calltrace.cycles-pp.__close
1.03 ? 8% -0.2 0.83 ? 15% perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.00 ? 8% -0.2 0.81 ? 15% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.56 ? 48% +0.5 1.10 ? 22% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.48 ? 45% +0.5 1.01 ? 10% perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.83 ? 20% +0.6 1.41 ? 16% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle
0.48 ? 47% +0.6 1.08 ? 7% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +0.7 0.67 ? 12% perf-profile.calltrace.cycles-pp.native_apic_msr_eoi_write.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.32 ? 15% +0.7 2.03 ? 20% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.14 ? 17% +0.8 1.96 ? 11% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.cpuidle_idle_call.do_idle.cpu_startup_entry
1.60 ? 16% +0.9 2.50 ? 20% perf-profile.calltrace.cycles-pp.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
2.37 ? 23% +1.8 4.20 ? 26% perf-profile.calltrace.cycles-pp.menu_select.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
4.69 ? 21% +3.8 8.47 ? 18% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
5.21 ? 19% +4.2 9.44 ? 16% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
8.03 ? 18% +5.9 13.90 ? 17% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
10.89 ? 13% +8.2 19.06 ? 11% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
15.64 ? 14% +10.5 26.15 ? 7% perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
15.80 ? 14% +10.7 26.47 ? 7% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
27.42 ? 14% +19.6 47.01 ? 9% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
28.02 ? 13% +19.8 47.78 ? 8% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
30.94 ? 14% +22.0 52.95 ? 10% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
31.37 ? 14% +22.3 53.70 ? 10% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.secondary_startup_64_no_verify
31.44 ? 14% +22.4 53.83 ? 10% perf-profile.calltrace.cycles-pp.cpu_startup_entry.secondary_startup_64_no_verify
31.82 ? 14% +22.8 54.58 ? 10% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
65.45 ? 7% -22.2 43.22 ? 13% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
65.36 ? 7% -22.2 43.17 ? 13% perf-profile.children.cycles-pp.do_syscall_64
48.53 ? 7% -18.7 29.85 ? 14% perf-profile.children.cycles-pp.rwsem_optimistic_spin
48.58 ? 7% -18.5 30.04 ? 13% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
41.42 ? 7% -16.4 24.98 ? 14% perf-profile.children.cycles-pp.osq_lock
30.54 ? 6% -9.8 20.70 ? 13% perf-profile.children.cycles-pp.do_sys_openat2
30.01 ? 6% -9.7 20.32 ? 13% perf-profile.children.cycles-pp.do_filp_open
29.99 ? 6% -9.7 20.31 ? 13% perf-profile.children.cycles-pp.path_openat
24.41 ? 7% -9.3 15.12 ? 14% perf-profile.children.cycles-pp.unlink
24.35 ? 7% -9.3 15.08 ? 14% perf-profile.children.cycles-pp.__x64_sys_unlink
24.31 ? 7% -9.3 15.04 ? 14% perf-profile.children.cycles-pp.do_unlinkat
26.98 ? 6% -8.9 18.07 ? 13% perf-profile.children.cycles-pp.open_last_lookups
20.39 ? 6% -6.5 13.87 ? 12% perf-profile.children.cycles-pp.__x64_sys_openat
17.00 ? 6% -5.6 11.39 ? 12% perf-profile.children.cycles-pp.openat64
10.70 ? 6% -3.4 7.30 ? 14% perf-profile.children.cycles-pp.syscall
8.65 ? 7% -2.9 5.75 ? 15% perf-profile.children.cycles-pp.__do_sys_openat2
6.84 ? 8% -2.6 4.26 ? 14% perf-profile.children.cycles-pp.futimes
6.75 ? 8% -2.6 4.20 ? 14% perf-profile.children.cycles-pp.__x64_sys_utimensat
6.88 ? 8% -2.5 4.36 ? 14% perf-profile.children.cycles-pp.do_utimes
6.78 ? 8% -2.5 4.28 ? 14% perf-profile.children.cycles-pp.vfs_utimes
6.94 ? 7% -2.1 4.88 ? 12% perf-profile.children.cycles-pp.rwsem_spin_on_owner
2.09 ? 10% -1.5 0.61 ? 13% perf-profile.children.cycles-pp.__f2fs_find_entry
6.84 ? 6% -1.4 5.45 ? 13% perf-profile.children.cycles-pp.lookup_open
3.54 ? 7% -1.0 2.53 ? 13% perf-profile.children.cycles-pp.open64
1.15 ? 16% -1.0 0.19 ? 16% perf-profile.children.cycles-pp.f2fs_find_target_dentry
1.52 ? 8% -0.9 0.63 ? 15% perf-profile.children.cycles-pp.f2fs_lookup
0.84 ? 16% -0.7 0.17 ? 25% perf-profile.children.cycles-pp.f2fs_find_entry
1.14 ? 15% -0.6 0.50 ? 13% perf-profile.children.cycles-pp.__checkpoint_and_complete_reqs
1.14 ? 15% -0.6 0.50 ? 13% perf-profile.children.cycles-pp.issue_checkpoint_thread
1.48 ? 9% -0.6 0.90 ? 16% perf-profile.children.cycles-pp.vfs_unlink
1.43 ? 9% -0.6 0.86 ? 16% perf-profile.children.cycles-pp.f2fs_unlink
1.78 ? 9% -0.5 1.24 ? 13% perf-profile.children.cycles-pp.f2fs_do_add_link
1.29 ? 9% -0.5 0.76 ? 13% perf-profile.children.cycles-pp.pagecache_get_page
1.66 ? 9% -0.5 1.14 ? 14% perf-profile.children.cycles-pp.f2fs_add_dentry
1.66 ? 9% -0.5 1.14 ? 14% perf-profile.children.cycles-pp.f2fs_add_regular_entry
1.25 ? 9% -0.5 0.74 ? 14% perf-profile.children.cycles-pp.__filemap_get_folio
0.85 ? 6% -0.5 0.38 ? 16% perf-profile.children.cycles-pp.f2fs_find_data_page
1.54 ? 12% -0.4 1.10 ? 8% perf-profile.children.cycles-pp.kthread
1.54 ? 12% -0.4 1.10 ? 8% perf-profile.children.cycles-pp.ret_from_fork
1.56 ? 7% -0.4 1.14 ? 12% perf-profile.children.cycles-pp.__x64_sys_open
1.10 ? 10% -0.3 0.78 ? 15% perf-profile.children.cycles-pp.f2fs_init_inode_metadata
1.04 ? 9% -0.3 0.74 ? 15% perf-profile.children.cycles-pp.f2fs_new_inode_page
1.04 ? 9% -0.3 0.74 ? 15% perf-profile.children.cycles-pp.f2fs_new_node_page
0.98 ? 6% -0.3 0.69 ? 14% perf-profile.children.cycles-pp.kmem_cache_alloc_lru
1.07 ? 9% -0.3 0.78 ? 14% perf-profile.children.cycles-pp.seq_read
1.10 ? 5% -0.3 0.82 ? 17% perf-profile.children.cycles-pp.new_inode
1.09 ? 9% -0.3 0.80 ? 14% perf-profile.children.cycles-pp.seq_read_iter
0.99 ? 7% -0.3 0.72 ? 13% perf-profile.children.cycles-pp.proc_lookupfd_common
1.21 ? 9% -0.3 0.94 ? 13% perf-profile.children.cycles-pp.vfs_read
1.28 ? 7% -0.3 1.02 ? 15% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.82 ? 7% -0.2 0.58 ? 15% perf-profile.children.cycles-pp.do_open
1.28 ? 8% -0.2 1.04 ? 17% perf-profile.children.cycles-pp.__close
1.16 ? 7% -0.2 0.92 ? 15% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.73 ? 6% -0.2 0.50 ? 16% perf-profile.children.cycles-pp.do_dentry_open
1.13 ? 8% -0.2 0.90 ? 15% perf-profile.children.cycles-pp.exit_to_user_mode_loop
1.10 ? 8% -0.2 0.88 ? 14% perf-profile.children.cycles-pp.task_work_run
0.85 ? 9% -0.2 0.64 ? 17% perf-profile.children.cycles-pp.alloc_inode
0.74 ? 5% -0.2 0.53 ? 11% perf-profile.children.cycles-pp.d_alloc_parallel
0.91 ? 9% -0.2 0.70 ? 14% perf-profile.children.cycles-pp.__alloc_file
0.81 ? 6% -0.2 0.61 ? 14% perf-profile.children.cycles-pp.proc_fdinfo_instantiate
0.78 ? 7% -0.2 0.58 ? 14% perf-profile.children.cycles-pp.proc_pid_make_inode
0.83 ? 8% -0.2 0.64 ? 14% perf-profile.children.cycles-pp.kmem_cache_alloc
0.63 ? 4% -0.2 0.44 ? 10% perf-profile.children.cycles-pp.d_alloc
0.76 ? 5% -0.2 0.58 ? 20% perf-profile.children.cycles-pp.f2fs_new_inode
0.80 ? 7% -0.2 0.63 ? 15% perf-profile.children.cycles-pp.dput
0.59 ? 10% -0.2 0.42 ? 22% perf-profile.children.cycles-pp.block_operations
0.55 ? 8% -0.2 0.39 ? 19% perf-profile.children.cycles-pp.___slab_alloc
0.46 ? 9% -0.2 0.30 ? 21% perf-profile.children.cycles-pp.allocate_slab
0.60 ? 12% -0.2 0.45 ? 17% perf-profile.children.cycles-pp.is_extension_exist
0.52 ? 12% -0.1 0.38 ? 12% perf-profile.children.cycles-pp.obj_cgroup_charge
0.51 ? 6% -0.1 0.37 ? 22% perf-profile.children.cycles-pp.try_to_unlazy
0.37 ? 10% -0.1 0.24 ? 19% perf-profile.children.cycles-pp.path_lookupat
0.52 ? 12% -0.1 0.38 ? 16% perf-profile.children.cycles-pp.__kmalloc_node
0.48 ? 4% -0.1 0.35 ? 22% perf-profile.children.cycles-pp.__legitimize_path
0.36 ? 4% -0.1 0.24 ? 11% perf-profile.children.cycles-pp.__d_alloc
0.40 ? 10% -0.1 0.28 ? 18% perf-profile.children.cycles-pp.seq_show
0.44 ? 10% -0.1 0.33 ? 17% perf-profile.children.cycles-pp.chdir
0.42 ? 13% -0.1 0.30 ? 19% perf-profile.children.cycles-pp.proc_alloc_inode
0.29 ? 10% -0.1 0.18 ? 27% perf-profile.children.cycles-pp.f2fs_update_parent_metadata
0.35 ? 8% -0.1 0.24 ? 22% perf-profile.children.cycles-pp.complete_walk
0.38 ? 11% -0.1 0.27 ? 15% perf-profile.children.cycles-pp.__x64_sys_chdir
0.38 ? 6% -0.1 0.27 ? 18% perf-profile.children.cycles-pp.getname_flags
0.38 ? 10% -0.1 0.27 ? 14% perf-profile.children.cycles-pp.mod_objcg_state
0.33 ? 12% -0.1 0.22 ? 18% perf-profile.children.cycles-pp.user_path_at_empty
0.44 ? 10% -0.1 0.34 ? 13% perf-profile.children.cycles-pp.link_path_walk
0.41 ? 3% -0.1 0.31 ? 22% perf-profile.children.cycles-pp.lockref_get_not_dead
0.32 ? 12% -0.1 0.22 ? 21% perf-profile.children.cycles-pp.seq_printf
0.29 ? 12% -0.1 0.19 ? 23% perf-profile.children.cycles-pp.setup_object
0.40 ? 5% -0.1 0.31 ? 15% perf-profile.children.cycles-pp.vfs_tmpfile
0.32 ? 12% -0.1 0.22 ? 20% perf-profile.children.cycles-pp.vsnprintf
0.17 ? 15% -0.1 0.08 ? 16% perf-profile.children.cycles-pp.__vsnprintf_chk
0.36 ? 12% -0.1 0.27 ? 15% perf-profile.children.cycles-pp.security_file_alloc
0.45 ? 7% -0.1 0.36 ? 10% perf-profile.children.cycles-pp.memcg_slab_free_hook
0.35 ? 7% -0.1 0.26 ? 8% perf-profile.children.cycles-pp.slab_pre_alloc_hook
0.34 ? 8% -0.1 0.26 ? 13% perf-profile.children.cycles-pp.page_counter_charge
0.31 ? 9% -0.1 0.23 ? 21% perf-profile.children.cycles-pp.lockref_put_return
0.24 ? 12% -0.1 0.17 ? 20% perf-profile.children.cycles-pp.filename_lookup
0.12 ? 23% -0.1 0.05 ? 75% perf-profile.children.cycles-pp.__writeback_single_inode
0.20 ? 13% -0.1 0.12 ? 22% perf-profile.children.cycles-pp.single_open
0.20 ? 5% -0.1 0.13 ? 21% perf-profile.children.cycles-pp.d_splice_alias
0.28 ? 7% -0.1 0.21 ? 15% perf-profile.children.cycles-pp.strlen
0.31 ? 6% -0.1 0.24 ? 15% perf-profile.children.cycles-pp.shmem_tmpfile
0.20 ? 10% -0.1 0.14 ? 14% perf-profile.children.cycles-pp.terminate_walk
0.24 ? 9% -0.1 0.17 ? 25% perf-profile.children.cycles-pp.strncpy_from_user
0.25 ? 12% -0.1 0.19 ? 17% perf-profile.children.cycles-pp.xas_load
0.25 ? 9% -0.1 0.19 ? 6% perf-profile.children.cycles-pp.set_node_addr
0.13 ? 18% -0.1 0.07 ? 15% perf-profile.children.cycles-pp.vfprintf
0.18 ? 7% -0.1 0.12 ? 20% perf-profile.children.cycles-pp.__d_add
0.12 ? 22% -0.1 0.06 ? 71% perf-profile.children.cycles-pp.seq_open
0.24 ? 10% -0.1 0.18 ? 18% perf-profile.children.cycles-pp.f2fs_mark_inode_dirty_sync
0.24 ? 11% -0.1 0.18 ? 16% perf-profile.children.cycles-pp.f2fs_inode_dirtied
0.09 ? 14% -0.1 0.04 ?101% perf-profile.children.cycles-pp.d_lookup
0.24 ? 11% -0.1 0.18 ? 13% perf-profile.children.cycles-pp.walk_component
0.10 ? 14% -0.1 0.05 ? 72% perf-profile.children.cycles-pp.alloc_fd
0.08 ? 16% -0.1 0.03 ?100% perf-profile.children.cycles-pp.__d_lookup
0.08 ? 14% -0.1 0.03 ?101% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.10 ? 18% -0.0 0.05 ? 49% perf-profile.children.cycles-pp.f2fs_get_new_data_page
0.12 ? 16% -0.0 0.07 ? 21% perf-profile.children.cycles-pp.number
0.20 ? 10% -0.0 0.16 ? 13% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.07 ? 13% -0.0 0.03 ?100% perf-profile.children.cycles-pp.xas_start
0.10 ? 18% -0.0 0.06 ? 50% perf-profile.children.cycles-pp.getcwd
0.21 ? 7% -0.0 0.18 ? 11% perf-profile.children.cycles-pp.down_write
0.08 ? 8% +0.0 0.12 ? 17% perf-profile.children.cycles-pp._find_next_bit
0.14 ? 9% +0.0 0.19 ? 9% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.04 ? 73% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.__libc_fork
0.06 ? 11% +0.1 0.11 ? 25% perf-profile.children.cycles-pp.f2fs_allocate_data_block
0.05 ? 75% +0.1 0.10 ? 16% perf-profile.children.cycles-pp.__libc_start_main
0.05 ? 75% +0.1 0.10 ? 16% perf-profile.children.cycles-pp.main
0.05 ? 75% +0.1 0.10 ? 16% perf-profile.children.cycles-pp.run_builtin
0.05 ? 49% +0.1 0.10 ? 27% perf-profile.children.cycles-pp.mmput
0.05 ? 49% +0.1 0.10 ? 27% perf-profile.children.cycles-pp.exit_mmap
0.06 ? 46% +0.1 0.11 ? 17% perf-profile.children.cycles-pp.load_elf_binary
0.06 ? 46% +0.1 0.11 ? 18% perf-profile.children.cycles-pp.exec_binprm
0.06 ? 46% +0.1 0.11 ? 18% perf-profile.children.cycles-pp.search_binary_handler
0.06 ? 17% +0.1 0.12 ? 20% perf-profile.children.cycles-pp.cpumask_next_and
0.10 ? 29% +0.1 0.16 ? 37% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.06 ? 47% +0.1 0.12 ? 20% perf-profile.children.cycles-pp.bprm_execve
0.06 ? 7% +0.1 0.12 ? 23% perf-profile.children.cycles-pp.arch_cpu_idle_exit
0.09 ? 21% +0.1 0.15 ? 12% perf-profile.children.cycles-pp.__x64_sys_execve
0.09 ? 21% +0.1 0.15 ? 12% perf-profile.children.cycles-pp.do_execveat_common
0.04 ? 73% +0.1 0.10 ? 27% perf-profile.children.cycles-pp.do_fault
0.09 ? 21% +0.1 0.15 ? 12% perf-profile.children.cycles-pp.execve
0.04 ? 72% +0.1 0.11 ? 19% perf-profile.children.cycles-pp.sched_clock
0.02 ?141% +0.1 0.08 ? 70% perf-profile.children.cycles-pp.up_read
0.09 ? 25% +0.1 0.16 ? 16% perf-profile.children.cycles-pp.handle_mm_fault
0.09 ? 23% +0.1 0.16 ? 15% perf-profile.children.cycles-pp.__handle_mm_fault
0.08 ? 12% +0.1 0.14 ? 18% perf-profile.children.cycles-pp.note_gp_changes
0.11 ? 17% +0.1 0.18 ? 16% perf-profile.children.cycles-pp.do_write_page
0.11 ? 17% +0.1 0.18 ? 16% perf-profile.children.cycles-pp.f2fs_do_write_node_page
0.10 ? 21% +0.1 0.18 ? 17% perf-profile.children.cycles-pp.do_user_addr_fault
0.10 ? 21% +0.1 0.19 ? 18% perf-profile.children.cycles-pp.exc_page_fault
0.11 ? 18% +0.1 0.20 ? 15% perf-profile.children.cycles-pp.asm_exc_page_fault
0.11 ? 10% +0.1 0.20 ? 20% perf-profile.children.cycles-pp.call_cpuidle
0.12 ? 22% +0.1 0.23 ? 13% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.12 ? 12% +0.1 0.24 ? 15% perf-profile.children.cycles-pp.rcu_dynticks_inc
0.14 ? 21% +0.1 0.26 ? 28% perf-profile.children.cycles-pp.update_irq_load_avg
0.13 ? 23% +0.1 0.26 ? 15% perf-profile.children.cycles-pp.tick_sched_do_timer
0.12 ? 17% +0.1 0.25 ? 22% perf-profile.children.cycles-pp.notify_change
0.32 ? 4% +0.1 0.46 ? 16% perf-profile.children.cycles-pp.__write_node_page
0.06 ? 17% +0.2 0.22 ? 25% perf-profile.children.cycles-pp.f2fs_setattr
0.28 ? 14% +0.2 0.45 ? 21% perf-profile.children.cycles-pp.update_rq_clock
0.03 ?100% +0.2 0.20 ? 48% perf-profile.children.cycles-pp.f2fs_del_fsync_node_entry
0.18 ? 16% +0.2 0.36 ? 16% perf-profile.children.cycles-pp.update_sg_lb_stats
0.12 ? 37% +0.2 0.31 ? 36% perf-profile.children.cycles-pp.f2fs_write_end_io
0.12 ? 37% +0.2 0.31 ? 36% perf-profile.children.cycles-pp.blk_update_request
0.13 ? 33% +0.2 0.32 ? 35% perf-profile.children.cycles-pp.scsi_io_completion
0.13 ? 33% +0.2 0.32 ? 35% perf-profile.children.cycles-pp.scsi_end_request
0.29 ? 27% +0.2 0.48 ? 24% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.13 ? 34% +0.2 0.33 ? 33% perf-profile.children.cycles-pp.blk_complete_reqs
0.24 ? 33% +0.2 0.44 ? 13% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.25 ? 5% +0.2 0.46 ? 11% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.27 ? 21% +0.2 0.49 ? 14% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.25 ? 18% +0.2 0.48 ? 22% perf-profile.children.cycles-pp.update_sd_lb_stats
0.33 ? 3% +0.2 0.57 ? 14% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.27 ? 20% +0.2 0.52 ? 22% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.2 0.25 ? 27% perf-profile.children.cycles-pp.f2fs_move_node_page
0.35 ? 22% +0.3 0.61 ? 16% perf-profile.children.cycles-pp.rcu_idle_exit
0.28 ? 9% +0.3 0.54 ? 10% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.36 ? 17% +0.3 0.63 ? 17% perf-profile.children.cycles-pp.irqtime_account_irq
0.32 ? 23% +0.3 0.61 ? 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.35 ? 25% +0.3 0.68 ? 31% perf-profile.children.cycles-pp.load_balance
0.53 ? 27% +0.3 0.86 ? 10% perf-profile.children.cycles-pp.do_checkpoint
0.46 ? 14% +0.4 0.81 ? 12% perf-profile.children.cycles-pp.native_sched_clock
0.46 ? 7% +0.4 0.82 ? 11% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.00 +0.4 0.40 ? 27% perf-profile.children.cycles-pp.gc_node_segment
0.00 +0.4 0.40 ? 27% perf-profile.children.cycles-pp.do_garbage_collect
0.56 ? 13% +0.4 1.00 ? 9% perf-profile.children.cycles-pp.sched_clock_cpu
0.66 ? 18% +0.5 1.12 ? 21% perf-profile.children.cycles-pp.rebalance_domains
0.59 ? 10% +0.5 1.07 ? 10% perf-profile.children.cycles-pp.lapic_next_deadline
0.53 ? 7% +0.5 1.03 ? 9% perf-profile.children.cycles-pp.read_tsc
0.60 ? 13% +0.5 1.12 ? 7% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.76 ? 9% +0.6 1.35 ? 12% perf-profile.children.cycles-pp.native_irq_return_iret
0.87 ? 19% +0.6 1.47 ? 15% perf-profile.children.cycles-pp.tick_nohz_next_event
1.16 ? 17% +0.9 2.02 ? 11% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.50 ? 15% +0.9 2.40 ? 17% perf-profile.children.cycles-pp.__softirqentry_text_start
1.74 ? 18% +1.0 2.72 ? 18% perf-profile.children.cycles-pp.__irq_exit_rcu
1.26 ? 19% +1.0 2.28 ? 24% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +1.3 1.26 ? 17% perf-profile.children.cycles-pp.f2fs_gc
2.42 ? 23% +1.9 4.30 ? 26% perf-profile.children.cycles-pp.menu_select
4.93 ? 20% +3.8 8.72 ? 18% perf-profile.children.cycles-pp.hrtimer_interrupt
5.46 ? 18% +4.2 9.67 ? 16% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
8.35 ? 17% +5.9 14.26 ? 17% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
9.96 ? 14% +7.2 17.18 ? 14% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
15.83 ? 14% +10.6 26.47 ? 8% perf-profile.children.cycles-pp.mwait_idle_with_hints
15.99 ? 14% +10.8 26.78 ? 7% perf-profile.children.cycles-pp.intel_idle
28.31 ? 13% +20.0 48.33 ? 9% perf-profile.children.cycles-pp.cpuidle_enter_state
28.39 ? 13% +20.1 48.48 ? 9% perf-profile.children.cycles-pp.cpuidle_enter
31.38 ? 14% +22.4 53.80 ? 10% perf-profile.children.cycles-pp.cpuidle_idle_call
31.82 ? 14% +22.8 54.58 ? 10% perf-profile.children.cycles-pp.do_idle
31.82 ? 14% +22.8 54.58 ? 10% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
31.82 ? 14% +22.8 54.58 ? 10% perf-profile.children.cycles-pp.cpu_startup_entry
41.29 ? 7% -16.4 24.90 ? 14% perf-profile.self.cycles-pp.osq_lock
6.91 ? 7% -2.1 4.86 ? 13% perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.14 ? 16% -1.0 0.18 ? 17% perf-profile.self.cycles-pp.f2fs_find_target_dentry
0.66 ? 7% -0.4 0.32 ? 18% perf-profile.self.cycles-pp.__filemap_get_folio
0.30 ? 8% -0.1 0.21 ? 21% perf-profile.self.cycles-pp.f2fs_new_node_page
0.35 ? 3% -0.1 0.26 ? 20% perf-profile.self.cycles-pp.lockref_get_not_dead
0.31 ? 10% -0.1 0.23 ? 22% perf-profile.self.cycles-pp.lockref_put_return
0.29 ? 8% -0.1 0.21 ? 11% perf-profile.self.cycles-pp.page_counter_charge
0.27 ? 7% -0.1 0.20 ? 14% perf-profile.self.cycles-pp.strlen
0.08 ? 22% -0.1 0.03 ?100% perf-profile.self.cycles-pp.setup_object
0.12 ? 17% -0.1 0.06 ? 14% perf-profile.self.cycles-pp.vfprintf
0.10 ? 19% -0.1 0.04 ? 77% perf-profile.self.cycles-pp.number
0.22 ? 6% -0.1 0.17 ? 24% perf-profile.self.cycles-pp.kmem_cache_alloc
0.13 ? 10% -0.0 0.09 ? 19% perf-profile.self.cycles-pp.allocate_slab
0.14 ? 11% +0.0 0.18 ? 13% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.05 ? 46% +0.0 0.09 ? 17% perf-profile.self.cycles-pp.__softirqentry_text_start
0.07 ? 12% +0.0 0.12 ? 17% perf-profile.self.cycles-pp._find_next_bit
0.07 ? 28% +0.1 0.13 ? 27% perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.02 ?143% +0.1 0.10 ? 28% perf-profile.self.cycles-pp.clockevents_program_event
0.07 ? 49% +0.1 0.14 ? 26% perf-profile.self.cycles-pp.cpuidle_enter
0.01 ?223% +0.1 0.09 ? 40% perf-profile.self.cycles-pp.tick_nohz_get_sleep_length
0.07 ? 18% +0.1 0.16 ? 12% perf-profile.self.cycles-pp.sched_clock_cpu
0.10 ? 15% +0.1 0.19 ? 20% perf-profile.self.cycles-pp.call_cpuidle
0.01 ?223% +0.1 0.09 ? 11% perf-profile.self.cycles-pp.tick_sched_timer
0.10 ? 30% +0.1 0.20 ? 26% perf-profile.self.cycles-pp.tick_sched_do_timer
0.11 ? 14% +0.1 0.22 ? 15% perf-profile.self.cycles-pp.rcu_dynticks_inc
0.17 ? 29% +0.1 0.28 ? 14% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.14 ? 24% +0.1 0.26 ? 30% perf-profile.self.cycles-pp.update_irq_load_avg
0.15 ? 4% +0.1 0.27 ? 15% perf-profile.self.cycles-pp.do_idle
0.00 +0.1 0.13 ? 23% perf-profile.self.cycles-pp.f2fs_del_fsync_node_entry
0.19 ? 11% +0.1 0.32 ? 7% perf-profile.self.cycles-pp.rcu_idle_exit
0.12 ? 19% +0.1 0.26 ? 14% perf-profile.self.cycles-pp.update_sg_lb_stats
0.16 ? 13% +0.1 0.30 ? 12% perf-profile.self.cycles-pp.intel_idle
0.18 ? 13% +0.2 0.37 ? 13% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.25 ? 5% +0.2 0.46 ? 11% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.28 ? 16% +0.2 0.50 ? 8% perf-profile.self.cycles-pp.cpuidle_idle_call
0.30 ? 5% +0.2 0.54 ? 14% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.28 ? 10% +0.3 0.54 ? 11% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.49 ? 29% +0.3 0.77 ? 10% perf-profile.self.cycles-pp.do_checkpoint
0.43 ? 12% +0.3 0.77 ? 12% perf-profile.self.cycles-pp.native_sched_clock
0.46 ? 7% +0.4 0.82 ? 11% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.58 ? 10% +0.5 1.06 ? 10% perf-profile.self.cycles-pp.lapic_next_deadline
0.51 ? 6% +0.5 1.00 ? 8% perf-profile.self.cycles-pp.read_tsc
0.76 ? 9% +0.6 1.35 ? 12% perf-profile.self.cycles-pp.native_irq_return_iret
2.17 ? 5% +1.7 3.89 ? 6% perf-profile.self.cycles-pp.cpuidle_enter_state
15.83 ? 14% +10.6 26.45 ? 7% perf-profile.self.cycles-pp.mwait_idle_with_hints
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://01.org/lkp