Hello,
kernel test robot noticed a 140.4% improvement of aim7.jobs-per-min on:
commit: 7db922bae3abdf0a1db81ef7228cc0b996a0c1e3 ("md/raid1-10: submit write io directly if bitmap is not enabled")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
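For context, the patch makes raid1/raid10 issue normal write bios directly from the submitting context when no write-intent bitmap is attached, instead of queueing them for the raid1d daemon thread. A minimal sketch of that fast path, using the raid1_submit_write()/md_bitmap_enabled() helpers factored out earlier in the same series (simplified, not the verbatim diff):

	/* drivers/md/raid1-10.c (sketch, not the exact patch) */
	static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
						 blk_plug_cb_fn unplug)
	{
		struct raid1_plug_cb *plug;
		struct blk_plug_cb *cb;

		/* No bitmap to update before the write: submit the bio directly. */
		if (!md_bitmap_enabled(mddev->bitmap)) {
			raid1_submit_write(bio);
			return true;
		}

		/* Bitmap enabled: batch the bio on the plug list for delayed
		 * submission by the daemon thread. */
		cb = blk_check_plugged(unplug, mddev, sizeof(*plug));
		if (!cb)
			return false;

		plug = container_of(cb, struct raid1_plug_cb, cb);
		bio_list_add(&plug->pending, bio);
		return true;
	}

Skipping the daemon handoff removes the pending-list/conf->device_lock round trip from the write path on bitmap-less arrays, which is consistent with the raid1_write_request spinlock time dropping from ~12% to ~0 in the profile below.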
testcase: aim7
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
parameters:
disk: 4BRD_12G
md: RAID1
fs: xfs
test: sync_disk_rw
load: 300
cpufreq_governor: performance
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-12/performance/4BRD_12G/xfs/x86_64-rhel-8.3/300/RAID1/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp3/sync_disk_rw/aim7
commit:
8295efbe68 ("md/raid1-10: factor out a helper to submit normal write")
7db922bae3 ("md/raid1-10: submit write io directly if bitmap is not enabled")
8295efbe68c08004 7db922bae3abdf0a1db81ef7228
---------------- ---------------------------
%stddev %change %stddev
\ | \
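(%change is the relative delta of the second commit against the first; e.g. for the headline aim7.jobs-per-min row below: (17266 - 7183) / 7183 ~= +140.4%. The "± n%" entries are the run-to-run %stddev.)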
1.75e+10 -81.0% 3.329e+09 cpuidle..time
81895449 -58.5% 34004295 cpuidle..usage
77515 -9.5% 70161 numa-meminfo.node1.Inactive(file)
27116 ± 34% +220.5% 86894 ± 26% numa-meminfo.node1.Mapped
29.86 +15.0% 34.34 iostat.cpu.idle
44.63 -99.8% 0.09 ± 10% iostat.cpu.iowait
25.28 +157.2% 65.04 iostat.cpu.system
2124 ± 6% +247.5% 7383 ± 3% numa-vmstat.node0.nr_zone_write_pending
19395 -9.5% 17547 numa-vmstat.node1.nr_inactive_file
6786 ± 34% +220.6% 21756 ± 26% numa-vmstat.node1.nr_mapped
19394 -9.5% 17545 numa-vmstat.node1.nr_zone_inactive_file
2211 ± 5% +231.3% 7325 ± 6% numa-vmstat.node1.nr_zone_write_pending
224.67 ± 11% +193.0% 658.17 ± 9% perf-c2c.DRAM.local
2027 ± 8% +123.5% 4530 ± 8% perf-c2c.DRAM.remote
2999 ± 11% +101.2% 6036 ± 10% perf-c2c.HITM.local
1325 ± 8% +107.8% 2753 ± 8% perf-c2c.HITM.remote
4325 ± 10% +103.2% 8790 ± 9% perf-c2c.HITM.total
7183 +140.4% 17266 aim7.jobs-per-min
250.68 -58.4% 104.30 aim7.time.elapsed_time
250.68 -58.4% 104.30 aim7.time.elapsed_time.max
883140 +136.5% 2088719 aim7.time.involuntary_context_switches
5418 +15.5% 6257 aim7.time.system_time
71447015 -20.4% 56860640 aim7.time.voluntary_context_switches
29.32 +3.8 33.15 mpstat.cpu.all.idle%
44.98 -44.9 0.08 ± 10% mpstat.cpu.all.iowait%
0.93 +0.3 1.22 mpstat.cpu.all.irq%
0.08 +0.0 0.11 ± 2% mpstat.cpu.all.soft%
24.46 +40.4 64.89 mpstat.cpu.all.sys%
0.23 +0.3 0.54 mpstat.cpu.all.usr%
29.17 +16.0% 33.83 vmstat.cpu.id
24.83 +159.7% 64.50 vmstat.cpu.sy
44.17 -100.0% 0.00 vmstat.cpu.wa
296829 +131.6% 687420 vmstat.io.bo
64.83 ± 3% -99.2% 0.50 ±100% vmstat.procs.b
22.50 ± 10% +355.6% 102.50 ± 6% vmstat.procs.r
580135 +57.2% 912228 vmstat.system.cs
125548 +51.0% 189630 vmstat.system.in
1074010 +21.4% 1304145 meminfo.Active
1073325 +21.4% 1303462 meminfo.Active(anon)
154366 ± 5% -31.3% 106082 ± 11% meminfo.AnonHugePages
306887 +12.1% 344110 meminfo.AnonPages
1770333 +19.3% 2111835 meminfo.Committed_AS
504841 +13.3% 571754 meminfo.Inactive
351821 +22.4% 430731 ± 2% meminfo.Inactive(anon)
69700 ± 2% +129.3% 159793 ± 6% meminfo.Mapped
14378 ± 5% +9.4% 15724 ± 5% meminfo.PageTables
1118364 +24.3% 1390248 meminfo.Shmem
779.67 +168.4% 2093 turbostat.Avg_MHz
28.88 +39.1 68.00 turbostat.Busy%
2707 +14.0% 3085 turbostat.Bzy_MHz
16276146 -33.0% 10911982 ± 2% turbostat.C1
7.80 -1.5 6.31 ± 2% turbostat.C1%
57931623 -62.8% 21545399 turbostat.C1E
59.06 -43.4 15.64 turbostat.C1E%
1300410 -16.0% 1092925 turbostat.C6
4.83 +5.8 10.67 turbostat.C6%
69.70 -59.0% 28.60 turbostat.CPU%c1
1.43 ± 2% +138.3% 3.40 ± 3% turbostat.CPU%c6
42.67 ± 2% +16.0% 49.50 ± 2% turbostat.CoreTmp
31837757 -35.9% 20394594 turbostat.IRQ
6367753 ± 3% -93.2% 433501 ± 3% turbostat.POLL
0.52 ± 4% -0.5 0.05 turbostat.POLL%
0.71 ± 4% +133.6% 1.67 ± 6% turbostat.Pkg%pc2
0.52 ± 4% +144.3% 1.26 ± 9% turbostat.Pkg%pc6
43.17 ± 2% +15.1% 49.67 turbostat.PkgTmp
181.53 +35.9% 246.68 turbostat.PkgWatt
38.88 +8.9% 42.34 turbostat.RAMWatt
268332 +21.4% 325878 proc-vmstat.nr_active_anon
76721 +12.1% 86028 proc-vmstat.nr_anon_pages
1005548 +6.5% 1070506 proc-vmstat.nr_file_pages
87958 +22.4% 107685 ± 2% proc-vmstat.nr_inactive_anon
38252 -7.9% 35234 proc-vmstat.nr_inactive_file
29247 -7.3% 27114 proc-vmstat.nr_kernel_stack
17434 ± 2% +129.3% 39978 ± 6% proc-vmstat.nr_mapped
3593 ± 5% +9.4% 3930 ± 5% proc-vmstat.nr_page_table_pages
279595 +24.3% 347577 proc-vmstat.nr_shmem
33624 -1.9% 32976 proc-vmstat.nr_slab_reclaimable
59243 -5.0% 56307 proc-vmstat.nr_slab_unreclaimable
268332 +21.4% 325878 proc-vmstat.nr_zone_active_anon
87958 +22.4% 107685 ± 2% proc-vmstat.nr_zone_inactive_anon
38252 -7.9% 35234 proc-vmstat.nr_zone_inactive_file
4451 ± 7% +227.9% 14595 ± 7% proc-vmstat.nr_zone_write_pending
255591 -18.1% 209324 ± 6% proc-vmstat.pgactivate
5464088 -2.3% 5336776 proc-vmstat.pgalloc_normal
955015 -21.5% 749659 ± 2% proc-vmstat.pgfault
4893840 -6.0% 4598554 proc-vmstat.pgfree
75378961 -1.7% 74083008 proc-vmstat.pgpgout
41266 ± 6% -21.5% 32384 ± 12% proc-vmstat.pgreuse
2023936 -53.9% 932480 proc-vmstat.unevictable_pgs_scanned
0.22 ± 11% +204.8% 0.69 ± 22% sched_debug.cfs_rq:/.h_nr_running.avg
1.53 ± 16% +492.4% 9.08 ± 59% sched_debug.cfs_rq:/.h_nr_running.max
0.44 ± 6% +172.8% 1.20 ± 42% sched_debug.cfs_rq:/.h_nr_running.stddev
211.08 ± 3% +96.0% 413.60 ± 3% sched_debug.cfs_rq:/.load_avg.avg
6916 ± 7% -58.6% 2860 ± 10% sched_debug.cfs_rq:/.load_avg.max
4.80 ± 6% +245.5% 16.58 ± 13% sched_debug.cfs_rq:/.load_avg.min
984.17 ± 5% -35.4% 635.92 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
528760 ± 3% +175.5% 1456888 sched_debug.cfs_rq:/.min_vruntime.avg
613278 ± 4% +163.6% 1616433 sched_debug.cfs_rq:/.min_vruntime.max
511276 ± 3% +175.0% 1406036 sched_debug.cfs_rq:/.min_vruntime.min
12541 ± 10% +105.9% 25821 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
0.22 ± 9% +91.8% 0.41 ± 6% sched_debug.cfs_rq:/.nr_running.avg
13.30 ± 19% +98.4% 26.40 ± 25% sched_debug.cfs_rq:/.removed.load_avg.avg
275.13 ± 34% +86.1% 512.00 sched_debug.cfs_rq:/.removed.load_avg.max
55.26 ± 9% +101.8% 111.55 ± 12% sched_debug.cfs_rq:/.removed.load_avg.stddev
4.72 ± 21% +134.9% 11.09 ± 22% sched_debug.cfs_rq:/.removed.runnable_avg.avg
135.93 ± 30% +94.8% 264.75 ± 5% sched_debug.cfs_rq:/.removed.runnable_avg.max
22.26 ± 9% +118.4% 48.61 ± 9% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
4.72 ± 21% +135.0% 11.09 ± 22% sched_debug.cfs_rq:/.removed.util_avg.avg
135.87 ± 30% +94.9% 264.75 ± 5% sched_debug.cfs_rq:/.removed.util_avg.max
22.25 ± 9% +118.5% 48.61 ± 9% sched_debug.cfs_rq:/.removed.util_avg.stddev
229.34 ± 6% +224.5% 744.17 ± 13% sched_debug.cfs_rq:/.runnable_avg.avg
1151 ± 6% +224.0% 3730 ± 44% sched_debug.cfs_rq:/.runnable_avg.max
165.72 ± 8% +222.2% 534.02 ± 29% sched_debug.cfs_rq:/.runnable_avg.stddev
34040 ± 56% +335.1% 148112 ± 37% sched_debug.cfs_rq:/.spread0.max
12547 ± 10% +106.0% 25846 ± 8% sched_debug.cfs_rq:/.spread0.stddev
221.02 ± 6% +140.0% 530.54 ± 7% sched_debug.cfs_rq:/.util_avg.avg
947.60 ± 8% +159.2% 2456 ± 36% sched_debug.cfs_rq:/.util_avg.max
146.95 ± 8% +157.5% 378.32 ± 20% sched_debug.cfs_rq:/.util_avg.stddev
20.94 ± 20% +531.7% 132.27 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.avg
687.73 ± 16% +199.6% 2060 ± 48% sched_debug.cfs_rq:/.util_est_enqueued.max
84.67 ± 16% +208.7% 261.37 ± 38% sched_debug.cfs_rq:/.util_est_enqueued.stddev
399478 ± 6% +22.0% 487190 ± 6% sched_debug.cpu.avg_idle.avg
80764 ± 12% -79.4% 16651 ± 50% sched_debug.cpu.avg_idle.min
165665 -53.3% 77433 ± 2% sched_debug.cpu.clock_task.min
1240 ± 7% +83.7% 2278 ± 6% sched_debug.cpu.curr->pid.avg
7702 ± 8% -26.1% 5691 ± 5% sched_debug.cpu.curr->pid.max
2293 ± 2% -10.4% 2055 ± 3% sched_debug.cpu.curr->pid.stddev
0.00 ± 4% +59.7% 0.00 ± 10% sched_debug.cpu.next_balance.stddev
0.22 ± 11% +206.1% 0.68 ± 23% sched_debug.cpu.nr_running.avg
1.70 ± 8% +429.4% 9.00 ± 61% sched_debug.cpu.nr_running.max
0.44 ± 6% +170.3% 1.20 ± 42% sched_debug.cpu.nr_running.stddev
710875 ± 2% -63.5% 259242 sched_debug.cpu.nr_switches.avg
798152 ± 3% -64.1% 286588 sched_debug.cpu.nr_switches.max
653132 ± 2% -64.7% 230262 ± 3% sched_debug.cpu.nr_switches.min
30628 ± 25% -76.2% 7296 ± 5% sched_debug.cpu.nr_switches.stddev
173637 -51.2% 84667 ± 2% sched_debug.sched_clk
20.63 -17.1% 17.10 perf-stat.i.MPKI
2.991e+09 +192.6% 8.752e+09 perf-stat.i.branch-instructions
1.71 -0.1 1.64 perf-stat.i.branch-miss-rate%
42725828 ± 2% +74.7% 74627220 ± 4% perf-stat.i.branch-misses
19.37 ± 2% +6.6 25.99 ± 2% perf-stat.i.cache-miss-rate%
49244826 +87.9% 92555177 ± 4% perf-stat.i.cache-misses
2.512e+08 ± 2% +34.2% 3.371e+08 ± 3% perf-stat.i.cache-references
587232 +59.9% 939182 perf-stat.i.context-switches
5.18 -5.0% 4.92 perf-stat.i.cpi
7.466e+10 +171.4% 2.027e+11 perf-stat.i.cpu-cycles
96178 +159.7% 249742 perf-stat.i.cpu-migrations
1524 +42.0% 2164 ± 3% perf-stat.i.cycles-between-cache-misses
0.12 ± 5% +0.1 0.20 ± 21% perf-stat.i.dTLB-load-miss-rate%
2834512 ± 6% +195.1% 8363601 ± 68% perf-stat.i.dTLB-load-misses
4.128e+09 +175.6% 1.138e+10 perf-stat.i.dTLB-loads
0.04 ± 3% +0.0 0.05 ± 3% perf-stat.i.dTLB-store-miss-rate%
489675 ± 4% +97.4% 966722 ± 3% perf-stat.i.dTLB-store-misses
1.354e+09 +63.7% 2.216e+09 perf-stat.i.dTLB-stores
22.05 +7.0 29.01 perf-stat.i.iTLB-load-miss-rate%
2485101 +156.0% 6363100 perf-stat.i.iTLB-load-misses
9693681 +101.9% 19570449 perf-stat.i.iTLB-loads
1.466e+10 +189.6% 4.245e+10 perf-stat.i.instructions
5977 ± 2% +7.2% 6407 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.20 +11.8% 0.23 perf-stat.i.ipc
0.45 ± 26% +191.8% 1.31 ± 19% perf-stat.i.major-faults
0.78 +171.4% 2.11 perf-stat.i.metric.GHz
497.43 +76.2% 876.38 perf-stat.i.metric.K/sec
90.85 +160.0% 236.18 perf-stat.i.metric.M/sec
3017 +78.7% 5393 ± 3% perf-stat.i.minor-faults
89.36 -1.5 87.87 perf-stat.i.node-load-miss-rate%
17746505 +64.8% 29253032 ± 3% perf-stat.i.node-load-misses
2029519 +84.9% 3751768 ± 3% perf-stat.i.node-loads
10891509 ± 2% +59.8% 17400987 perf-stat.i.node-store-misses
3732971 ± 7% +54.9% 5783856 perf-stat.i.node-stores
3017 +78.8% 5394 ± 3% perf-stat.i.page-faults
17.13 ± 2% -53.6% 7.94 ± 3% perf-stat.overall.MPKI
1.43 ± 2% -0.6 0.85 ± 4% perf-stat.overall.branch-miss-rate%
19.62 ± 2% +7.8 27.45 ± 2% perf-stat.overall.cache-miss-rate%
5.09 -6.3% 4.77 perf-stat.overall.cpi
1516 +44.7% 2193 ± 3% perf-stat.overall.cycles-between-cache-misses
0.04 ± 4% +0.0 0.04 ± 4% perf-stat.overall.dTLB-store-miss-rate%
20.41 +4.1 24.54 perf-stat.overall.iTLB-load-miss-rate%
5900 +13.1% 6675 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.20 +6.7% 0.21 perf-stat.overall.ipc
89.74 -1.1 88.64 perf-stat.overall.node-load-miss-rate%
2.979e+09 +191.2% 8.673e+09 perf-stat.ps.branch-instructions
42548503 ± 2% +73.7% 73925126 ± 4% perf-stat.ps.branch-misses
49037766 +87.0% 91714153 ± 4% perf-stat.ps.cache-misses
2.501e+08 ± 2% +33.6% 3.342e+08 ± 3% perf-stat.ps.cache-references
584739 +59.1% 930500 perf-stat.ps.context-switches
7.435e+10 +170.2% 2.009e+11 perf-stat.ps.cpu-cycles
95769 +158.5% 247518 perf-stat.ps.cpu-migrations
2823348 ± 6% +193.3% 8281262 ± 68% perf-stat.ps.dTLB-load-misses
4.111e+09 +174.3% 1.128e+10 perf-stat.ps.dTLB-loads
487641 ± 4% +96.4% 957657 ± 3% perf-stat.ps.dTLB-store-misses
1.348e+09 +62.9% 2.196e+09 perf-stat.ps.dTLB-stores
2475112 +154.8% 6305876 perf-stat.ps.iTLB-load-misses
9652488 +100.9% 19389559 perf-stat.ps.iTLB-loads
1.46e+10 +188.2% 4.207e+10 perf-stat.ps.instructions
0.45 ± 26% +187.4% 1.29 ± 19% perf-stat.ps.major-faults
3006 +77.1% 5323 ± 3% perf-stat.ps.minor-faults
17671314 +64.1% 28989870 ± 3% perf-stat.ps.node-load-misses
2020921 +83.9% 3716917 ± 3% perf-stat.ps.node-loads
10845226 ± 2% +59.0% 17243481 perf-stat.ps.node-store-misses
3717390 ± 7% +54.1% 5730191 perf-stat.ps.node-stores
3006 +77.1% 5324 ± 3% perf-stat.ps.page-faults
3.667e+12 +20.4% 4.414e+12 perf-stat.total.instructions
0.01 ± 53% +1298.8% 0.19 ± 23% perf-sched.sch_delay.avg.ms.__cond_resched.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.04 ± 5% +847.9% 0.41 ± 4% perf-sched.sch_delay.avg.ms.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.02 ±134% +890.8% 0.18 ± 45% perf-sched.sch_delay.avg.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time
0.02 ± 67% +1844.7% 0.30 ± 21% perf-sched.sch_delay.avg.ms.__cond_resched.down_write.xfs_ilock.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.01 ± 69% +1278.9% 0.17 ± 29% perf-sched.sch_delay.avg.ms.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time.file_modified_flags
0.00 ±145% +5670.0% 0.10 ± 46% perf-sched.sch_delay.avg.ms.__cond_resched.down_write_killable.exec_mmap.begin_new_exec.load_elf_binary
0.01 ± 74% +2838.2% 0.17 ± 23% perf-sched.sch_delay.avg.ms.__cond_resched.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.00 ± 62% +3742.9% 0.18 ± 53% perf-sched.sch_delay.avg.ms.__cond_resched.md_write_start.raid1_make_request.md_handle_request.__submit_bio
0.02 ± 23% +1115.3% 0.20 ± 18% perf-sched.sch_delay.avg.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.bio_alloc_clone.raid1_write_request
0.00 ±116% +6004.8% 0.21 ± 40% perf-sched.sch_delay.avg.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_add_to_ioend.iomap_writepage_map
0.01 ±121% +991.2% 0.06 perf-sched.sch_delay.avg.ms.__cond_resched.process_one_work.worker_thread.kthread.ret_from_fork
0.02 ± 88% +1250.0% 0.25 ± 23% perf-sched.sch_delay.avg.ms.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate.isra
0.01 ± 59% +1192.4% 0.14 ± 30% perf-sched.sch_delay.avg.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.01 ± 6% +623.9% 0.11 ± 5% perf-sched.sch_delay.avg.ms.__cond_resched.submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.00 ± 80% +3936.8% 0.13 ± 76% perf-sched.sch_delay.avg.ms.__cond_resched.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 ± 92% +2382.6% 0.10 ± 75% perf-sched.sch_delay.avg.ms.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.02 ± 93% +836.1% 0.19 ± 28% perf-sched.sch_delay.avg.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
0.01 ± 11% +1157.1% 0.12 ±109% perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.00 ± 19% +400.0% 0.01 ± 33% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.01 ± 4% +134.5% 0.02 ± 29% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.02 ±134% +630.3% 0.14 ± 43% perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
0.01 ±124% +807.2% 0.13 ± 58% perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.01 ± 79% +413.5% 0.03 ± 18% perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.02 ± 2% +647.6% 0.18 perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
0.02 ± 2% +1904.2% 0.48 ± 2% perf-sched.sch_delay.avg.ms.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
0.00 ± 20% +818.2% 0.02 ± 40% perf-sched.sch_delay.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
0.01 ± 11% +133.3% 0.02 ± 25% perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
0.01 ± 34% +1084.9% 0.10 ± 63% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.01 ± 42% +1492.2% 0.20 ± 14% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.usleep_range_state.ipmi_thread.kthread
0.05 ± 46% +108.8% 0.09 ± 5% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.__flush_workqueue
0.04 ± 15% +603.3% 0.29 ± 2% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xlog_cil_commit
0.02 ± 62% +1479.4% 0.28 ± 12% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
0.01 ± 38% +1675.3% 0.26 ± 51% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
0.01 ± 54% +350.0% 0.04 ± 6% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xlog_cil_push_work
0.01 ± 6% +267.9% 0.05 ± 26% perf-sched.sch_delay.avg.ms.schedule_timeout.___down_common.__down.down
0.01 +2402.1% 0.20 perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
0.01 ± 20% +216.7% 0.03 ± 20% perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.00 ± 31% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.md_thread.kthread.ret_from_fork
0.01 ± 6% +82.1% 0.02 ± 10% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.01 ± 10% +96.4% 0.02 ± 18% perf-sched.sch_delay.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.01 ± 7% +102.6% 0.01 ± 8% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
0.01 ± 14% +5327.5% 0.36 ±136% perf-sched.sch_delay.avg.ms.syslog_print.do_syslog.kmsg_read.vfs_read
0.00 ± 30% +2393.3% 0.06 ± 52% perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
0.01 ± 21% +1811.9% 0.13 ± 8% perf-sched.sch_delay.avg.ms.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.00 ±223% +828.6% 0.01 ± 39% perf-sched.sch_delay.avg.ms.xlog_cil_order_write.xlog_cil_write_commit_record.xlog_cil_push_work.process_one_work
0.01 ± 4% +1817.0% 0.15 ± 5% perf-sched.sch_delay.avg.ms.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.03 ± 5% +63.6% 0.05 ± 11% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.xfs_file_fsync.xfs_file_buffered_write.vfs_write
0.01 ± 7% +3217.9% 0.22 perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.07 ± 79% +3019.8% 2.20 ± 37% perf-sched.sch_delay.max.ms.__cond_resched.__filemap_get_folio.iomap_write_begin.iomap_write_iter.iomap_file_buffered_write
0.03 ±197% +7797.9% 2.51 ± 37% perf-sched.sch_delay.max.ms.__cond_resched.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
4.42 ± 34% +106.3% 9.13 ± 11% perf-sched.sch_delay.max.ms.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.08 ±147% +1783.0% 1.50 ± 59% perf-sched.sch_delay.max.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time
0.14 ± 82% +1821.6% 2.78 ± 16% perf-sched.sch_delay.max.ms.__cond_resched.down_write.xfs_ilock.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.07 ± 96% +2785.8% 1.97 ± 28% perf-sched.sch_delay.max.ms.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time.file_modified_flags
0.00 ±145% +60670.0% 1.01 ± 49% perf-sched.sch_delay.max.ms.__cond_resched.down_write_killable.exec_mmap.begin_new_exec.load_elf_binary
0.26 ± 39% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
0.02 ± 88% +11421.1% 1.73 ± 31% perf-sched.sch_delay.max.ms.__cond_resched.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.08 ±151% +1187.1% 1.02 ± 79% perf-sched.sch_delay.max.ms.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time.file_modified_flags
0.01 ± 87% +15961.7% 1.26 ± 55% perf-sched.sch_delay.max.ms.__cond_resched.md_write_start.raid1_make_request.md_handle_request.__submit_bio
0.37 ± 49% +334.4% 1.61 ± 43% perf-sched.sch_delay.max.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.bio_alloc_clone.raid1_write_request
0.01 ±154% +18477.8% 1.39 ± 72% perf-sched.sch_delay.max.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_add_to_ioend.iomap_writepage_map
0.09 ±107% +3793.8% 3.37 ± 21% perf-sched.sch_delay.max.ms.__cond_resched.mutex_lock.__flush_workqueue.xlog_cil_push_now.isra
0.01 ±146% +15241.7% 1.23 ± 9% perf-sched.sch_delay.max.ms.__cond_resched.process_one_work.worker_thread.kthread.ret_from_fork
0.09 ±107% +2738.3% 2.62 ± 48% perf-sched.sch_delay.max.ms.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate.isra
0.12 ± 84% +1025.4% 1.36 ± 34% perf-sched.sch_delay.max.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.27 ± 34% +1505.8% 4.41 ± 44% perf-sched.sch_delay.max.ms.__cond_resched.submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.00 ± 64% +15537.9% 0.76 ± 68% perf-sched.sch_delay.max.ms.__cond_resched.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.00 ± 92% +17704.3% 0.68 ± 60% perf-sched.sch_delay.max.ms.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.15 ± 80% +1663.5% 2.62 ± 26% perf-sched.sch_delay.max.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
0.01 ± 20% +1714.3% 0.21 ±122% perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.05 ± 85% +1156.2% 0.68 ± 23% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.16 ± 18% +258.3% 0.59 ± 53% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.00 ±141% +41337.5% 0.55 ± 82% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.07 ±134% +2603.3% 1.78 ± 58% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
0.07 ±134% +2494.1% 1.77 ± 40% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.05 ±117% +3411.7% 1.91 ± 35% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
7.40 ± 23% +38.7% 10.26 ± 10% perf-sched.sch_delay.max.ms.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
0.25 ± 34% +1000.9% 2.79 ± 43% perf-sched.sch_delay.max.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
0.13 ± 54% +482.4% 0.76 ±120% perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
0.03 ± 77% +3936.0% 1.18 ± 80% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.12 ± 79% +988.4% 1.30 ± 34% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.usleep_range_state.ipmi_thread.kthread
4.20 ± 36% +70.4% 7.16 ± 14% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.__flush_workqueue
1.93 ± 8% +419.5% 10.00 ± 42% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xlog_cil_commit
0.25 ± 43% +639.0% 1.84 ± 18% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
0.28 ± 44% +489.2% 1.63 ± 49% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
0.02 ± 88% +7476.1% 1.38 ±105% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xlog_cil_push_work
0.75 ± 68% +321.7% 3.16 ± 25% perf-sched.sch_delay.max.ms.schedule_timeout.___down_common.__down.down
7.21 ± 9% +52.4% 10.99 ± 18% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
0.16 ± 21% +163.4% 0.42 ± 31% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.02 ± 41% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.md_thread.kthread.ret_from_fork
0.24 ± 16% +855.6% 2.26 ± 65% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.07 ± 69% +720.3% 0.56 ± 46% perf-sched.sch_delay.max.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.27 ± 59% +333.2% 1.15 ± 51% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
0.01 ± 24% +8455.1% 0.70 ±142% perf-sched.sch_delay.max.ms.syslog_print.do_syslog.kmsg_read.vfs_read
0.01 ± 82% +6441.6% 0.97 ± 64% perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
0.00 ±223% +2485.7% 0.03 ± 95% perf-sched.sch_delay.max.ms.xlog_cil_order_write.xlog_cil_write_commit_record.xlog_cil_push_work.process_one_work
7.41 ± 20% +38.1% 10.22 ± 20% perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.02 ± 11% +1571.0% 0.26 perf-sched.total_sch_delay.average.ms
2.48 -10.3% 2.22 perf-sched.total_wait_and_delay.average.ms
1143525 +33.8% 1530352 perf-sched.total_wait_and_delay.count.ms
2.46 -20.3% 1.96 perf-sched.total_wait_time.average.ms
0.03 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
151.49 ± 18% +41.6% 214.52 ± 6% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.14 ± 3% +258.9% 0.50 perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
186.49 ± 57% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.kthreadd.ret_from_fork
30.53 ± 5% +170.0% 82.41 ± 24% perf-sched.wait_and_delay.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
3.48 ± 4% -9.4% 3.16 ± 2% perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
28.30 ± 2% +287.8% 109.76 ± 12% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
23.14 ± 37% -94.2% 1.34 ± 7% perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xlog_cil_commit
6.39 ± 7% -86.5% 0.86 ±223% perf-sched.wait_and_delay.avg.ms.schedule_timeout.___down_common.__down.down
1.27 -50.0% 0.63 perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
32.83 ± 57% -98.4% 0.53 ±223% perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.blk_execute_rq
10.76 ± 16% -61.5% 4.14 ± 8% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
47.77 -32.5% 32.24 ± 2% perf-sched.wait_and_delay.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
11.47 ± 8% -42.6% 6.59 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
2.61 -51.1% 1.27 ± 2% perf-sched.wait_and_delay.avg.ms.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
1.77 ± 2% -64.4% 0.63 ± 44% perf-sched.wait_and_delay.avg.ms.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
2.42 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.xlog_wait_on_iclog.xfs_file_fsync.xfs_file_buffered_write.vfs_write
2.05 ± 2% -38.8% 1.25 perf-sched.wait_and_delay.avg.ms.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
3.12 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
9453 ± 3% +408.3% 48047 ± 2% perf-sched.wait_and_delay.count.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
62017 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
0.67 ±111% +3e+05% 2011 ± 3% perf-sched.wait_and_delay.count.__cond_resched.process_one_work.worker_thread.kthread.ret_from_fork
228223 ± 2% -48.8% 116807 perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
12.00 ± 46% -100.0% 0.00 perf-sched.wait_and_delay.count.kthreadd.ret_from_fork
222022 ± 2% +96.9% 437150 perf-sched.wait_and_delay.count.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
2107 ± 5% -60.7% 828.17 ± 25% perf-sched.wait_and_delay.count.pipe_read.vfs_read.ksys_read.do_syscall_64
117.50 ± 15% +62.4% 190.83 ± 11% perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
417.50 ± 2% -81.6% 76.83 ± 14% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
1008 ± 11% +4241.0% 43772 ± 3% perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xlog_cil_commit
243276 ± 2% +88.9% 459530 perf-sched.wait_and_delay.count.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
435.83 ± 14% +146.2% 1072 ± 8% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
103.33 +45.2% 150.00 ± 2% perf-sched.wait_and_delay.count.schedule_timeout.xfsaild.kthread.ret_from_fork
769.00 ± 3% +27.8% 982.50 ± 2% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
75830 ± 6% +141.2% 182913 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
12582 ± 8% +340.3% 55404 ± 2% perf-sched.wait_and_delay.count.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
47833 ± 6% -85.9% 6760 ± 44% perf-sched.wait_and_delay.count.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
47814 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.count.xlog_wait_on_iclog.xfs_file_fsync.xfs_file_buffered_write.vfs_write
179144 ± 4% -33.3% 119538 perf-sched.wait_and_delay.count.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
1703 -100.0% 0.00 perf-sched.wait_and_delay.count.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
6.13 ± 16% +189.3% 17.73 ± 19% perf-sched.wait_and_delay.max.ms.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
1.63 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
21.45 ±102% +4096.9% 900.29 ± 58% perf-sched.wait_and_delay.max.ms.__cond_resched.process_one_work.worker_thread.kthread.ret_from_fork
10.54 ± 29% +68.3% 17.73 ± 11% perf-sched.wait_and_delay.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
1331 ± 62% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.kthreadd.ret_from_fork
30.29 ±108% +2051.7% 651.68 ± 76% perf-sched.wait_and_delay.max.ms.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
9.92 ± 30% +408.9% 50.47 perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
57.95 ± 74% -98.8% 0.69 ±223% perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.blk_execute_rq
495.53 -62.5% 185.80 ± 52% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
8.95 ± 29% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.xlog_wait_on_iclog.xfs_file_fsync.xfs_file_buffered_write.vfs_write
273.87 ± 87% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
1.73 ± 3% -25.3% 1.29 ± 3% perf-sched.wait_time.avg.ms.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.04 ± 80% +364.9% 0.20 ± 40% perf-sched.wait_time.avg.ms.__cond_resched.down_read.xfs_map_blocks.iomap_writepage_map.write_cache_pages
0.05 ± 76% +448.4% 0.29 ± 31% perf-sched.wait_time.avg.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time
0.07 ± 27% +649.1% 0.50 ± 6% perf-sched.wait_time.avg.ms.__cond_resched.down_write.xfs_ilock.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.07 ± 31% +471.7% 0.37 ± 17% perf-sched.wait_time.avg.ms.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time.file_modified_flags
0.00 ±223% +3316.7% 0.07 ±103% perf-sched.wait_time.avg.ms.__cond_resched.dput.__fput.task_work_run.exit_to_user_mode_loop
0.03 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
0.05 ± 31% +517.0% 0.32 ± 19% perf-sched.wait_time.avg.ms.__cond_resched.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.03 ±139% +1590.2% 0.43 ± 76% perf-sched.wait_time.avg.ms.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve.xfs_trans_reserve
0.04 ± 86% +658.8% 0.27 ± 49% perf-sched.wait_time.avg.ms.__cond_resched.md_write_start.raid1_make_request.md_handle_request.__submit_bio
0.08 ± 10% +383.3% 0.39 ± 19% perf-sched.wait_time.avg.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.bio_alloc_clone.raid1_write_request
0.03 ± 88% +1280.8% 0.38 ± 39% perf-sched.wait_time.avg.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_add_to_ioend.iomap_writepage_map
0.05 ± 50% +474.0% 0.29 ± 28% perf-sched.wait_time.avg.ms.__cond_resched.mempool_alloc.raid1_write_request.raid1_make_request.md_handle_request
1.28 ± 29% -59.7% 0.52 ± 4% perf-sched.wait_time.avg.ms.__cond_resched.mutex_lock.__flush_workqueue.xlog_cil_push_now.isra
0.07 ± 78% +555.4% 0.44 ± 18% perf-sched.wait_time.avg.ms.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate.isra
0.00 ±223% +4166.7% 0.04 ± 95% perf-sched.wait_time.avg.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.01 ± 40% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.submit_bio_noacct.flush_bio_list.flush_pending_writes.raid1d
0.00 ±223% +10036.0% 0.42 ± 62% perf-sched.wait_time.avg.ms.__cond_resched.submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages.do_writepages
1.84 ± 3% -58.1% 0.77 perf-sched.wait_time.avg.ms.__cond_resched.submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.04 ± 46% +694.5% 0.31 ± 38% perf-sched.wait_time.avg.ms.__cond_resched.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.04 ±110% +491.0% 0.21 ± 27% perf-sched.wait_time.avg.ms.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.08 ± 62% +289.0% 0.33 ± 16% perf-sched.wait_time.avg.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
0.00 ±223% +5682.6% 0.22 ±132% perf-sched.wait_time.avg.ms.__cond_resched.xfs_trans_alloc.xfs_vn_update_time.file_modified_flags.xfs_file_write_checks
151.49 ± 18% +41.6% 214.50 ± 6% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ± 92% +44323.8% 7.77 ± 27% perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.07 ± 84% +439.4% 0.40 ± 92% perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.11 ± 3% +177.0% 0.32 perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
186.49 ± 57% -100.0% 0.00 perf-sched.wait_time.avg.ms.kthreadd.ret_from_fork
1.85 ± 3% -28.0% 1.33 perf-sched.wait_time.avg.ms.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
30.52 ± 5% +169.9% 82.39 ± 24% perf-sched.wait_time.avg.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
3.48 ± 4% -9.8% 3.13 ± 2% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
28.30 ± 2% +287.8% 109.76 ± 12% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.72 ± 13% +113.2% 1.52 ± 64% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
0.20 ± 5% +103.5% 0.40 ± 11% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.usleep_range_state.ipmi_thread.kthread
0.65 ± 7% -60.1% 0.26 perf-sched.wait_time.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.__flush_workqueue
23.10 ± 37% -95.4% 1.06 ± 8% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xlog_cil_commit
0.10 ± 24% +355.8% 0.45 ± 8% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
0.10 ± 15% +410.2% 0.49 ± 23% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
6.38 ± 7% -63.2% 2.35 ± 56% perf-sched.wait_time.avg.ms.schedule_timeout.___down_common.__down.down
1.26 -65.6% 0.43 perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
90.13 ±148% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
0.53 ± 2% +49.4% 0.79 ± 4% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
32.81 ± 57% -94.3% 1.86 ± 33% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.blk_execute_rq
0.05 ± 2% +31.8% 0.07 ± 3% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.submit_bio_wait
0.06 ± 37% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.md_thread.kthread.ret_from_fork
10.74 ± 16% -61.7% 4.12 ± 8% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
47.76 -32.5% 32.22 ± 2% perf-sched.wait_time.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.01 ± 51% +945.7% 0.06 ± 62% perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
11.45 ± 8% -42.6% 6.57 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
2.60 -56.1% 1.14 perf-sched.wait_time.avg.ms.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
1.76 ± 2% -64.7% 0.62 ± 4% perf-sched.wait_time.avg.ms.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
2.39 -83.5% 0.39 ± 2% perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.xfs_file_fsync.xfs_file_buffered_write.vfs_write
2.04 ± 2% -49.1% 1.04 perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
3.12 ± 4% -92.7% 0.23 ± 15% perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
1.44 ± 88% +655.6% 10.85 ± 18% perf-sched.wait_time.max.ms.__cond_resched.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
4.95 ± 8% +141.8% 11.98 ± 6% perf-sched.wait_time.max.ms.__cond_resched.__wait_for_common.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.06 ± 86% +1215.8% 0.75 ± 30% perf-sched.wait_time.max.ms.__cond_resched.down_read.xfs_map_blocks.iomap_writepage_map.write_cache_pages
0.14 ±105% +1181.5% 1.73 ± 44% perf-sched.wait_time.max.ms.__cond_resched.down_read.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time
0.31 ± 46% +847.2% 2.90 ± 15% perf-sched.wait_time.max.ms.__cond_resched.down_write.xfs_ilock.xfs_ilock_for_iomap.xfs_buffered_write_iomap_begin
0.24 ± 54% +787.7% 2.13 ± 30% perf-sched.wait_time.max.ms.__cond_resched.down_write.xfs_ilock.xfs_vn_update_time.file_modified_flags
0.00 ±223% +31391.7% 0.63 ±122% perf-sched.wait_time.max.ms.__cond_resched.dput.__fput.task_work_run.exit_to_user_mode_loop
1.62 ± 27% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.flush_bio_list.flush_pending_writes.raid1d.md_thread
0.13 ± 32% +1568.1% 2.15 ± 23% perf-sched.wait_time.max.ms.__cond_resched.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write
0.17 ± 85% +690.6% 1.31 ± 64% perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_vn_update_time.file_modified_flags
0.03 ±129% +4934.1% 1.43 ± 51% perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve.xfs_trans_reserve
0.05 ±106% +2391.4% 1.35 ± 51% perf-sched.wait_time.max.ms.__cond_resched.md_write_start.raid1_make_request.md_handle_request.__submit_bio
0.04 ±101% +4873.6% 1.98 ± 61% perf-sched.wait_time.max.ms.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_add_to_ioend.iomap_writepage_map
0.07 ± 46% +1572.7% 1.17 ± 18% perf-sched.wait_time.max.ms.__cond_resched.mempool_alloc.raid1_write_request.raid1_make_request.md_handle_request
21.44 ±102% +4098.1% 900.20 ± 58% perf-sched.wait_time.max.ms.__cond_resched.process_one_work.worker_thread.kthread.ret_from_fork
0.22 ±100% +1250.5% 3.04 ± 35% perf-sched.wait_time.max.ms.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate.isra
0.02 ±211% +3208.7% 0.70 ±120% perf-sched.wait_time.max.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.04 ±140% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.submit_bio_noacct.flush_bio_list.flush_pending_writes.raid1d
0.00 ±223% +20644.0% 0.86 ± 52% perf-sched.wait_time.max.ms.__cond_resched.submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages.do_writepages
3.52 ± 17% +182.8% 9.95 ± 18% perf-sched.wait_time.max.ms.__cond_resched.submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
0.07 ± 83% +2032.6% 1.43 ± 51% perf-sched.wait_time.max.ms.__cond_resched.task_work_run.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode
0.04 ±101% +2869.6% 1.17 ± 31% perf-sched.wait_time.max.ms.__cond_resched.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.30 ± 54% +819.1% 2.76 ± 22% perf-sched.wait_time.max.ms.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
0.00 ±223% +9330.4% 0.36 ±129% perf-sched.wait_time.max.ms.__cond_resched.xfs_trans_alloc.xfs_vn_update_time.file_modified_flags.xfs_file_write_checks
0.02 ± 99% +1e+06% 211.92 ± 56% perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.25 ± 94% +779.2% 2.18 ± 57% perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
0.21 ±100% +4146.7% 8.84 ±167% perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
6.35 ± 18% +58.2% 10.04 ± 12% perf-sched.wait_time.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
1331 ± 62% -100.0% 0.00 perf-sched.wait_time.max.ms.kthreadd.ret_from_fork
0.39 ± 21% +284.1% 1.49 ± 29% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.usleep_range_state.ipmi_thread.kthread
3.58 ± 20% +147.3% 8.85 ± 19% perf-sched.wait_time.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.__flush_workqueue
0.43 ± 21% +354.5% 1.97 ± 15% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
5.62 ± 89% +14748.4% 833.74 ± 57% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xlog_cil_push_work
186.49 ± 28% +485.6% 1092 ± 44% perf-sched.wait_time.max.ms.schedule_timeout.___down_common.__down.down
6.58 ± 27% +667.0% 50.46 perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
826.34 ±139% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
1.23 ± 3% +124.1% 2.75 ± 10% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
57.94 ± 74% -95.6% 2.56 ± 36% perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.blk_execute_rq
0.47 ± 39% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.md_thread.kthread.ret_from_fork
495.52 -62.5% 185.74 ± 52% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.11 ± 80% +610.9% 0.76 ± 50% perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
27.07 ± 2% -13.6 13.47 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
26.80 ± 2% -13.5 13.33 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
26.80 ± 2% -13.5 13.33 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
26.79 ± 2% -13.5 13.33 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
13.36 ± 2% -12.7 0.62 ± 10% perf-profile.calltrace.cycles-pp.raid1_write_request.raid1_make_request.md_handle_request.__submit_bio.__submit_bio_noacct
25.19 ± 2% -12.7 12.52 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
24.71 ± 2% -12.4 12.32 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
24.69 ± 2% -12.4 12.31 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
13.42 ± 2% -12.2 1.23 ± 9% perf-profile.calltrace.cycles-pp.raid1_make_request.md_handle_request.__submit_bio.__submit_bio_noacct.iomap_submit_ioend
13.42 ± 2% -12.2 1.24 ± 9% perf-profile.calltrace.cycles-pp.md_handle_request.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages
12.12 ± 2% -12.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.raid1_write_request.raid1_make_request.md_handle_request.__submit_bio
11.96 ± 2% -12.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.raid1_write_request.raid1_make_request.md_handle_request
15.90 ± 2% -10.9 5.00 ± 4% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
13.43 ± 2% -9.3 4.16 ± 5% perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages.do_writepages
13.46 ± 2% -9.3 4.20 ± 5% perf-profile.calltrace.cycles-pp.iomap_submit_ioend.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
13.44 ± 2% -9.3 4.18 ± 5% perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
17.95 ± 2% -9.2 8.71 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
14.01 ± 2% -9.2 4.84 ± 5% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
14.05 ± 2% -9.2 4.88 ± 5% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.vfs_write
14.04 ± 2% -9.2 4.88 ± 5% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write
14.01 ± 2% -9.2 4.85 ± 5% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
10.60 -6.6 4.01 perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write
9.62 -6.0 3.63 perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
9.60 -6.0 3.62 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync
9.57 -6.0 3.60 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq
27.01 -3.6 23.44 ± 3% perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
26.89 -3.6 23.32 ± 3% perf-profile.calltrace.cycles-pp.md_handle_request.__submit_bio.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush
26.86 -3.6 23.29 ± 3% perf-profile.calltrace.cycles-pp.md_flush_request.raid1_make_request.md_handle_request.__submit_bio.__submit_bio_noacct
26.91 -3.6 23.34 ± 3% perf-profile.calltrace.cycles-pp.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write
26.99 -3.6 23.42 ± 3% perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.vfs_write
26.91 -3.6 23.34 ± 3% perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
26.87 -3.6 23.31 ± 3% perf-profile.calltrace.cycles-pp.raid1_make_request.md_handle_request.__submit_bio.__submit_bio_noacct.submit_bio_wait
3.63 ± 2% -3.4 0.26 ±100% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.intel_idle_irq.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
3.87 ± 28% -2.8 1.10 ± 48% perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule
24.30 -2.6 21.67 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
24.21 -2.6 21.59 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid1_make_request.md_handle_request
2.70 ± 38% -2.4 0.32 ±101% perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule
2.67 ± 38% -2.4 0.31 ±102% perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair
4.41 ± 3% -1.7 2.70 perf-profile.calltrace.cycles-pp.ret_from_fork
4.41 ± 3% -1.7 2.70 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.50 ± 23% -1.1 0.36 ±101% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.md_flush_request.raid1_make_request
1.49 ± 23% -1.1 0.35 ±101% perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.md_flush_request
1.90 ± 18% -1.0 0.95 ± 14% perf-profile.calltrace.cycles-pp.schedule.md_flush_request.raid1_make_request.md_handle_request.__submit_bio
1.82 ± 20% -1.0 0.86 ± 16% perf-profile.calltrace.cycles-pp.__schedule.schedule.md_flush_request.raid1_make_request.md_handle_request
0.96 -0.7 0.26 ±100% perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.63 ± 2% +0.1 0.71 ± 2% perf-profile.calltrace.cycles-pp.md_submit_flush_data.process_one_work.worker_thread.kthread.ret_from_fork
0.54 ± 3% +0.1 0.63 ± 6% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
0.54 ± 3% +0.1 0.63 ± 8% perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time.file_modified_flags.xfs_file_write_checks
0.53 ± 3% +0.1 0.62 ± 6% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
0.58 ± 2% +0.1 0.68 ± 9% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_modified_flags.xfs_file_write_checks.xfs_file_buffered_write
0.92 ± 15% +0.4 1.33 ± 16% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.__wait_for_common.__flush_workqueue
0.93 ± 15% +0.4 1.35 ± 16% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now
0.94 ± 15% +0.4 1.36 ± 16% perf-profile.calltrace.cycles-pp.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
1.00 ± 14% +0.4 1.42 ± 15% perf-profile.calltrace.cycles-pp.__wait_for_common.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
2.26 +0.4 2.69 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
2.14 +0.4 2.59 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.6 0.63 ± 7% perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
0.00 +0.6 0.63 ± 6% perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.6 0.64 ± 19% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.raid_end_bio_io.raid1_end_write_request.__submit_bio.__submit_bio_noacct
0.00 +0.7 0.66 ± 2% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
0.00 +0.7 0.70 perf-profile.calltrace.cycles-pp.xlog_cil_push_work.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +0.8 0.77 ± 12% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync
0.00 +0.8 0.79 ± 12% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.09 ±223% +0.9 1.01 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
0.00 +1.0 0.97 ± 2% perf-profile.calltrace.cycles-pp.copy_to_brd.brd_submit_bio.__submit_bio.__submit_bio_noacct.iomap_submit_ioend
0.00 +1.0 0.99 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
0.00 +1.2 1.19 ± 10% perf-profile.calltrace.cycles-pp.raid_end_bio_io.raid1_end_write_request.__submit_bio.__submit_bio_noacct.iomap_submit_ioend
0.00 +1.3 1.25 ± 2% perf-profile.calltrace.cycles-pp.brd_submit_bio.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages
0.00 +1.6 1.60 ± 8% perf-profile.calltrace.cycles-pp.raid1_end_write_request.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.xfs_vm_writepages
0.90 ± 9% +2.2 3.05 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
0.94 ± 9% +2.2 3.09 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
1.26 ± 10% +2.8 4.10 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq
1.28 ± 10% +2.8 4.12 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
1.28 ± 10% +2.8 4.13 ± 2% perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
65.88 +15.2 81.12 perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
67.86 +15.4 83.26 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
67.87 +15.4 83.27 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
68.04 +15.4 83.44 perf-profile.calltrace.cycles-pp.write
67.62 +15.4 83.04 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
67.60 +15.4 83.03 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
67.52 +15.4 82.94 perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.69 ± 5% +30.8 35.53 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
21.24 +31.0 52.28 perf-profile.calltrace.cycles-pp.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
5.08 ± 5% +31.2 36.30 ± 2% perf-profile.calltrace.cycles-pp.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
6.52 ± 4% +32.0 38.55 ± 2% perf-profile.calltrace.cycles-pp.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
7.48 ± 4% +34.2 41.67 perf-profile.calltrace.cycles-pp.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
9.46 ± 4% +37.8 47.23 perf-profile.calltrace.cycles-pp.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write
40.54 -15.7 24.79 ± 4% perf-profile.children.cycles-pp.md_handle_request
40.52 -15.7 24.78 ± 4% perf-profile.children.cycles-pp.raid1_make_request
26.02 -15.5 10.49 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
52.31 -14.8 37.51 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
42.50 -14.6 27.87 ± 3% perf-profile.children.cycles-pp.__submit_bio
42.50 -14.6 27.88 ± 3% perf-profile.children.cycles-pp.__submit_bio_noacct
27.07 ± 2% -13.6 13.47 ± 4% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
27.07 ± 2% -13.6 13.47 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
27.06 ± 2% -13.6 13.46 ± 4% perf-profile.children.cycles-pp.do_idle
26.80 ± 2% -13.5 13.33 ± 4% perf-profile.children.cycles-pp.start_secondary
13.48 ± 2% -12.8 0.68 ± 10% perf-profile.children.cycles-pp.raid1_write_request
25.44 ± 2% -12.8 12.65 ± 4% perf-profile.children.cycles-pp.cpuidle_idle_call
24.96 ± 2% -12.5 12.44 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
24.96 ± 2% -12.5 12.44 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
15.90 ± 2% -10.9 5.00 ± 4% perf-profile.children.cycles-pp.file_write_and_wait_range
18.13 ± 2% -9.3 8.79 ± 3% perf-profile.children.cycles-pp.intel_idle
13.46 ± 2% -9.3 4.20 ± 5% perf-profile.children.cycles-pp.iomap_submit_ioend
14.01 ± 2% -9.2 4.84 ± 5% perf-profile.children.cycles-pp.xfs_vm_writepages
14.05 ± 2% -9.2 4.88 ± 5% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
14.04 ± 2% -9.2 4.88 ± 5% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
14.01 ± 2% -9.2 4.85 ± 5% perf-profile.children.cycles-pp.do_writepages
11.84 -7.7 4.17 perf-profile.children.cycles-pp.xlog_wait_on_iclog
12.58 -4.6 7.98 perf-profile.children.cycles-pp.remove_wait_queue
27.01 -3.6 23.44 ? 3% perf-profile.children.cycles-pp.blkdev_issue_flush
26.99 -3.6 23.42 ? 3% perf-profile.children.cycles-pp.submit_bio_wait
26.97 -3.5 23.46 ? 3% perf-profile.children.cycles-pp.md_flush_request
6.50 ? 18% -3.1 3.43 ? 14% perf-profile.children.cycles-pp.__schedule
6.10 ? 19% -2.8 3.30 ? 15% perf-profile.children.cycles-pp.schedule
4.92 ? 23% -2.7 2.19 ? 22% perf-profile.children.cycles-pp.pick_next_task_fair
4.78 ? 23% -2.7 2.05 ? 24% perf-profile.children.cycles-pp.newidle_balance
4.50 ? 25% -2.6 1.87 ? 26% perf-profile.children.cycles-pp.load_balance
24.78 -2.5 22.26 ? 3% perf-profile.children.cycles-pp._raw_spin_lock_irq
3.50 ? 21% -2.3 1.19 ? 27% perf-profile.children.cycles-pp.find_busiest_group
3.46 ? 21% -2.3 1.16 ? 28% perf-profile.children.cycles-pp.update_sd_lb_stats
3.24 ? 22% -2.2 1.06 ? 29% perf-profile.children.cycles-pp.update_sg_lb_stats
2.83 -1.7 1.10 ? 2% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.80 ? 19% -1.7 0.10 ? 5% perf-profile.children.cycles-pp.__filemap_fdatawait_range
4.41 ? 3% -1.7 2.70 perf-profile.children.cycles-pp.ret_from_fork
4.41 ? 3% -1.7 2.70 perf-profile.children.cycles-pp.kthread
1.76 ? 20% -1.7 0.08 ? 7% perf-profile.children.cycles-pp.folio_wait_writeback
1.74 ? 20% -1.7 0.08 ? 4% perf-profile.children.cycles-pp.folio_wait_bit_common
1.67 ? 21% -1.6 0.06 ? 7% perf-profile.children.cycles-pp.io_schedule
1.48 ? 9% -1.4 0.06 ? 13% perf-profile.children.cycles-pp.poll_idle
4.79 ? 3% -1.3 3.48 ? 6% perf-profile.children.cycles-pp.intel_idle_irq
0.97 -0.5 0.51 ? 4% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.60 ? 25% -0.4 0.16 ? 29% perf-profile.children.cycles-pp.idle_cpu
1.08 ? 3% -0.4 0.65 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
1.58 ? 5% -0.4 1.20 ? 8% perf-profile.children.cycles-pp.xlog_force_lsn
0.82 ? 4% -0.3 0.50 ? 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.81 ? 3% -0.3 0.50 ? 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.84 ? 3% -0.3 0.57 perf-profile.children.cycles-pp.sched_ttwu_pending
0.65 ? 3% -0.3 0.39 ? 2% perf-profile.children.cycles-pp.xlog_ioend_work
0.71 ? 4% -0.3 0.45 perf-profile.children.cycles-pp.__hrtimer_run_queues
0.49 ? 3% -0.3 0.23 ? 2% perf-profile.children.cycles-pp.schedule_idle
0.39 ? 5% -0.2 0.17 ? 6% perf-profile.children.cycles-pp.menu_select
0.41 ? 4% -0.2 0.19 perf-profile.children.cycles-pp.xlog_state_clean_iclog
0.61 ? 6% -0.2 0.40 ? 3% perf-profile.children.cycles-pp.tick_sched_timer
0.56 ? 4% -0.2 0.36 perf-profile.children.cycles-pp.xlog_state_do_iclog_callbacks
0.56 ? 4% -0.2 0.37 perf-profile.children.cycles-pp.xlog_state_do_callback
0.99 ? 3% -0.2 0.79 perf-profile.children.cycles-pp.__wake_up_common
0.55 ? 6% -0.2 0.37 ? 2% perf-profile.children.cycles-pp.update_process_times
0.56 ? 5% -0.2 0.38 ? 3% perf-profile.children.cycles-pp.tick_sched_handle
0.21 ? 34% -0.2 0.04 ?104% perf-profile.children.cycles-pp.find_busiest_queue
0.23 ? 6% -0.2 0.07 ? 7% perf-profile.children.cycles-pp.folio_wake_bit
0.24 ? 24% -0.2 0.08 ? 26% perf-profile.children.cycles-pp._find_next_and_bit
0.45 ? 5% -0.1 0.30 ? 3% perf-profile.children.cycles-pp.scheduler_tick
0.25 ? 6% -0.1 0.12 ? 6% perf-profile.children.cycles-pp.perf_adjust_freq_unthr_context
0.25 ? 7% -0.1 0.12 ? 6% perf-profile.children.cycles-pp.perf_event_task_tick
0.18 ? 7% -0.1 0.06 ? 6% perf-profile.children.cycles-pp.wake_page_function
0.26 ? 2% -0.1 0.15 ? 4% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.18 ? 3% -0.1 0.10 ? 6% perf-profile.children.cycles-pp.__smp_call_single_queue
0.22 ? 6% -0.1 0.14 ? 3% perf-profile.children.cycles-pp.__irq_exit_rcu
0.54 ? 3% -0.1 0.47 ? 5% perf-profile.children.cycles-pp.dequeue_entity
0.16 ? 5% -0.1 0.10 ? 5% perf-profile.children.cycles-pp.bio_alloc_clone
0.12 ? 8% -0.1 0.06 ? 6% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.14 ? 3% -0.1 0.08 ? 6% perf-profile.children.cycles-pp.llist_add_batch
0.19 ? 5% -0.1 0.13 ? 4% perf-profile.children.cycles-pp.__do_softirq
0.17 ? 10% -0.1 0.11 ? 6% perf-profile.children.cycles-pp.finish_task_switch
0.18 ? 7% -0.1 0.13 ? 3% perf-profile.children.cycles-pp.update_blocked_averages
0.10 ? 8% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.ktime_get
0.15 ? 7% -0.0 0.10 ? 4% perf-profile.children.cycles-pp.__flush_smp_call_function_queue
0.11 ? 9% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.exit_to_user_mode_loop
0.11 ? 9% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.task_work_run
0.14 ? 5% -0.0 0.10 ? 3% perf-profile.children.cycles-pp.prepare_task_switch
0.10 ? 3% -0.0 0.06 perf-profile.children.cycles-pp.native_sched_clock
0.08 ? 4% -0.0 0.04 ? 44% perf-profile.children.cycles-pp.down_write
0.18 ? 6% -0.0 0.15 ? 3% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.18 ? 8% -0.0 0.15 ? 8% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.09 ? 4% -0.0 0.06 perf-profile.children.cycles-pp.sched_clock_cpu
0.10 ? 7% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.rebalance_domains
0.06 ? 6% -0.0 0.03 ? 70% perf-profile.children.cycles-pp.bio_associate_blkg_from_css
0.08 ? 11% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.llist_reverse_order
0.17 ? 2% -0.0 0.15 ? 6% perf-profile.children.cycles-pp.bio_alloc_bioset
0.08 ? 8% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.__update_blocked_fair
0.11 ? 4% -0.0 0.08 ? 4% perf-profile.children.cycles-pp.mempool_alloc
0.10 ? 4% -0.0 0.08 perf-profile.children.cycles-pp.__switch_to
0.08 ? 7% -0.0 0.06 ? 6% perf-profile.children.cycles-pp.iomap_add_to_ioend
0.14 ? 5% -0.0 0.12 ? 4% perf-profile.children.cycles-pp.__switch_to_asm
0.10 ? 9% -0.0 0.08 ? 5% perf-profile.children.cycles-pp.update_rq_clock_task
0.09 ? 5% -0.0 0.07 ? 7% perf-profile.children.cycles-pp.___perf_sw_event
0.09 ? 7% -0.0 0.07 ? 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.09 ? 4% -0.0 0.07 perf-profile.children.cycles-pp.perf_tp_event
0.08 ? 6% -0.0 0.06 ? 9% perf-profile.children.cycles-pp.bio_associate_blkg
0.08 ? 8% -0.0 0.06 ? 6% perf-profile.children.cycles-pp.update_rq_clock
0.06 ? 7% -0.0 0.05 perf-profile.children.cycles-pp.xfs_ilock
0.08 +0.0 0.09 ? 4% perf-profile.children.cycles-pp.memcpy_orig
0.14 ? 4% +0.0 0.16 ? 2% perf-profile.children.cycles-pp.xlog_cil_process_committed
0.14 ? 4% +0.0 0.16 ? 2% perf-profile.children.cycles-pp.xlog_cil_committed
0.04 ? 45% +0.0 0.06 ? 7% perf-profile.children.cycles-pp.xlog_cil_alloc_shadow_bufs
0.09 ? 8% +0.0 0.11 ? 3% perf-profile.children.cycles-pp.sched_mm_cid_migrate_to
0.09 ? 7% +0.0 0.12 ? 6% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
0.06 ? 9% +0.0 0.08 ? 5% perf-profile.children.cycles-pp.reweight_entity
0.27 ? 5% +0.0 0.30 ? 4% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.14 ? 7% +0.0 0.18 ? 3% perf-profile.children.cycles-pp.xa_load
0.04 ? 71% +0.0 0.07 ? 6% perf-profile.children.cycles-pp.xfs_end_bio
0.30 ? 6% +0.0 0.34 ? 2% perf-profile.children.cycles-pp.select_idle_sibling
0.06 ? 9% +0.0 0.10 ? 4% perf-profile.children.cycles-pp.__queue_work
0.06 ? 11% +0.0 0.10 ? 4% perf-profile.children.cycles-pp.queue_work_on
0.56 ? 4% +0.0 0.60 ? 4% perf-profile.children.cycles-pp.update_load_avg
0.00 +0.1 0.05 perf-profile.children.cycles-pp.kmem_cache_free
0.48 ? 4% +0.1 0.53 ? 3% perf-profile.children.cycles-pp.select_task_rq
0.45 ? 4% +0.1 0.50 ? 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.00 +0.1 0.06 ? 9% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.00 +0.1 0.06 ? 8% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.11 ? 6% +0.1 0.17 ? 4% perf-profile.children.cycles-pp.iomap_finish_ioends
0.00 +0.1 0.06 ? 11% perf-profile.children.cycles-pp.mutex_lock
0.14 ? 4% +0.1 0.20 ? 4% perf-profile.children.cycles-pp.update_cfs_group
0.06 ? 7% +0.1 0.13 ? 6% perf-profile.children.cycles-pp.task_tick_fair
0.63 ? 2% +0.1 0.71 ? 2% perf-profile.children.cycles-pp.md_submit_flush_data
0.07 ? 10% +0.1 0.15 ? 3% perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
0.16 ? 14% +0.1 0.24 ? 16% perf-profile.children.cycles-pp.detach_tasks
0.00 +0.1 0.08 ? 27% perf-profile.children.cycles-pp.xfs_trans_alloc_inode
0.00 +0.1 0.08 ? 13% perf-profile.children.cycles-pp.schedule_preempt_disabled
0.10 ? 10% +0.1 0.19 ? 3% perf-profile.children.cycles-pp.xfs_bmapi_write
0.07 ? 9% +0.1 0.16 ? 4% perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
0.00 +0.1 0.09 ? 5% perf-profile.children.cycles-pp.xfs_btree_lookup
0.09 ? 6% +0.1 0.18 ? 7% perf-profile.children.cycles-pp.down_read
0.54 ? 3% +0.1 0.63 ? 6% perf-profile.children.cycles-pp.iomap_writepages
0.54 ? 3% +0.1 0.63 ? 6% perf-profile.children.cycles-pp.write_cache_pages
0.41 ? 3% +0.1 0.51 ? 8% perf-profile.children.cycles-pp.iomap_writepage_map
1.02 ? 7% +0.1 1.12 ? 2% perf-profile.children.cycles-pp.copy_to_brd
0.45 ? 4% +0.1 0.55 ? 5% perf-profile.children.cycles-pp.iomap_finish_ioend
0.10 ? 11% +0.1 0.22 ? 3% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.00 +0.1 0.12 ? 10% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
0.11 ? 10% +0.1 0.23 ? 4% perf-profile.children.cycles-pp.sysvec_call_function_single
0.26 ? 6% +0.1 0.39 ? 10% perf-profile.children.cycles-pp.__folio_start_writeback
0.13 ? 10% +0.1 0.25 ? 2% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
1.24 ? 2% +0.1 1.37 perf-profile.children.cycles-pp.try_to_wake_up
0.18 ? 10% +0.1 0.31 ? 13% perf-profile.children.cycles-pp.sb_mark_inode_writeback
0.05 ? 8% +0.1 0.18 ? 3% perf-profile.children.cycles-pp.xlog_cil_set_ctx_write_state
0.04 ? 44% +0.1 0.19 ? 3% perf-profile.children.cycles-pp.xlog_cil_write_commit_record
0.20 ? 5% +0.2 0.36 ? 2% perf-profile.children.cycles-pp.xlog_state_release_iclog
0.36 ? 5% +0.2 0.53 ? 5% perf-profile.children.cycles-pp.folio_end_writeback
0.63 ? 3% +0.2 0.81 ? 7% perf-profile.children.cycles-pp.xlog_cil_commit
0.04 ? 44% +0.2 0.24 perf-profile.children.cycles-pp.xlog_write_partial
0.68 ? 3% +0.2 0.88 ? 8% perf-profile.children.cycles-pp.__xfs_trans_commit
0.03 ? 70% +0.2 0.24 perf-profile.children.cycles-pp.xlog_write_get_more_iclog_space
0.22 ? 4% +0.2 0.46 ? 8% perf-profile.children.cycles-pp.xfs_iomap_write_unwritten
0.33 ? 2% +0.2 0.58 ? 2% perf-profile.children.cycles-pp.complete
0.00 +0.3 0.27 ? 10% perf-profile.children.cycles-pp.sb_clear_inode_writeback
0.34 ? 4% +0.3 0.63 ? 6% perf-profile.children.cycles-pp.xfs_end_io
0.34 ? 5% +0.3 0.63 ? 7% perf-profile.children.cycles-pp.xfs_end_ioend
0.36 ? 3% +0.3 0.66 ? 2% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.14 ? 7% +0.3 0.45 ? 6% perf-profile.children.cycles-pp.__folio_end_writeback
0.37 ? 3% +0.3 0.70 perf-profile.children.cycles-pp.xlog_cil_push_work
0.18 ? 5% +0.4 0.55 perf-profile.children.cycles-pp.xlog_write
1.00 ? 14% +0.4 1.40 ? 15% perf-profile.children.cycles-pp.schedule_timeout
1.05 ? 13% +0.4 1.48 ? 14% perf-profile.children.cycles-pp.__wait_for_common
2.26 +0.4 2.69 perf-profile.children.cycles-pp.worker_thread
2.14 +0.4 2.59 perf-profile.children.cycles-pp.process_one_work
0.45 ? 5% +0.8 1.20 ? 10% perf-profile.children.cycles-pp.raid_end_bio_io
1.08 +0.9 2.03 ? 12% perf-profile.children.cycles-pp.__wake_up_common_lock
0.56 ? 5% +1.0 1.62 ? 8% perf-profile.children.cycles-pp.raid1_end_write_request
3.02 ? 5% +2.9 5.90 ? 3% perf-profile.children.cycles-pp._raw_spin_lock
65.88 +15.2 81.12 perf-profile.children.cycles-pp.xfs_file_fsync
68.12 +15.3 83.46 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
68.11 +15.3 83.45 perf-profile.children.cycles-pp.do_syscall_64
68.06 +15.4 83.47 perf-profile.children.cycles-pp.write
67.52 +15.4 82.94 perf-profile.children.cycles-pp.xfs_file_buffered_write
67.62 +15.4 83.05 perf-profile.children.cycles-pp.ksys_write
67.62 +15.4 83.04 perf-profile.children.cycles-pp.vfs_write
4.71 ? 5% +30.8 35.55 ? 2% perf-profile.children.cycles-pp.osq_lock
21.24 +31.0 52.28 perf-profile.children.cycles-pp.xfs_log_force_seq
5.08 ? 5% +31.2 36.30 ? 2% perf-profile.children.cycles-pp.__mutex_lock
6.52 ? 4% +32.0 38.55 ? 2% perf-profile.children.cycles-pp.__flush_workqueue
7.48 ? 4% +34.2 41.67 perf-profile.children.cycles-pp.xlog_cil_push_now
9.46 ? 4% +37.8 47.24 perf-profile.children.cycles-pp.xlog_cil_force_seq
52.30 -14.8 37.46 ? 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
18.13 ? 2% -9.3 8.79 ? 3% perf-profile.self.cycles-pp.intel_idle
2.45 ? 22% -1.6 0.83 ? 28% perf-profile.self.cycles-pp.update_sg_lb_stats
1.41 ? 9% -1.4 0.05 ? 45% perf-profile.self.cycles-pp.poll_idle
4.48 ? 3% -1.1 3.39 ? 6% perf-profile.self.cycles-pp.intel_idle_irq
0.60 ? 25% -0.4 0.16 ? 30% perf-profile.self.cycles-pp.idle_cpu
0.72 ? 6% -0.2 0.52 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.23 ? 24% -0.2 0.08 ? 26% perf-profile.self.cycles-pp._find_next_and_bit
0.18 ? 35% -0.2 0.03 ?103% perf-profile.self.cycles-pp.find_busiest_queue
0.58 ? 5% -0.2 0.43 ? 3% perf-profile.self.cycles-pp._raw_spin_lock
0.24 ? 4% -0.1 0.10 ? 7% perf-profile.self.cycles-pp.menu_select
0.18 ? 7% -0.1 0.09 ? 15% perf-profile.self.cycles-pp.update_sd_lb_stats
0.16 ? 7% -0.1 0.08 ? 7% perf-profile.self.cycles-pp.perf_adjust_freq_unthr_context
0.11 ? 21% -0.1 0.04 ? 72% perf-profile.self.cycles-pp.load_balance
0.14 ? 3% -0.1 0.08 ? 6% perf-profile.self.cycles-pp.llist_add_batch
0.13 ? 8% -0.0 0.08 ? 11% perf-profile.self.cycles-pp.enqueue_entity
0.21 ? 7% -0.0 0.17 ? 4% perf-profile.self.cycles-pp.__schedule
0.11 ? 9% -0.0 0.07 ? 6% perf-profile.self.cycles-pp.newidle_balance
0.10 ? 5% -0.0 0.06 ? 6% perf-profile.self.cycles-pp.native_sched_clock
0.18 ? 9% -0.0 0.15 ? 6% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.08 ? 11% -0.0 0.05 ? 8% perf-profile.self.cycles-pp.llist_reverse_order
0.06 ? 6% -0.0 0.03 ? 70% perf-profile.self.cycles-pp.bio_associate_blkg_from_css
0.08 ? 8% -0.0 0.06 ? 6% perf-profile.self.cycles-pp.finish_task_switch
0.08 ? 8% -0.0 0.06 ? 6% perf-profile.self.cycles-pp.___perf_sw_event
0.10 ? 8% -0.0 0.08 perf-profile.self.cycles-pp.__switch_to
0.09 ? 8% -0.0 0.07 ? 5% perf-profile.self.cycles-pp.update_rq_clock_task
0.05 ? 8% +0.0 0.07 ? 12% perf-profile.self.cycles-pp.xlog_cil_insert_items
0.08 ? 5% +0.0 0.11 ? 3% perf-profile.self.cycles-pp.sched_mm_cid_migrate_to
0.04 ? 45% +0.0 0.07 ? 10% perf-profile.self.cycles-pp.__flush_workqueue
0.02 ?141% +0.0 0.06 ? 9% perf-profile.self.cycles-pp.update_min_vruntime
0.02 ?141% +0.0 0.06 ? 7% perf-profile.self.cycles-pp.perf_trace_sched_wakeup_template
0.00 +0.1 0.05 perf-profile.self.cycles-pp.xa_load
0.00 +0.1 0.06 ? 9% perf-profile.self.cycles-pp.mutex_lock
0.00 +0.1 0.06 ? 7% perf-profile.self.cycles-pp.__mutex_lock
0.13 ? 5% +0.1 0.20 ? 2% perf-profile.self.cycles-pp.update_cfs_group
0.26 ? 4% +0.1 0.33 ? 3% perf-profile.self.cycles-pp.update_load_avg
0.00 +0.1 0.07 ? 10% perf-profile.self.cycles-pp.raid_end_bio_io
1.00 ? 6% +0.1 1.10 ? 2% perf-profile.self.cycles-pp.copy_to_brd
0.07 ? 5% +0.3 0.34 ? 8% perf-profile.self.cycles-pp.raid1_end_write_request
0.36 ? 2% +0.3 0.65 ? 3% perf-profile.self.cycles-pp.mutex_spin_on_owner
4.68 ? 5% +30.5 35.17 ? 2% perf-profile.self.cycles-pp.osq_lock
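
A note on reading the shift above: the raid1 write submission path has
nearly vanished (raid1_write_request: 13.48% -> 0.68% of children
cycles; io_schedule and folio_wait_writeback both fall below 0.1%),
while osq_lock under __mutex_lock/__flush_workqueue/xlog_cil_push_now
grows to roughly 35%. In other words, the bottleneck moves from sleeping
on queued raid1 writes to mutex contention on the XFS CIL push
workqueue, and cycles that previously hid in idle/iowait now show up as
system time in the fsync path.

This is consistent with the tested commit's title: when no write-intent
bitmap is configured, write bios can be issued directly from the
caller's context instead of being batched behind the plug for the raid1
daemon. Below is a minimal sketch of that fast path, reconstructed from
the two commit titles only (8295efbe68 factors out the submit helper,
7db922bae3 calls it early); the helper name, the bitmap predicate, and
the simplified signature are approximations, not the actual diff:

/*
 * Sketch only: reconstructed from the commit titles, not the real
 * patch. raid1_submit_write() stands in for the helper factored out
 * by 8295efbe68; md_bitmap_enabled() stands in for whatever check the
 * patch uses to detect that no bitmap is attached.
 */
static inline bool raid1_add_bio_to_plug(struct mddev *mddev,
                                         struct bio *bio)
{
        /*
         * With no bitmap there is nothing to persist before the write
         * may hit the disk, so submit the bio right away rather than
         * queuing it for the daemon to pick up later.
         */
        if (!md_bitmap_enabled(mddev->bitmap)) {
                raid1_submit_write(bio);
                return true;            /* submitted, caller is done */
        }

        return false;                   /* take the plug/bitmap slow path */
}

Submitting in the caller's context removes a queue hop and a daemon
wakeup per write, which would explain why many more writers now reach
xfs_file_fsync concurrently and contend on the CIL push mutex instead.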
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki